Operationalising Moral Foundations Theory
To use Moral Foundations Theory to identify and explain cultural differences, we need a way to measure individual variation in how moral judgements are made. The Moral Foundations Questionnaire aims to meet this need.
By the end of this section you should know what the Moral Foundations Questionnaire is and how attempts have been made to validate it. You should also be aware of some objections to its use as a tool for identifying cultural differences.
If the slides are not working, or you prefer them full screen, please try this link.
Notes
Read the next section, Moral Foundations Theory: An Approach to Cultural Variation, first if you are reading this outside the lecture.
According to Feinberg & Willer (2013), researchers have found evidence that Moral Foundations Theory is true. What is this evidence?
The first step towards finding evidence is to operationalise the theory. To this end, Atari et al. (2023) developed a Moral Foundations Questionnaire (called MFQ-2). (This is the successor to Haidt & Graham (2007)’s original Moral Foundations Questionnaire, which can be found in Graham et al. (2011). For reasons we’ll get into below, the first questionnaire did not entirely succeed.)
For each foundation, there are a number of questions. Here is an illustration of some of them:
‘For each of the statements below, please indicate how well each statement describes you or your opinions. Response options: Does not describe me at all (1); slightly describes me (2); moderately describes me (3); describes me fairly well (4); and describes me extremely well (5).’
‘We should all care for people who are in emotional pain.’
‘I think the human body should be treated like a temple, housing something sacred within.’
‘It makes me happy when people are recognized on their merits.’
‘Everyone should feel proud when a person in their community wins in an international competition.’
‘I believe that one of the most important values to teach children is to have respect for authority.’
You can see the full questionnaire in Atari et al. (2023).
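To make the scoring model concrete, here is a minimal sketch in Python. It assumes, following the common convention for Likert-scale instruments of this kind, that each foundation’s score is the mean of its item responses on the 1–5 scale; the item names and the item-to-foundation groupings below are invented for illustration and are not taken from MFQ-2 (for the real items, see Atari et al. (2023)):

```python
from statistics import mean

# Hypothetical item-to-foundation mapping, for illustration only.
# The real MFQ-2 has multiple items per foundation; see Atari et al. (2023).
ITEM_FOUNDATIONS = {
    "q1": "Care",
    "q2": "Care",
    "q3": "Purity",
    "q4": "Loyalty",
    "q5": "Authority",
}

def score(responses):
    """Average the 1-5 Likert responses for each foundation's items."""
    by_foundation = {}
    for item, value in responses.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{item}: responses must be on the 1-5 scale")
        by_foundation.setdefault(ITEM_FOUNDATIONS[item], []).append(value)
    return {f: mean(values) for f, values in by_foundation.items()}

# A respondent who answers only three items:
scores = score({"q1": 4, "q2": 2, "q3": 5})  # Care = mean(4, 2); Purity = 5
```

On this (assumed) scoring model, comparing two groups’ mean scores on a foundation only makes sense if both groups interpret the items the same way, which is why the validity checks discussed below matter.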
If there are (at least) six moral foundations, and if the various questions represent those foundations, then we can predict that the questionnaire will exhibit:
- internal validity (roughly, are the patterns in subjects’ answers consistent with the theory that they are answering on the basis of six foundations?[1]);
- test–retest reliability (are individuals likely to give the same answers at different times?);
- predictive power (roughly, are subjects’ answers on other questionnaires correlated with the conceptually related foundations?); and
- measurement invariance (roughly, if you compare two groups’ answers, do differences in their answers indicate meaningful differences between the groups rather than merely differences in the way they interpret the questions?).
Atari et al. (2023) give evidence of internal validity for MFQ-2, which is further supported by Dogruyol et al. (2024). Atari et al. (2023) also offer good evidence of predictive power for MFQ-2. In Study 2, these researchers also demonstrate that the questionnaire (MFQ-2) does exhibit scalar invariance for five of the six foundations—the exception is purity (p. 1167). This indicates that the questionnaire can be used to compare the mean strengths of emphasis on foundations between different populations.
Because MFQ-2 is relatively new, there is limited research using it. For now we should be cautious in accepting the results of the few studies that use it. (Note that the original moral foundations researchers would disagree.)
At this point you should (1) understand how research based on a questionnaire could provide a range of evidence in support of predictions generated by Moral Foundations Theory; and (2) have a preliminary understanding of the evidence obtained using MFQ-2.
A Complication: MFQ-1 vs MFQ-2
Earlier (pre-2023) research on Moral Foundations Theory used Haidt & Graham (2007)’s original Moral Foundations Questionnaire, which is now called MFQ-1. (You can find MFQ-1 in Graham et al. (2011).)
Although very widely used (even in some work published after 2023), MFQ-1 does not appear to be a reliable tool.
The first problem concerns internal validity. Although MFQ-1 did pass tests of internal validity in various countries (Graham et al., 2011; Yilmaz, Harma, Bahçekapili, & Cesur, 2016), there were several exceptions. Iurino & Saucier (2020) collected new samples across 27 countries but report: ‘we were not able to replicate Graham et al.’s (2011) results indicating that a five-factor model is a suitable approach to modelling the moral foundations’ (p. 6). Relatedly, Harper & Rhodes (2021) failed to find the five-factor structure in a sample from the UK.
The second is a failure to demonstrate measurement invariance. Without measurement invariance, we are not justified in using a questionnaire to compare two groups. We are particularly interested in one kind of measurement invariance, scalar invariance, as this would justify using the Moral Foundations Questionnaire to compare mean scores on a foundation.[2] That is, it would justify us in drawing conclusions like ‘conservatives put more weight on purity than liberals’.[3] Attempts to establish the scalar invariance of MFQ-1 have been unsuccessful (Davis et al., 2016; Doğruyol, Alper, & Yilmaz, 2019; Davis, Dooley, Hook, Choe, & McElroy, 2017; Iurino & Saucier, 2020, Table 4). One illustration of this is a failed attempt to compare US and Iranian participants:
‘Iranians and Americans do not interpret MFQ items in nearly similar ways, [...] means cannot be meaningfully compared.’ (Atari, Graham, & Dehghani, 2020, p. 373)
Failure of the original Moral Foundations Questionnaire to exhibit scalar invariance may be due in part to lack of diversity in the sample used to develop it:
‘Items of the MFQ [Moral Foundations Questionnaire] were refined on the basis of a sample with participants from a variety of countries, but the sample was predominately White (i.e., 87%). Furthermore, the sample involved people who visited the team’s website, which inevitably involves some selection bias, potentially associated with ideological background’ (Davis et al., 2017, p. 128; compare Kivikangas, Fernández-Castilla, Järvelä, Ravaja, & Lönnqvist, 2021, p. 84).
In the lecture we also consider how some items on MFQ-1 are likely to elicit different reactions from different groups for reasons that have nothing to do with their ethical attitudes.
Overall, we should be extremely cautious about drawing conclusions about cultural variation from results obtained with the original Moral Foundations Questionnaire (MFQ-1) alone. Is there reason to trust them? If so, what is the reason?
How MFQ-2 Differs from MFQ-1
The new questionnaire is based on six rather than five foundations. The change is essentially to split what was previously Fairness into two things: Equality (which concerns equal treatment) and Proportionality (which concerns being rewarded in proportion to one’s contribution).[4]
Glossary
References
Endnotes
For a clear, nontechnical intro to confirmatory factor analysis see Gregorich (2006). (Note that you are not expected to understand this.) ↩︎
See Lee (2018): ‘Ascertaining scalar invariance allows you to substantiate multi-group comparisons of factor means (e.g., t-tests or ANOVA), and you can be confident that any statistically significant differences in group means are not due to differences in scale properties.’ ↩︎
See Iurino & Saucier (2020, p. 2): ‘A finding of measurement invariance would provide more confidence that use of the MFQ across cultures can shed light on meaningful differences between cultures rather than merely reflecting the measurement properties of the MFQ.’ ↩︎
The new foundations are called Care, Equality, Proportionality, Loyalty, Authority and Purity (Atari et al., 2023, Table 2, p. 1161). These researchers cite Meindl, Iyer, & Graham (2019) as justifying the distinction between equality and proportionality. I am not confident I understand how these are distinct. ↩︎