
A Linguistic Analogy

What do humans compute that enables them to track moral attributes? In this section we introduce a second hypothesis which answers this question, one based on an analogy between ethical and linguistic abilities. The hypothesis is due to Mikhail (2014). Considering the hypothesis also provides an argument for the view that moral attributes are accessible.

This recording is also available on stream (no ads; search enabled). Or you can view just the slides (no audio or video). You should not watch the recording this year, it’s all happening live (advice).

If the video isn’t working you could also watch it on youtube. If the slides are not working, or you prefer them full screen, please try this link.


Several researchers have developed theories about humans’ ethical abilities based on analogies with their linguistic abilities (including Mikhail, 2007; Dwyer, 2009; and Roedder & Harman, 2010).

Consider two questions of the same form but about different domains:

  1. What do humans compute that enables them to track moral attributes?
  2. What do humans compute that enables them to track syntactic[1] attributes?

A standard answer to the second question, (2), is: they compute the syntactic attributes themselves. Of course, all or nearly all humans are unaware of computing syntactic attributes. But they do in fact do this, probably thanks to a language module.

Mikhail (2014) offers some considerations which can be used to argue for a parallel view about moral attributes:

Humans track moral attributes by computing moral attributes in much the way that they track linguistic attributes (which perhaps involves a language module).

This is an alternative to Sinnott-Armstrong, Young, & Cushman’s (2010, §2.1) hypothesis about the Affect Heuristic.

What Is Mikhail’s (Best) Argument?

  1. ‘adequately specifying the kinds of harm that humans intuitively grasp requires a technical legal vocabulary’ (Mikhail, 2007, p. 146).

  2. The abilities underpinning unreflective ethical judgements must involve analysis in accordance with rules.

  3. Humans do not know the rules.

  4. Therefore: the analysis is achieved by a modular process.

Mikhail’s argument for the first premise, that ‘adequately specifying the kinds of harm that humans intuitively grasp requires a technical legal vocabulary’ (Mikhail, 2007, p. 146), depends on an analysis of pairs of dilemmas like the Trolley/Transplant pair presented in the recording. Many subjects make apparently inconsistent judgements when presented with such pairs of dilemmas: they appear to say that killing one to save five people is both permissible and impermissible. Mikhail argues that the inconsistency is merely apparent. For there is a morally significant difference between the dilemmas: one (Transplant) involves purposive battery while the other (Trolley) does not. This supports the idea that the pattern of judgements, far from being inconsistent, reflects the operation of principles and the identification of structure in the scenarios.[2]

An Objection to Mikhail

Moral judgements are subject to order effects: which of a pair of dilemmas is presented first sometimes influences subjects’ responses to them (Petrinovich & O’Neill, 1996, Study 2; Wiegmann, Okan, & Nagel, 2012). This is true even for professional philosophers (Schwitzgebel & Cushman, 2015). No such effect is predicted by Mikhail’s hypothesis that subjects’ moral intuitions are a consequence of their correctly identifying structure and applying principles consistently.

Mikhail’s hypothesis therefore at least requires qualification. This means his argument does not provide sufficient grounds to conclude that humans track moral attributes by computing moral attributes.

What Should We Conclude?

None of the arguments we have yet considered are sufficient to establish the view that moral intuitions are a consequence of a moral module. So while the idea that there is an analogy between ethical and linguistic abilities remains intriguing, we are not in a position to accept or reject it without further arguments or discoveries.

Appendix: What Are Modules?

They are ‘the psychological systems whose operations present the world to thought’; they ‘constitute a natural kind’; and there is ‘a cluster of properties that they have in common’ (Fodor, 1983, p. 101):

  • domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
  • limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
  • informational encapsulation (modules are unaffected by general knowledge or representations in other modules)
  • innateness (roughly, the information and operations of a module are not straightforwardly consequences of learning; but see Samuels (2004)).



Affect Heuristic : In the context of moral psychology, the Affect Heuristic is this principle: ‘if thinking about an act [...] makes you feel bad [...], then judge that it is morally wrong’ (Sinnott-Armstrong et al., 2010). These authors hypothesise that the Affect Heuristic explains moral intuitions.
A different (but related) Affect Heuristic has also been postulated to explain how people judge how risky things are: the more dread you feel when imagining an event, the riskier you should judge it to be (see Pachur, Hertwig, & Steinmann, 2012, which is discussed in The Affect Heuristic and Risk: A Case Study).
domain specific : A process is domain specific to the extent that there are limits on the range of functions its outputs typically serve. Domain-specific processes are commonly contrasted with general-purpose processes.
inaccessible : An attribute is inaccessible in a context just if it is difficult or impossible, in that context, to discern substantive truths about that attribute. For example, in ordinary life and for most people the attribute being further from Kilmery (in Wales) than Steve’s brother Matt is would be inaccessible.
See Kahneman & Frederick (2005, p. 271): ‘We adopt the term accessibility to refer to the ease (or effort) with which particular mental contents come to mind.’
informational encapsulation : One process is informationally encapsulated from some other processes to the extent that there are limits on the one process’ ability to consume information available to the other processes. (See Fodor, 1983; Clarke, 2020, p. 5ff.)
innate : Not learned. While everyone disagrees about what innateness is (see Samuels, 2004), on this course a cognitive ability is innate just if its developmental emergence is not a direct consequence of data-driven learning.
module : A module is standardly characterised as a cognitive system which exhibits, to a significant degree, a set of features including domain specificity, limited accessibility, and informational encapsulation. Contemporary interest in modularity stems from Fodor (1983). Note that there is now a wide range of incompatible views, and little agreement among researchers on what modules are or even which features are characteristic of them.
moral intuition : According to this lecturer, a person’s intuitions are the claims they take to be true independently of whether those claims are justified inferentially. And a person’s moral intuitions are simply those of their intuitions that concern ethical matters.
According to Sinnott-Armstrong et al. (2010, p. 256), moral intuitions are ‘strong, stable, immediate moral beliefs.’
tracking an attribute : For a process to track an attribute is for the presence or absence of the attribute to make a difference to how the process unfolds, where this is not an accident. (And for a system or device to track an attribute is for some process in that system or device to track it.)
Tracking an attribute is contrasted with computing it. Unlike tracking, computing typically requires that the attribute be represented. (The distinction between tracking and computing is a topic of Moral Intuitions and an Affect Heuristic.)
Transplant : A dilemma. Five people are going to die but you can save them all by cutting up one healthy person and distributing her organs. Is it okay to cut her up?
Trolley : A dilemma; also known as Switch. A runaway trolley is about to run over and kill five people. You can hit a switch that will divert the trolley onto a different set of tracks where it will kill only one. Is it okay to hit the switch?


Clarke, S. (2020). Cognitive penetration and informational encapsulation: Have we been failing the module? Philosophical Studies.
Dwyer, S. (2009). Moral Dumbfounding and the Linguistic Analogy: Methodological Implications for the Study of Moral Judgment. Mind & Language, 24(3), 274–296.
Fodor, J. (1983). The modularity of mind: An essay on faculty psychology. Cambridge, Mass.; London: MIT Press.
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The cambridge handbook of thinking and reasoning (pp. 267–293). Cambridge: Cambridge University Press.
Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143–152.
Mikhail, J. (2014). Any Animal Whatever? Harmful Battery and Its Elements as Building Blocks of Moral Cognition. Ethics, 124(4), 750–786.
Pachur, T., Hertwig, R., & Steinmann, F. (2012). How Do People Judge Risks: Availability Heuristic, Affect Heuristic, or Both? Journal of Experimental Psychology: Applied, 18(3), 314–330.
Petrinovich, L., & O’Neill, P. (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17(3), 145–171.
Roedder, E., & Harman, G. (2010). Linguistics and moral theory. In J. M. Doris, M. P. R. Group, & others (Eds.), The moral psychology handbook (pp. 273–296). Oxford: OUP.
Samuels, R. (2004). Innateness in cognitive science. Trends in Cognitive Sciences, 8(3), 136–141.
Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137.
Sinnott-Armstrong, W., Young, L., & Cushman, F. (2010). Moral intuitions. In J. M. Doris, M. P. R. Group, & others (Eds.), The moral psychology handbook (pp. 246–272). Oxford: OUP.
Wiegmann, A., Okan, Y., & Nagel, J. (2012). Order effects in moral judgment. Philosophical Psychology, 25(6), 813–836.


  1. As an example of a syntactic attribute, consider being a (grammatical) sentence. For example, the sequence of words ‘He is a waffling fatberg of lies’ is a sentence whereas the sequence of words ‘A waffling fatberg lies of he is’ is not a sentence. These are syntactic attributes of the two sequences of words. ↩︎

  2. Mikhail (2014) provides more detail on the argument for this premise. (I also provide some detail in the recording.) ↩︎