
Which Moral Scenarios Are Unfamiliar?

There are at least two reasons to suspect that the moral scenarios philosophers typically consider are unfamiliar situations.

This recording is also available on stream (no ads; search enabled). Or you can view just the slides (no audio or video). You should not watch the recording this year, it’s all happening live (advice).

If the video isn’t working you could also watch it on youtube.

If the slides are not working, or you prefer them full screen, please try this link.



Do we have reason to suspect that the moral scenarios philosophers typically consider are unfamiliar situations?

Reason 1: Philosophical Methods

Even on the view most charitable to the argument’s likely opponents (e.g. Railton, 2014), some moral scenarios will be bizarre enough to count as unfamiliar. Although we do not know which scenarios these are (as far as I can tell), philosophers’ interest in fine distinctions and edge cases increases the probability of hitting on unfamiliar situations.[1]

Reason 2: Signature Limits

We also know that fast processes in other domains exhibit a range of signature limits even in adults, and that these are unaffected by expertise. Examples include:

- physical cognition, where fast judgements about motion reflect impetus-like principles (Kozhevnikov & Hegarty, 2001);
- number cognition, where core systems are subject to ratio and set-size limits (Feigenson et al., 2004); and
- belief reasoning, where fast mindreading fails to track certain kinds of belief (Low et al., 2016).

This is no accident: any broadly inferential process must make a trade-off between speed and accuracy, as more than a century of cognitive science has found (Henmon, 1911; Link & Tindall, 1971; Heitz, 2014).[2]
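To make the trade-off vivid, here is a toy simulation of the line-length task mentioned in footnote 2. It is a deliberately simplified sketch, not any particular model from the cognitive science literature: a judgement is reached by summing noisy evidence samples, so gathering fewer samples means responding faster but less accurately.

```python
import random

rng = random.Random(1)

def judge(true_diff, n_samples, noise=1.0):
    """Judge which of two lines is longer by summing noisy evidence samples.

    true_diff is the (small) real difference in length; each sample is that
    difference corrupted by Gaussian noise. Returns True when the judgement
    is correct, i.e. the summed evidence has the same sign as true_diff.
    """
    evidence = sum(rng.gauss(true_diff, noise) for _ in range(n_samples))
    return (evidence > 0) == (true_diff > 0)

def accuracy(n_samples, trials=10_000, true_diff=0.1):
    """Proportion of correct judgements over many simulated trials."""
    return sum(judge(true_diff, n_samples) for _ in range(trials)) / trials

# More samples take longer to gather but yield more accurate judgements.
for n in (1, 25, 125):
    print(f"{n:4d} samples: accuracy {accuracy(n):.2f}")
```

With these (arbitrary) parameters, accuracy climbs steadily as the number of samples grows: waiting for more evidence buys correctness at the cost of speed, which is just the trade-off Henmon and his successors measured experimentally.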

Consequently, even for experts with much experience, some quite ordinary-seeming scenarios may turn out to be unfamiliar. We should therefore suspect that at least some moral scenarios philosophers consider will turn out to involve signature limits, which would make them unfamiliar.

Going Deeper

Greene (2017) takes up the topic in detail.

Which comparison: Linguistic or Physical?

The slides and recording use a comparison between ethical and physical cognition. This comparison helps in arriving at the two reasons above.

How would things look if instead we compared ethical to linguistic cognition? As we saw in Cognitive Miracles: When Are Fast Processes Unreliable?, on any standard view it is not possible that a linguistic theory could discover that fast processes embody a systematically distorted view of the linguistic. One consequence is that there is no easy way to make sense of the idea that there could be unfamiliar problems in the linguistic domain. So accepting the comparison with linguistic cognition might well lead us to reject this premise of the argument and to deny that we have any reason to suspect that the moral scenarios philosophers typically consider are unfamiliar situations.

Would accepting the comparison with linguistic cognition allow us to defend some proposed methods for gaining ethical knowledge such as Foot’s, Kamm’s or Thomson’s Other Method of Trolley Cases?

While accepting the comparison with linguistic cognition would mean that philosophers can avoid the conclusion of the loose reconstruction of Greene (2014)’s argument, it leads to a distinct, no less pressing challenge.

In linguistics, there is growing awareness that it is a mistake to rely on expert judgements (see, for example, Wasow & Arnold, 2005; Gibson & Fedorenko, 2010; and Dąbrowska, 2010). Understanding how fast linguistic processes work requires careful experiment, not introspective guesswork. Similar considerations apply in the case of ethics.

Therefore, even if we accept the comparison with linguistic cognition, we can still reach a conclusion that is close to, and has much the same implications for ethics as, the conclusion of the loose reconstruction of Greene (2014)’s argument:

[alternative conclusion] Premises about judgements about particular moral scenarios need to be supported by carefully controlled experiments if they are to be used in ethical arguments where the aim is to establish knowledge of their conclusions.

Ask a Question

Your question will normally be answered in the question session of the next lecture.

More information about asking questions.


automatic : As we use the term, a process is automatic just if whether or not it occurs is to a significant extent independent of your current task, motivations and intentions. To say that mindreading is automatic is to say that it involves only automatic processes. The term ‘automatic’ has been used in a variety of ways by other authors: see Moors (2014, p. 22) for a one-page overview, Moors & De Houwer (2006) for a detailed theoretical review, or Bargh (1992) for a classic and very readable introduction.
cognitively efficient : A process is cognitively efficient to the degree that it does not consume working memory and other scarce cognitive resources.
fast : A fast process is one that is to some interesting degree cognitively efficient (and therefore likely also to some interesting degree automatic). Such processes are also sometimes characterised as able to yield rapid responses.
Since automaticity and cognitive efficiency are matters of degree, it is only strictly correct to identify some processes as faster than others.
The fast-slow distinction has been variously characterised in ways that do not entirely overlap (even individual authors have offered differing characterisations at different times; e.g. Kahneman, 2013; Morewedge & Kahneman, 2010; Kahneman & Klein, 2009; Kahneman, 2002): as its advocates stress, it is a rough-and-ready tool rather than an element in a rigorous theory.
signature limit : A signature limit of a system is a pattern of behaviour the system exhibits which is both defective given what the system is for and peculiar to that system. A signature limit of a model is a set of predictions derivable from the model which are incorrect, and which are not predictions of other models under consideration.
unfamiliar problem : An unfamiliar problem (or situation) is one ‘with which we have inadequate evolutionary, cultural, or personal experience’ (Greene, 2014, p. 714).


Bargh, J. A. (1992). The Ecology of Automaticity: Toward Establishing the Conditions Needed to Produce Automatic Processing Effects. The American Journal of Psychology, 105(2), 181–199.
Dąbrowska, E. (2010). Naive v. Expert intuitions: An empirical study of acceptability judgments. The Linguistic Review, 27(1), 1–23.
Feigenson, L., Dehaene, S., & Spelke, E. S. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307–314.
Gibson, E., & Fedorenko, E. (2010). Weak quantitative standards in linguistics research. Trends in Cognitive Sciences, 14(6), 233–234.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726.
Greene, J. D. (2017). The rat-a-gorical imperative: Moral intuition and the limits of affective learning. Cognition, 167, 66–77.
Heitz, R. P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience, 8, 150.
Henmon, V. A. C. (1911). The relation of the time of a judgment to its accuracy. Psychological Review, 18(3), 186–201.
Hogarth, R. M. (2010). Intuition: A Challenge for Psychological Research on Decision Making. Psychological Inquiry, 21(4), 338–353.
Kahneman, D. (2002). Maps of bounded rationality: A perspective on intuitive judgment and choice. In T. Frangsmyr (Ed.), Les Prix Nobel 2002 (pp. 416–499). Stockholm, Sweden: Nobel Foundation.
Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus; Giroux.
Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515–526.
Kozhevnikov, M., & Hegarty, M. (2001). Impetus beliefs as default heuristics: Dissociation between explicit and implicit knowledge about motion. Psychonomic Bulletin & Review, 8(3), 439–453.
Link, S. W., & Tindall, A. D. (1971). Speed and accuracy in comparative judgments of line length. Perception & Psychophysics, 9(3), 284–288.
Low, J., Apperly, I. A., Butterfill, S. A., & Rakoczy, H. (2016). Cognitive Architecture of Belief Reasoning in Children and Adults: A Primer on the Two-Systems Account. Child Development Perspectives, 10(3), 184–189.
Moletti, G. (2000). The Unfinished Mechanics of Giuseppe Moletti: An Edition and English Translation of His Dialogue on Mechanics, 1576, translated by W. R. Laird. Toronto: University of Toronto Press.
Moors, A. (2014). Examining the mapping problem in dual process models. In Dual process theories of the social mind (pp. 20–34). Guilford.
Moors, A., & De Houwer, J. (2006). Automaticity: A Theoretical and Conceptual Analysis. Psychological Bulletin, 132(2), 297–326.
Morewedge, C. K., & Kahneman, D. (2010). Associative processes in intuitive judgment. Trends in Cognitive Sciences, 14(10), 435–440.
Railton, P. (2014). The Affective Dog and Its Rational Tale: Intuition and Attunement. Ethics, 124(4), 813–859.
Wasow, T., & Arnold, J. (2005). Intuitions in linguistic argumentation. Lingua, 115(11), 1481–1496.


  1. This may be a virtue of philosophical practice. Comparison with the physical case indicates that considering what turn out to be unfamiliar situations may be important for making discoveries (at least, Moletti (2000, p. 147) seems justifiably excited about vertical motion). ↩︎

  2. To illustrate, suppose you were required to judge which of two only very slightly different lines was longer. All other things being equal, making a faster judgement would involve being less accurate, and being more accurate would require making a slower judgement. (This idea is due to Henmon (1911), who has been influential although he didn't actually get to manipulate speed experimentally because of ‘a change of work’ (p. 195).) ↩︎