
\title {Moral Psychology \\ Lecture 03}
 
\maketitle
 

Lecture 03:

Moral Psychology

\def \ititle {Lecture 03}
\def \isubtitle {Moral Psychology}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
Default interpretation of the question for essay 1: which, if any, of these three models is best supported by the evidence?
As we saw last week, the puzzles arise if we ask,

‘Does emotion influence moral judgment
or merely motivate morally relevant action?’

Proponents of emotion’s influence face a puzzle about structure ...

emotion proponents

[structure puzzle] Why do patterns in humans’ intuitive judgements reflect legal principles they are unaware of?

linguistic analogy fans

[emotion puzzle] Why do feelings of disgust influence unreflective moral judgements?

And why do we feel disgust in response to moral transgressions?

What evidence might bear on this question?

What evidence might indicate that humans have an ethics module (on analogy with the language module)?

dumbfounding

resistance to revisability

structure

 

Moral Dumbfounding

 
\section{Moral Dumbfounding}
 

What is moral dumbfounding?

Moral dumbfounding is ‘the stubborn and puzzled maintenance of a judgment without supporting reasons’ \citep[p.~1]{haidt:2000_moral}.
‘Moral dumbfounding occurs when you make an ethical judgement but either cannot provide reasons or provide reasons that are ‘only weakly associated’ with your judgement’ \citep{dwyer:2009_moral}.
To understand moral dumbfounding, we need to review the experiment that gave rise to the term.

Haidt et al (2000; unpublished!)’s tasks

NB: I’m deliberately not mentioning the Heinz dilemma at this stage, for drama.

Control: ‘Heinz dilemma (should Heinz steal a drug to save his dying wife?)’

morally provocative but ‘harmless’: Incest; Cannibal

nonmorally provocative but ‘harmless’: Roach; Soul

‘(Incest) depicts consensual incest between two adult siblings, and [...] (Cannibal) depicts a woman cooking and eating a piece of flesh from a human cadaver donated for research to the medical school pathology lab at which she works. These stories were chosen because they were expected to cause the participant to quickly and intuitively "see-that" the act described was morally wrong. Yet since the stories were carefully written to be harmless, the participant would be prevented from finding the usual “reasoning-why” about harm that participants in Western cultures commonly use to justify moral condemnation‘ \citep{haidt:2000_moral}.
‘In addition we used two "non-moral intuition" tasks: Roach and Soul. [...] In [Roach] the participant is asked to drink from a glass of juice both before and after a sterilized cockroach has been dipped into it. In the Soul task the participant is offered two dollars to sign a piece of paper and then rip it up; on the paper are the words "I, (participant's name), hereby sell my soul, after my death, to Scott Murphy [the experimenter], for the sum of two dollars." At the bottom of the page a note was printed that said: "this is not a legal or binding contract"’ \citep[p.~6]{haidt:2000_moral}

Method: ask whether wrong; counter argue; questionnaire

Method: (1) ask whether the act was wrong (or whether the participant would perform the action); (2) record the answer and any argument given; (3) the experimenter argues against the participant’s position; (4) questionnaire after each task (‘The questionnaire asked the participant to respond on a Likert scale as to her level of confusion, irritation, and confidence in her judgment, and to what extent her judgment was based on reasoning or on a "gut feeling."’).
Do try this at home!
But is Isabel dumbfounded? Maybe briefly. But is that significant? Hard to measure ...
To understand moral dumbfounding, we need to review the experiment that gave rise to the term.

Haidt et al (2000; unpublished!)’s tasks

NB: I’m deliberately not mentioning the Heinz dilemma at this stage, for drama.

Control: ‘Heinz dilemma (should Heinz steal a drug to save his dying wife?)’

morally provocative but ‘harmless’: Incest; Cannibal

nonmorally provocative but ‘harmless’: Roach; Soul

Method: ask whether wrong; counter argue; questionnaire

This condition is often forgotten but it is important for two reasons.
First, a natural assumption is that we should be able to test hypotheses about relative levels of dumbfounding rather than about absolute levels. Second, ... we’ll see later when covering Dwyer’s argument.
On the importance of the control (Heinz) task: ‘Planned contrasts were performed between the Heinz task and each of the other four tasks, because we predicted that the Heinz task would be unique in encouraging analytical reasoning’ \citep[p.~8]{haidt:2000_moral}

Results

NB: unpublished data

‘it often happened that participants made “unsupported declarations”, e.g., “It’s just wrong to do that!” or “That’s terrible!”

Note the comparison with the control!

They made the fewest such declarations in Heinz, and they made significantly more such declarations in the Incest story.’

Results ctd

NB: unpublished data

Informal observation: ‘participants often directly stated that they were dumbfounded, i.e., they made a statement to the effect that they thought an action was wrong but they could not find the words to explain themselves’ (p. 9)

‘Participants made the fewest such statements in Heinz (only 2 such statements, from 2 participants), while they made significantly more such statements in the Incest (38 statements from 23 different participants), Cannibalism (24 from 11), and Soul stories (22 from 13).’

Importance of the method: this is a control

Study 2 (not reported!):

Cognitive load increased the level of moral dumbfounding without changing subjects’ judgments.

‘In Study 2 [which is not reported in the draft] we repeated the basic design while exposing half of the subjects to a cognitive load—an attention task that took up some of their conscious mental work space—and found that this load increased the level of moral dumbfounding without changing subjects’ judgments or their level of persuadability.’ \citep[p.~198]{haidt:2008_social}.
This will be important later when we are thinking about dual process theories of moral abilities.

replication / extension / review?

Royzman et al, 2015: a more recent study doubts that dumbfounding occurs \citep{royzman:2015_curious}. This study involves three experiments. They partially replicated Haidt et al, 2000, then went on to test whether subjects really believe that the incest is harmless.
Note that, unlike Haidt et al, 2000, these researchers did not use the comparison with Heinz!

‘a definitionally pristine bout of MD is likely to be an extraordinarily rare find, one featuring a person who doggedly and decisively condemns the very same act that she has no prior normative reasons to dislike’

\citep[p.~311]{royzman:2015_curious}

Royzman et al, 2015 p. 311

‘3 of [...] 14 individuals [without supporting reasons] disapproved of the siblings having sex and only 1 of 3 (1.9%) maintained his disapproval in the “stubborn and puzzled” manner.’

\citep[p.~309]{royzman:2015_curious}

Royzman et al, 2015 p. 309

Warning: Note the absent comparison with the Heinz dilemma.

summary: moral dumbfounding

we know the definition;

some evidence --- weak, but dumbfounding probably occurs.

‘Moral dumbfounding occurs when you make an ethical judgement but either cannot provide reasons or provide reasons that are ‘only weakly associated’ with your judgement’ \citep{dwyer:2009_moral}.

why is this relevant?

What evidence might bear on this question?

What evidence might indicate that humans have an ethics module (on analogy with the language module)?

dumbfounding

resistance to revisability

structure

How would the existence of dumbfounding support the view that there is a moral module?

I.e. how does dumbfounding support the linguistic analogy?
 

A Language Analogy: Dwyer’s Argument

 
\section{A Language Analogy: Dwyer’s Argument}
 

‘linguistics--a domain in which ordinary human beings are also famously dumbfounded.’

\citep[p.~279]{dwyer:2009_moral}

Dwyer 2009, p. 279

‘Moral Dumbfounding suggests two desiderata for an adequate account of moral judgment; namely, it: \begin{quote} (a) must not entail what is patently false, namely, that such judgments are the conclusions of explicitly represented syllogisms, one or more premises of which are moral principles, that ordinary folk can articulate, and (b) must accommodate subjects’ grasp of the structure of the scenes they evaluate.’ \end{quote} ‘The Linguistic Analogy, which [...] holds that [ethical] judgments are reflective of the structure of the Moral Faculty, satisfies these desiderata’ \citep[p.~294]{dwyer:2009_moral}.

‘Moral Dumbfounding suggests two desiderata for an adequate account of moral judgment; namely, it:

(a) must not entail what is patently false, namely, that such judgments are the conclusions of explicitly represented syllogisms, one or more premises of which are moral principles, that ordinary folk can articulate, and

(b) must accommodate subjects’ grasp of the structure of the scenes they evaluate.’

‘The Linguistic Analogy, which [...] holds that [ethical] judgments are reflective of the structure of the Moral Faculty, satisfies these desiderata.’

Dwyer 2009, p. 294

This seems not to follow from dumbfounding at all, but from reflection on patterns of judgement (see the discussion of Mikhail from the last lecture).
go back to the evidence on dumbfounding ...

How well does the evidence support Dwyer’s position?

(Never trust a philosopher!)

Complication: Dwyer cites ‘Haidt’s (2001) study’ but this is actually a review paper.

Dwyer probably intends to refer to Haidt et al, 2000.

From the abstract:

‘It was hypothesized that participants’ judgments would be highly consistent with their reasoning on the moral reasoning dilemma, but that judgment would separate from reason and follow intuition in the other four tasks.’

So far this is consistent with Dwyer’s view, but ...
Why other?
This part does not appear to be consistent with her view at all. It implies we are back to the dual process view after all.
NB: Dwyer is explicitly attacking the dual process view!
Key Disanalogy with language: ethical reasoning seems important for exercising some ethical abilities
[topic: moral reasoning. Hindriks 2015?]

‘Moral Dumbfounding suggests two desiderata for an adequate account of moral judgment; namely, it:

(a) must not entail what is patently false, namely, that such judgments are the conclusions of explicitly represented syllogisms, one or more premises of which are moral principles, that ordinary folk can articulate, and

(b) must accommodate subjects’ grasp of the structure of the scenes they evaluate.’

‘The Linguistic Analogy, which [...] holds that [ethical] judgments are reflective of the structure of the Moral Faculty, satisfies these desiderata.’

Dwyer 2009, p. 294

?

Am I reading too much into the Heinz findings?
What is the role of reasoning in moral judgement? Some appear to have suggested that moral reasoning merely serves to confirm prior intuitions, special cases aside \citep{greene:2007_secret,haidt:2001_emotional}.\footnote{Although in fact Haidt’s view is more interesting. Compare \citet[p.~181]{haidt:2008_social} ‘Moral discussion is a kind of distributed reasoning, and moral claims and justifications have important effects on individuals and societies’; yet they go on to write that ‘moral reasoning is an effortful process (as opposed to an automatic process), usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment’ \citet[p.~189]{haidt:2008_social}.} Opposing these views, \citet{hindriks:2015_how} argues that in ordinary cases of moral disengagement, moral reasoning provides anticipatory rationalization.

post-hoc rationalization

Moral reasoning merely serves to confirm prior intuitions, in nearly all cases (Haidt; Greene)

Some theorists have proposed that moral reasoning merely serves to confirm prior intuitions, in nearly all cases (Haidt; Greene). If they are right, then Dwyer’s suggestion that moral judgements are not consequences of explicit reasoning involving known principles appears sound. However ...

ante hoc reasoning

In ordinary cases of moral disengagement, moral reasoning provides anticipatory rationalization (Hindriks, 2015)

‘Moral disengagement occurs in situations in which someone is tempted to flout his own moral standards, and thereby to frustrate his desire to maintain self-consistency’ \citep[p.~243]{hindriks:2015_how}.

moral dumbfounding does not decide this issue

Importantly for us, moral dumbfounding does not decide this issue. So Dwyer may be right to hold that moral judgements are not consequences of explicit reasoning involving known principles. But she is wrong to think this is a consequence of moral dumbfounding.

‘Moral Dumbfounding suggests two desiderata for an adequate account of moral judgment; namely, it:

(a) must not entail what is patently false, namely, that such judgments are the conclusions of explicitly represented syllogisms, one or more premises of which are moral principles, that ordinary folk can articulate, and

(b) must accommodate subjects’ grasp of the structure of the scenes they evaluate.’

‘The Linguistic Analogy, which [...] holds that [ethical] judgments are reflective of the structure of the Moral Faculty, satisfies these desiderata.’

Dwyer 2009, p. 294

?

The question was whether I am reading too much into Heinz.
Moral dumbfounding does not suggest this!
So where does this leave the linguistic analogy?

What does moral dumbfounding actually show?

My view.

Moral dumbfounding shows that some ethical judgements are not consequences of reasoning from known principles

Other phenomena (e.g. moral disengagement) indicate that some ethical judgements are consequences of reasoning from known principles

‘Moral Dumbfounding suggests two desiderata for an adequate account of moral judgment; namely, it:

(a) must not entail what is patently false, namely, that such judgments are the conclusions of explicitly represented syllogisms, one or more premises of which are moral principles, that ordinary folk can articulate, and

(b) must accommodate subjects’ grasp of the structure of the scenes they evaluate.’

‘The Linguistic Analogy, which [...] holds that [ethical] judgments are reflective of the structure of the Moral Faculty, satisfies these desiderata.’

Dwyer 2009, p. 294

I was asking where does this leave the linguistic analogy? My sense is that it leaves the Linguistic Analogy in a bad place. The linguistic analogy, as Dwyer construes it, seems in conflict with the point that some ethical judgements are consequences of reasoning from known principles.
How do I know? Because Dwyer says it ‘satisfies the desiderata’ of not entailing that ethical judgements are consequences of reasoning from known principles. (Admittedly there is a quantifier ambiguity in this statement.)
quantifier ambiguity: All? or Any?
What evidence might bear on this question?

What evidence might indicate that humans have an ethics module (on analogy with the language module)?

dumbfounding

resistance to revisability

structure

People say this is evidence for the Linguistic Analogy, the idea that there is a moral module. But dumbfounding involves contrasting Heinz with Incest/Dog, and this contrast seems to me to actually be a reason against accepting the Linguistic Analogy.
Let me say that again. Properly understood (in terms of the source everyone cites, Haidt et al 2000 unpublished), moral dumbfounding provides reason against accepting the Linguistic Analogy.
There’s a nice summary of further issues concerning the prospects for a linguistic analogy to consider in your handout.
Further reading (not covered in lectures): ‘the issues [a linguistic] analogy raises for moral theory are (1) whether the useful unit of analysis for moral theory is an individual’s I-grammar, in contrast, for example, with the moral conventions of a group; (2) whether and how such a moral grammar might associate structural descriptions of actions, situations, etc. with normative assessments; (3) whether and how the rules of such a moral grammar might involve recursive embedding of normative assessments; and (4) whether it is useful to distinguish moral ‘competence’ from moral ‘performance,’ using these terms in the technical senses employed in linguistic theory’ \citep[p.~283]{roedder:2010_linguistics}.
\citet{dupoux:2007_universal} provide further objections to the Linguistic Analogy. \citet{dwyer:2008_dupoux} reply, and \citet{dupoux:2008_response} reply to the reply.
The important thing for me isn’t whether you find the argument compelling or not. There’s surely much more to say. It’s that the motivation for it gives us a good question, a puzzle even.

puzzle

Why are ethical judgements sometimes, but not always, a consequence of reasoning from known principles?

interim conclusion

Q: What do adult humans compute that enables their moral intuitions to track moral attributes (such as wrongness)?

Sinnott-Armstrong et al (2010): their emotional responses

Mikhail (2007; 2014): moral attributes themselves

Each view is a response to a different puzzle.

As we saw last week, the puzzles arise if we ask,

‘Does emotion influence moral judgment
or merely motivate morally relevant action?’

Proponents of emotion’s influence face a puzzle about structure ...

emotion proponents

[structure puzzle] Why do patterns in humans’ intuitive judgements reflect legal principles they are unaware of?

linguistic analogy fans

[emotion puzzle] Why do feelings of disgust influence unreflective moral judgements?

And why do we feel disgust in response to moral transgressions?

Q: What do adult humans compute that enables their moral intuitions to track moral attributes (such as wrongness)?

Sinnott-Armstrong et al (2010): their emotional responses

Mikhail (2007; 2014): moral attributes themselves

Each view is a response to a different puzzle.

Neither seems fully able to explain the puzzles

Our task is to develop a theory that can solve the puzzles, is theoretically coherent and empirically motivated, and generates novel testable predictions.

 

Dual Process Theories

 
\section{Dual Process Theories}
 
Start with a simple causal model.
‘response 1’ is a variable representing which response the subject will give. [Which values it takes will depend on what sort of response it is (e.g. a verbal response, proactive gaze, button press). We can think of it as taking one value for each response of interest and one for any other response.]
‘process 1’ and ‘process 2’ are variables which each represent whether a certain kind of ethical process will occur and, if so, what its outcome is.
And the arrows show that the probability that response 1 will have a certain value is influenced by the value of the variables process 1 and process 2 (and by other things not included in the model). So it should be possible to intervene on the value of ‘process 1’ in order to bring about a change in the value of ‘response 1’.
[I’ve used thicker and thinner arrows informally to indicate stronger and weaker dependence. Strictly speaking the width has no meaning and this model doesn’t specify exactly how the values of variables are related, only that they are.]
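The causal model just described can be sketched in code. This is a toy illustration only: the occurrence probabilities, outcome values, and the rule combining the two processes are all made up for the sake of the example, since the model itself only claims that the response depends on the processes, not how.

```python
import random

def process_1():
    """Toy ethical process: occurs often; returns an outcome or None (did not occur)."""
    return random.choice(["permissible", "impermissible"]) if random.random() < 0.8 else None

def process_2():
    """A second, distinct toy process: occurs less often."""
    return random.choice(["permissible", "impermissible"]) if random.random() < 0.4 else None

def response_1(p1, p2):
    """The response variable depends on both processes (the arrows);
    here, arbitrarily, process 1 dominates when it occurs (the 'thicker arrow')."""
    if p1 is not None:
        return p1
    if p2 is not None:
        return p2
    return "other"

# Observation: sample the model as it stands.
observed = response_1(process_1(), process_2())

# Intervention: fix the value of 'process 1' and the distribution of
# 'response 1' changes -- this is what the arrow from process 1 claims.
intervened = [response_1("impermissible", process_2()) for _ in range(100)]
assert all(r == "impermissible" for r in intervened)
```

In this toy version intervening on process 1 fully settles the response; the model itself claims only some dependence and leaves the exact functional form open.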

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

Ok, that’s what the theory says. But what does it mean?
Actually we don’t need to consider more than one response for the present since there is no evidence concerning multiple types of response (alas!).
cognitive load study \citep{greene:2008_cognitive}

Answer the dilemma (see handout)

Ask them to read and respond to the dilemma
\subsection{Dilemma}
‘You are part of a group of ecologists who live in a remote stretch of jungle. The entire group, which includes eight children, has been taken hostage by a group of paramilitary terrorists. One of the terrorists takes a liking to you. He informs you that his leader intends to kill you and the rest of the hostages the following morning.
‘He is willing to help you and the children escape, but as an act of good faith he wants you to kill one of your fellow hostages whom he does not like. If you refuse his offer all the hostages including the children and yourself will die. If you accept his offer then the others will die in the morning but you and the eight children will escape.
‘Would you kill one of your fellow hostages in order to escape from the terrorists and save the lives of the eight children?’ \citep{koenigs:2007_damage}

Terminology

‘consequentialist response’ = yes, kill one of your fellow hostages

[For later: \citet{gawronski:2017_consequences}’s criticism about binary choices not properly reflecting the full range of possibilities (e.g. because a negative answer might reflect a preference for inaction).]

Additional assumptions

One process makes fewer demands on scarce cognitive resources than the other.

(Terminology: fast vs slow)

The slow process is responsible for consequentialist responses; the fast for other responses.

What are ‘consequentialist responses’? Those responses expressing a moral judgement that would be correct according to a simple consequentialist theory.

Prediction: Increasing cognitive load will selectively slow consequentialist responses

Greene et al 2008, figure 1

time pressure study

Additional assumptions

One process makes fewer demands on scarce cognitive resources than the other.

(Terminology: fast vs slow)

The slow process is responsible for consequentialist responses; the fast for other responses.

Prediction: Limiting the time available to make a decision will reduce consequentialist responses.

time pressure study

Trémolière and Bonnefon, 2014 figure 4

‘The model detected a significant effect of time pressure, p = .03 (see Table 1), suggesting that the slope of utilitarian responses was steeper for participants under time pressure. As is visually clear in Figure 4, participants under time pressure gave less utilitarian responses than control participants to scenarios featuring low kill–save ratios, but reached the same rates of utilitarian responses for the highest kill–save ratios.’ \citep[p.~927]{tremoliere:2014_efficient}
\textbf{*todo*} [save for later, more drama: [also mention \citep{gawronski:2018_effects} p.~1006 ‘reinterpretation’ and p.~992 descriptive vs mechanistic]] \citet[p.~669]{gawronski:2017_what} argue for an alternative interpretation: The central findings of \citet{tremoliere:2014_efficient} ‘show that outcomes did influence moral judgments, but only when participants were under cognitive load or time pressure (i.e., the white bars do not significantly differ from the gray bars within the low load and no time pressure conditions, but they do significantly differ within the high load and time pressure conditions). Thus, a more appropriate interpretation of these data is that cognitive load and time pressure increased utilitarian responding, which stands in stark contrast to the widespread assumption that utilitarian judgments are the result of effortful cognitive processes (Greene et al., 2008; Suter & Hertwig, 2011).’
So this is our dual process theory of ethical abilities.

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

 

Dual Process Theories Meet the Puzzles

 
\section{Dual Process Theories Meet the Puzzles}
 
So far we encountered 2½ puzzles. Can a dual process theory help us to resolve the puzzles?

2½ puzzles

2½ rather than 3 because I'm not sure Structure and Dumbfounding are really distinct puzzles (nor is Emotion Puzzle really one puzzle, I suspect).

[emotion puzzle] Why do feelings of disgust influence unreflective moral judgements? (And why do we feel disgust in response to moral transgressions?)

[structure puzzle] Why do patterns in humans’ unreflective ethical judgements reflect legal principles they are unaware of?

[dumbfounding puzzle] Why are ethical judgements sometimes, but not always, a consequence of reasoning from known principles?

I think it is clear that our core dual process theory cannot solve them. The key is to elaborate on the nature of the processes.
So this is our dual process theory of ethical abilities.

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

‘a dual-process approach in which moral judgment is the product of both intuitive and rational psychological processes, and it is the product of what are conventionally thought of as ‘affective’ and ‘cognitive’ mechanisms’

\citep[p.~48]{cushman:2010_multi}.

Cushman et al, 2010 p. 48

I like to think of this contrast in terms of demands on scarce cognitive resources.
Here is the link to emotion.
We can think of Cushman et al, 2010 as elaborating on the core dual process theory.

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

And one of the processes is more intuitive than the other.

And the more intuitive process is driven by emotion.

Does this help us with the puzzles?
So far we encountered 2½ puzzles. Can a dual process theory help us to resolve the puzzles?

2½ puzzles

2½ rather than 3 because I'm not sure Structure and Dumbfounding are really distinct puzzles (nor is Emotion Puzzle really one puzzle, I suspect).

[emotion puzzle] Why do feelings of disgust influence unreflective moral judgements? (And why do we feel disgust in response to moral transgressions?)

[structure puzzle] Why do patterns in humans’ unreflective ethical judgements reflect legal principles they are unaware of?

[dumbfounding puzzle] Why are ethical judgements sometimes, but not always, a consequence of reasoning from known principles?

The dual process theory was just designed to resolve this puzzle.

Note: distinguish the core dual process theory from further claims.

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

And one of the processes is more intuitive than the other.

And the more intuitive process is driven by emotion.

distinct processes rely on distinct neural circuits which can be spatially distinguished using fMRI

affective moral processes involve attribute substitution (i.e. heuristics)

You do not need to accept all claims in order to advocate a dual process theory.

dual process vs dual system?

‘We use the term “system” only as a label for collections of cognitive processes that can be distinguished by their speed, their controllability, and the contents on which they operate’

\citep[p.~267]{kahneman:2005_model}.

Kahneman & Frederick, 2005 p. 267

 

Dual Process Theories: the Process Dissociation Approach

 
\section{Dual Process Theories: the Process Dissociation Approach}
 

Greene’s dual process theory

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

recall the definition

The outputs of one process are more consequentialist than those of another.

Conway & Gawronski 2013, figure 1

Note that if we just provide ‘incongruent’ dilemmas, we cannot distinguish all the different possibilities.
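This is why process dissociation needs both dilemma types. On my reading of Conway & Gawronski’s processing-tree model (a sketch only; the example response proportions below are invented for illustration), the utilitarian (U) and deontological (D) parameters are recovered like this:

```python
def pd_parameters(p_unacc_congruent, p_unacc_incongruent):
    """Estimate utilitarian (U) and deontological (D) inclinations from
    the proportion of 'harm is unacceptable' judgements.

    Processing-tree equations (as I read the model):
      congruent dilemmas (harm does NOT maximise outcomes, so both
      inclinations condemn it):
        P(unacceptable | congruent)   = U + (1 - U) * D
      incongruent dilemmas (harm DOES maximise outcomes, so only
      deontology condemns it):
        P(unacceptable | incongruent) = (1 - U) * D
    Solving the pair of equations gives U and D.
    """
    U = p_unacc_congruent - p_unacc_incongruent
    D = p_unacc_incongruent / (1 - U)  # assumes U < 1
    return U, D

# Illustration: a participant who condemns harm in 90% of congruent but
# only 30% of incongruent dilemmas comes out as U = 0.6, D = 0.75.
U, D = pd_parameters(0.9, 0.3)
```

With incongruent dilemmas alone we would only ever observe the product (1 − U) × D, so a single response rate cannot separate the two processes; adding congruent dilemmas supplies the second equation.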

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

The outputs of one process are more consequentialist than those of another.

Prediction 1: higher cognitive load will reduce the dominance of the more consequentialist process.

Conway & Gawronski 2013, figure 3

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

The outputs of one process are more consequentialist than those of another.

Prediction 1: higher cognitive load will reduce the dominance of the more consequentialist process.

Additional assumption: The faster process is an affective process.

Prediction 2: higher empathy will increase the dominance of the less consequentialist process.

Missing additional assumption needed!

Conway & Gawronski 2013, figure 3

important consequence: if manipulating emotion can selectively influence one of two ethical processes, doesn’t this count as indirect evidence against the causal models on which emotion does not ‘influence’ judgement?
[The idea that manipulating emotion has a selective effect on one process supports the claim that emotion is not affecting (A) scenario analysis, (B) interpretation of question or (C) strength of pre-made judgement. After all, no such hypothesis predicts the selective effect.]
[Also: \citep{gawronski:2018_effects}: ‘(a) sensitivity to consequences, (b) sensitivity to moral norms, or (c) general preference for inaction versus action regardless of consequences and moral norms (or some combination of the three). Our results suggest that incidental happiness influences moral dilemma judgments by reducing sensitivity to moral norms’ (p. 1003).]
Two levels: (1) could do this in principle; (2) let’s see what disgust does to the different factors

conclusion

In conclusion, ...

‘Does emotion influence moral judgment
or merely motivate morally relevant action?’

dual process theory: maybe both

2½ puzzles

2½ rather than 3 because I'm not sure Structure and Dumbfounding are really distinct puzzles (nor is Emotion Puzzle really one puzzle, I suspect).

[emotion puzzle] Why do feelings of disgust influence unreflective moral judgements? (And why do we feel disgust in response to moral transgressions?)

[structure puzzle] Why do patterns in humans’ unreflective ethical judgements reflect legal principles they are unaware of?

[dumbfounding puzzle] Why are ethical judgements sometimes, but not always, a consequence of reasoning from known principles?