
\title {Moral Psychology \\ Lecture 08}
 
\maketitle
 


\def \ititle {Lecture 08}
\def \isubtitle {Moral Psychology}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
Course Structure

 

Part 1: psychological underpinnings of ethical abilities

Part 2: political consequences

Part 3: implications for ethics

There will also be a little on the evolution and the development of ethical abilities along the way. And we will also explore a little of the research on cultural diversity.
Cultural diversity is closely related to the issue about psychological underpinnings; the two are inseparable.

Could scientific discoveries undermine, or support, ethical principles?

Key source
Greene (2014), Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics.

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

Ok, that’s what the theory says. But what does it mean?

Additional assumptions

One process makes fewer demands on scarce cognitive resources than the other.

(Terminology: fast vs slow)

The slow process is responsible for consequentialist responses; the fast process for other responses.

What are ‘consequentialist responses’? Those responses which match the moral judgement that would be correct on a simple consequentialist theory.
 

\section{On Second Thoughts (Part II)}

1. Ethical judgements are explained by a dual-process theory ...

1.a ... where a faster process is affective, and

1.b less consequentialist than a slower process.

2. The fast process is unlikely to be reliable in unfamiliar* situations.

3. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We offered an objection to this in the last lecture.
We will now consider this claim, which is what Greene calls the Central Tension Principle ...

‘The Central Tension Principle:

Characteristically deontological judgments are preferentially supported by automatic emotional responses, while characteristically consequentialist judgments are preferentially supported by conscious reasoning and allied processes of cognitive control’

\citep[p.~699]{greene:2014_pointandshoot}

Greene, 2014 p. 699

content : process
deontological : fast
consequentialist : slow
impetus : fast
Newtonian : slow

evidence against fast = nonconsequentialist

‘Submarine (4/60)

You are responsible for the mission of a submarine [...] leading [...] from a control center on the beach. An onboard explosion has [...] collapsed the only access corridor between the upper and lower levels of the ship. [...] water is quickly approaching to the upper level of the ship. If nothing is done, 12 [extreme:60] people in the upper level will be killed.

[...] the only way to save these people is to hit a switch in which case the path of the water to the upper level will be blocked and it will enter the lower level of the submarine instead.

However, you realize that your brother and 3 other people are trapped in the lower level. If you hit the switch, your brother along with the 3 other people in the lower level (who otherwise would survive) will die [...]

Would you hit the switch?’

\citep[][supplementary materials]{bago:2019_intuitive}

Bago & de Neys, 2019 supplementary materials

first response under time pressure and cognitive load

second response under neither

‘Initial and Final Average Percentage (SD) of Utilitarian Responses in Study 1–4’
Note the effect of family membership pushing people away from the consequentialist response.

Bago & de Neys, 2019 table 1 (part)

But what does this mean for the Greene et al dual process theory?

First let me go back over the method ...

Stimulus: ethical dilemma [family / no-family] [moderate / extreme ratio]

E.g. a version of the trolley problem.
family / no-family: more nonconsequentialist responses in the family version.
moderate / extreme ratio: how many people are saved (e.g. 12 vs 60 in the submarine dilemma).

Initial response under time pressure + cognitive load

Confidence judgement

Solve dot task [end cognitive load task]

Second response: unbounded time + no cognitive load

Confidence judgement

Bago & de Neys, 2019 table 2

First response vs second response.

Bago & de Neys, 2019 table 2

Study 1: lots of consequentialist responses (= U)
Can also compute a ‘noncorrection’ rate for those responses which ended D (i.e. DD/(UD+DD)). In this study it’s 69.3%, i.e. the proportion of switchers *to* D was higher than the proportion of switchers to U!
Study 2: few consequentialist responses (= U). But reversals are still few.
Can also compute the ‘noncorrection’ rate for those responses which ended D (i.e. DD/(UD+DD)). Overall for all studies it’s 84.2%, i.e. the proportion of switchers *to* D was only 0.4% lower than to U!
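The noncorrection rate is just the formula DD/(UD+DD) applied to counts from the two-response paradigm. A minimal sketch; the formula is from the source, but the counts below are hypothetical, chosen only to illustrate the computation (they are not Bago & de Neys's data).

```python
def noncorrection_rate(dd, ud):
    """Fraction of final-D responses that were already D initially.

    dd: count of trials that started and ended deontological (DD).
    ud: count that started consequentialist (U) but ended D (UD).
    """
    return dd / (ud + dd)

# Hypothetical counts, for illustration only:
rate = noncorrection_rate(dd=90, ud=40)
print(f"{rate * 100:.1f}%")  # prints 69.2% with these made-up counts
```

A high noncorrection rate means most final deontological responses were deontological from the start, i.e. deliberate correction was rare.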

‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’

\citep[p.~1794]{bago:2019_intuitive}.

Bago & de Neys, 2019 p. 1794

Objection: consistency effects? No!

‘a potential consistency confound in the two-response paradigm. That is, when people are asked to give two consecutive responses, they might be influenced by a desire to look consistent [...] However, in our one-response pretest we observed 85.4% [...] of [consequentialist] responses on the conflict versions. This is virtually identical to the final [consequentialist] response rate of 84.5% [...] in our main two-response study (see main results).’

faster = less consequentialist?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

‘even if we were to unequivocally establish that [consequentialist] responses take more time than deontological responses, this does not imply that [consequentialist] responders generated the deontological response before arriving at the [consequentialist] one. They might have needed more time to complete the System 2 deliberations without ever having considered the deontological response’

Bago & de Neys, 2019 p. 1783

Evidence for fast = nonconsequentialist

Suter & Hertwig, 2011 figure 1

caption: ‘Fig. 1. Average proportion of deontological responses separately for conditions and type of moral dilemma (high- versus low-conflict personal and impersonal dilemmas) with data combined across the fast (i.e., time- pressure and self-paced-intuition) and slow conditions (no-time-pressure and self-paced-deliberation) in Experiments 1 and 2, respectively. Error bars represent standard errors. Only responses to high-conflict dilemmas differed significantly between the conditions’

‘participants in the time-pressure condition, relative to the no-time-pressure condition, were more likely to give ‘‘no’’ responses in high-conflict dilemmas’

\citep[p.~456]{suter:2011_time}.

faster = less consequentialist?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

How to resolve the apparent contradiction?
possible resolution: preference for inaction under time pressure?
We will take this idea up when considering the CNI model
BUT : if so, why didn’t Bago & de Neys find this?
Puzzle remains IMO

‘even if we were to unequivocally establish that [consequentialist] responses take more time than deontological responses, this does not imply that [consequentialist] responders generated the deontological response before arriving at the [consequentialist] one. They might have needed more time to complete the System 2 deliberations without ever having considered the deontological response’

\citep[p.~1783]{bago:2019_intuitive}.

Bago & de Neys, 2019 p. 1783

This doesn’t make sense to me: Suter & Hertwig, 2011 show more nonconsequentialist judgements under time pressure. If they needed more time, why did they make nonconsequentialist responses?

‘unless you’re prepared to say “yes” to the footbridge case [i.e. Drop], your automatic settings are still running the show, and any manual adjustments that you’re willing to make are at their behest’

\citep[p.~723]{greene:2014_pointandshoot}

Greene, 2014 p. 723

But are they?

1. Ethical judgements are explained by a dual-process theory ...

1.a ... where a faster process is affective, and

1.b less consequentialist than a slower process.

2. The fast process is unlikely to be reliable in unfamiliar* situations.

3. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We have been considering this claim; so far the results are inconclusive.

Could scientific discoveries undermine, or support, ethical principles?

My conclusions

There are two promising objections to Greene’s arguments.

But discoveries in moral psychology can tell us about

why we have ethical abilities

and about the processes which lead to ethical judgements.

Recognising this makes it hard to pursue projects based on Rawls’ ‘reflective equilibrium’.

Graham et al, 2013 table 2.1

Note the claim that moral foundations arose in evolutionary history as solutions to specific challenges faced by humans’ ancestors.

Could scientific discoveries undermine, or support, ethical principles?

My conclusions

There are two promising objections to Greene’s arguments.

But discoveries in moral psychology can tell us about

why we have ethical abilities

and about the processes which lead to ethical judgements.

Recognising this makes it hard to pursue projects based on Rawls’ ‘reflective equilibrium’.

One standard in ethics: Rawls’ reflective equilibrium idea
‘one may think of moral theory at first [...] as the attempt to describe our moral capacity [...] what is required is a formulation of a set of principles which, when conjoined to our beliefs and knowledge of the circumstances, would lead us to make these judgments with their supporting reasons were we to apply these principles conscientiously and intelligently’ \citep[p.~41]{rawls:1999_theory}; see \citet{singer:1974_sidgwick} for critical discussion.

‘one may think of physical theory [moral theory] at first [...]
as the attempt to describe our perceptual capacity [moral capacity]

Interesting: seems like Rawls’ project requires the methods of psychology (and is moral psychology)

[...]

what is required is

a formulation of a set of principles which,

when conjoined to our beliefs and knowledge of the circumstances,

would lead us to make these judgments with their supporting reasons

were we to apply these principles’

Rawls, 1999 p. 41

The idea of moral theory as an attempt to describe our moral capacity is great. The problem is thinking this can be done by characterising the judgements.
Given multiple moral foundations, or multiple processes, we would not necessarily expect a consistent set of principles. Indeed it is unclear that logical consistency in ethics is particularly valuable.

Could scientific discoveries undermine, or support, ethical principles?

My conclusions

There are two promising objections to Greene’s arguments.

But discoveries in moral psychology can tell us about

why we have ethical abilities

and about the processes which lead to ethical judgements.

Recognising this makes it hard to pursue projects based on Rawls’ ‘reflective equilibrium’.

conclusion

In conclusion, ...

Could scientific discoveries undermine, or support, ethical principles?

‘one may think of moral theory at first [...]
as the attempt to describe our moral capacity

Putting these together gives us a very straightforward answer. Yes, because ethical principles derive from reflection on our moral capacities, and it is moral psychology (not introspection) that tells us what these are.
 

\section{The CNI Model: Beyond Trolley/Transplant}

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We will consider this claim

old: switch vs footbridge

new : CNI contrast (separately manipulate outcomes and norms (proscription/prescription))

\citep{gawronski:2017_consequences}

Not consequentialist = deontological?

‘a given judgment cannot be categorized as utilitarian without confirming its property of being sensitive to consequences, which requires a comparison of judgments across dilemmas with different consequences. Similarly, a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms, which requires a comparison of judgments across dilemmas with different moral norms’

\citep[p.~365]{gawronski:2017_consequences}.

Gawronski et al, 2017 p. 365

Gawronski et al, 2017 figure 1
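Figure 1 of Gawronski et al. (2017) presents the CNI model as a processing tree. A rough sketch of that tree's logic, on my reading: with probability C the response is driven by consequences; failing that, with probability N, by the moral norm; failing both, a general action/inaction tendency applies (inaction with probability I). The function below is an illustrative reconstruction, not the authors' code, and the parameter values are invented.

```python
def p_action(C, N, I, norm, benefits_outweigh):
    """Predicted probability of choosing action under the CNI tree.

    C: sensitivity to consequences; N: sensitivity to moral norms;
    I: general preference for inaction (all probabilities in [0, 1]).
    norm: 'proscriptive' (forbids the action) or 'prescriptive' (demands it).
    benefits_outweigh: True if the action's benefits outweigh its costs.
    """
    act_if_consequences = 1.0 if benefits_outweigh else 0.0
    act_if_norm = 1.0 if norm == 'prescriptive' else 0.0
    return (C * act_if_consequences          # consequences drive the response
            + (1 - C) * N * act_if_norm      # otherwise norms drive it
            + (1 - C) * (1 - N) * (1 - I))   # otherwise act with prob 1 - I

# A 'classic' dilemma cell: proscriptive norm, benefits outweigh costs.
# Parameter values are made up for illustration.
print(f"{p_action(C=0.3, N=0.4, I=0.5, norm='proscriptive', benefits_outweigh=True):.2f}")
```

The point of the design: because the four dilemma types cross norms (proscriptive/prescriptive) with outcomes (benefits greater/smaller than costs), the three parameters can be estimated separately, which is what lets Gawronski et al. attribute the cognitive-load effect to I rather than C.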

Gawronski et al, 2017 figure 4

‘The only significant effect in these studies was a significant increase in participants’ general preference for inaction as a result of cognitive load. Cognitive load did not affect participants’ sensitivity to morally relevant consequences’

\citep[p.~363]{gawronski:2017_consequences}.

‘cognitive load influences moral dilemma judgments by enhancing the omission bias, not by reducing sensitivity to consequences in a utilitarian sense’

\citep[p.~363]{gawronski:2017_consequences}.

‘Instead of reducing participants’ sensitivity to consequences in a utilitarian sense, cognitive load increased participants’ general preference for inaction. ’

\citep[p.~365]{gawronski:2017_consequences}.

Gawronski et al, 2017 p. 363

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We have been considering this claim; findings from the CNI model speak against it.

faster = less consequentialist?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Gawronski et al, 2017 : no

Can we resolve the apparent contradiction by preference for inaction under time-pressure?

I don’t see how. Both studies used nonconsequentialist = deontological. So any preference for inaction under time-pressure should have had the same effect in both studies!
These studies’ results appear to conflict (time-pressure does/doesn’t make people less consequentialist).
These studies’ results appear to conflict (time-pressure has barely any effect / does make people less consequentialist [because of a preference for inaction]).

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We have been considering this claim; findings from the CNI model speak against it.
 

\section{Emotion vs The Linguistic Analogy: A False Contrast?}

Two issues

1. Dual process theory vs Dwyer’s moral faculty

2. Is the fast process driven by emotion or something like a ‘moral faculty’? (Are these options even exclusive?)

NB Dwyer appears to accept that almost nothing is known about the Moral Faculty.

What makes it possible for biological creatures like us to make moral judgments at all?

Q: What makes it possible for us to make moral judgments at all?

The Options (according to Dwyer)

1. ‘moral judgments issue from a consciously accessible process of reasoning over explicit moral principles’ (p. 294)

2. The same, but tacit knowledge.

3. ‘extending the [...] Dual Process Model in cognitive science to moral judgment (Haidt and Bjorklund, 2008)’ (p. 274)

4. [Dwyer’s leading view] moral judgements reflect ‘the structure of the Moral Faculty’, that is, they reflect ‘constraints of the system of the mind/brain that makes human moral cognition possible’ (p. 294)

Dwyer’s objection to dual process models

‘failure to make clear the causal mechanism that gets us from intuition to moral judgment’

‘the link between intuition and judgment remains a mystery, unless it is simply conceded that moral judgments just are moral intuitions made conscious.’

\citep[p.~277]{dwyer:2009_moral}

Dwyer 2009, p. 277

*TODO Link this to Hindriks’s insight: ‘Such inconsistencies are brought to light by anticipatory guilt feelings.’ \citep[p.~245]{hindriks:2015_how}

Dwyer’s objection to Greene’s dual process view.

Illustrates a big chasm between two groups of theorists (which has nothing directly to do with the role of emotion).
‘when investigators assume that subjects make Utilitarian judgments and Kantian judgments, they in effect reify these two historically contingent normative theories as psychological constructs. But, surely it cannot be that these philosophical theories have much to do with the mechanics of moral cognition.’
\citep[p.~293]{dwyer:2009_moral}

Dwyer 2009, p. 293

My proposal: analogy with unreflective mathematical judgements

//- move later:

Consequence: if this analogy is right, we can still conclude that the intuitions are not relevant in unfamiliar* situations.

(Compare heuristics with mathematical subitizing judgements: heuristics can give wild answers in unfamiliar* situations. It is hard to see the analog magnitude system doing this, but object indexes do give wrong answers in unfamiliar* situations, e.g. when features change or things dissolve.)

But I can’t get into that because I really need to explain the no emotions view.
[for the mathematical model: I think it lines up with the idea that humans find moral infractions disgusting; disgust just is a moral reaction (not a cue for something moral). See \citet{chapman:2009_bad}: ‘In sum, participants showed both subjective (self-report) and objective (facial motor) signs of disgust that were proportional to the degree of unfairness they experienced. These results bear a strong resemblance to the findings of the first two experiments, suggesting that moral transgressions trigger facial motor activity that is also evoked by distasteful and basic disgust stimuli, even though the “bad taste” left by immorality is abstract rather than literal. ... in humans, the rejection impulse characteristic of distaste may have been co-opted and expanded to reject offensive stimuli in the social domain’ \citep{chapman:2009_bad}]
Here is the alternative in outline (following the analogy with mathematical abilities (subitizing, analog magnitude))

Perhaps moral disgust is the result of ‘an evolutionary process whereby a preexisting structure assumes a new functional role without changing its basic form’

\citep[p.~301]{chapman:2013_things}
I take this to be distinct from the heuristic idea, which involves substituting one attribute for another. Here the idea is that a feeling of disgust is in and of itself one kind of moral reaction (although not necessarily the only kind of moral reaction).
 

\section{Definitions of Moral Psychology}
[three definitions]
Start with an encyclopedia definition

‘moral psychology—the study of human thought and behavior in ethical contexts’

Doris et al, 2017

\citep{doris:2017_morala}.
In perfect philosophy style, there are two encyclopedia articles that give different definitions (both in the SEP) ...

‘Moral psychology [...] concerns how we see or fail to see moral issues, why we act or fail to act morally, and whether and to what extent we are responsible for our actions’

Superson, 2014

\citep{superson:2014_feminist}

‘moral psychology is the study of the psychological aspects of morality.’

Tiberius, 2014 p. 3

\citep[p.~3]{tiberius:2014_moral}
So there you have three definitions. What should we make of them?
Back to the first definition. Is this any good?
What is an ethical context? A context in which ethical considerations apply, perhaps. But then every context is an ethical context.
Compare: ‘mathematical psychology is the study of human thought and behaviour in mathematical contexts’!
This definition makes little sense to me. I think we are interested in ethical behaviours and thoughts, not behaviours and thoughts in ethical contexts.
Nothing wrong with this definition. It just isn’t what this course is about.
I like this third definition best. But what is morality?
We might think we can rely on philosophers for this. Step 1: Do ethics, discover what morality is. Step 2: Now you can ask about the psychology of it.
But if you’ve done ethics, you should be convinced that philosophers collectively really have no idea what morality is. We could pick a favourite account, but that would be arbitrary.
We need a better starting point ...

linguistic / mathematical / ethical

abilities

The analogy is helpful. But what are ethical abilities? This is partly clear enough: you can act in ways that are, or are perceived to be, right or wrong; you can judge, keep score, respond and feel.
But we should also allow that there is room for discoveries about which ethical abilities particular kinds of individual possess. For example, what ethical abilities do dogs have, or do humans in the first year of life have?
example of an ethical ability:

‘Because it is wrong.’

1 Capacity to identify moral considerations as reasons for action.
I could have done it. Doing it would have been very advantageous, and cost me nothing. But I didn’t. Asked about why I didn’t do it, I say ‘Because it is wrong.’
You can be sceptical about this in all kinds of ways, but you cannot deny that the statement carries some force.

Another example of an ethical ability:

distinguish conventional from moral violations

‘findings on the moral/conventional distinction [...] have been replicated numerous times using a wide variety of stimuli [...] Furthermore, the research apparently plumbs a fairly deep feature of moral judgment. For moral violations are treated as distinctive along several different dimensions. Moral violations attract high ratings on seriousness, they are regarded as having wide applicability, they have a status of authority independence, and they invite different kinds of justifications from conventional violations. Finally, this turns out to be a persistent feature of moral judgment. It is found in young and old alike. Thus, it seems that the capacity for drawing the moral/conventional distinction is part of basic moral psychology’ \citep[p.~6]{nichols:2004_sentimental}.

Another ethical ability:

susceptibility to others’ moral reasoning

Important for understanding (a) how there can be convergence in ethical standards, and (b) how ethical abilities support living in large, cooperative but non-kin groups.
Definition used in this course:

Moral psychology is the study of psychological aspects of ethical abilities.

Let me return to the comparison, for I think this is helpful in two ways.
FIRST, you can investigate psychological aspects of linguistic (or mathematical) abilities irrespective of your views on the nature of linguistic (or mathematical) truths. Likewise for ethical abilities. You might be sceptical about the very existence of ethical truths, or you might be some kind of realist, or you might have any other kind of view about the nature of ethical truths. It’s quite unlikely to matter. Any more than it would matter for studying psychological aspects of mathematical abilities.
SECOND, when it comes to ethics (and religion), people quite often assume that psychological discoveries challenge beliefs about the nature of morality (or of divinity). We should be cautious here. Discoveries about psychological aspects of linguistic (or mathematical) abilities have not generally informed discussions about the nature of language (or of mathematics). It may turn out that discoveries about psychological aspects of ethical abilities have little bearing on the nature of morality.
If you came here expecting to turn your moral life upside down, or if you are hoping to justify your first-order ethical views, prepare to be disappointed.

linguistic / mathematical / ethical

abilities

Nice contrast from Dwyer. (Note that our concern is not limited to judgements, though!)

‘The moral psychologist wants to know about the processes [...] underlying moral judgment. The moral theorist (typically) wants to know about which moral principles or theories are [...] consistent [with] people’s moral judgments’

\citep[p.~293]{dwyer:2009_moral}

Dwyer 2009, p. 293

Recap: Definition used in this course:

Moral psychology is the study of psychological aspects of ethical abilities.

Warning: the term ‘moral psychology’ is used in all kinds of ways, many unrelated to this course.

see also: list of questions handout

The other thing that will help you understand what we are doing is the list of questions

premise

‘human morality is derived from or constrained by multiple innate mental systems, each shaped by a different evolutionary process’

\citep[p.~58]{graham:2013_chapter}

Graham et al, 2013 p. 58

‘the moral mind is partially structured in advance of experience so that five (or more) classes of social concerns are likely to become moralized during development.’

\citep[p.~381]{haidt:2007_moral}

Haidt & Joseph, 2007 p. 381

Be careful: they did not run tests for metric or scalar invariance, so we do not know whether the comparison is valid!
*todo move?

‘During Darius’ reign, he invited some Greeks who were present to a conference, and asked them how much money it would take for them to be prepared to eat the corpses of their fathers; they replied that they would not do that for any amount of money. Next, Darius summoned some members of the Indian tribe known as Callatiae, who eat their parents, and asked them in the presence of the Greeks, with an interpreter present so that they could understand what was being said, how much money it would take for them to be willing to cremate their fathers’ corpses; they cried out in horror and told him not to say such appalling things’

\citep[Book III, §38]{herodotus:2008_histories}

Herodotus, The Histories Bk III, §38