
\title {Moral Psychology \\ Lecture 09}
 
\maketitle
 

Lecture 09:

Moral Psychology

\def \ititle {Lecture 09}
\def \isubtitle {Moral Psychology}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
 
\section{Origins of Moral Psychology}
 

Are there innate drivers of morality?

innate = not learnt
This is a little vague. Suppose, hypothetically, that some motor abilities are innate and contribute to moral development ...
Drivers of what? ‘Morality’ here must be interpreted as referring to some kind of moral ability.
So the question stands in need of interpretation. And the best way to interpret it is to start from the evidence and work backwards. Is there a way to interpret the question that makes it both interesting and well supported?

Key source: Hamlin, 2013. ‘Moral Judgment and Action in Preverbal Infants and Toddlers’.

Method: consider responses of preverbal infants.

moral sense vs moral judgement

So far nearly all the research we considered was about moral judgement in one way or another.
Practically, this won’t work when we’re considering preverbal infants.
\emph{moral sense}: a ‘tendency to see certain actions and individuals as right, good, and deserving of reward, and others as wrong, bad, and deserving of punishment’ \citep[p.~186]{hamlin:2013_moral}.

Moral abilities---including a moral sense---evolved to aid group living, specifically to motivate and sustain cooperative action.

Three requirements

- prosociality (helpfulness towards others)

- discrimination between pro- and anti-social acts

- retribution

adapted from Hamlin, 2013

Warneken & Tomasello, 2006 supplementary materials

Subjects: 18 month olds

Warneken & Tomasello, 2006 figure 1

‘For each task, there was a corresponding control task in which the same basic situation was present but with no indication that this was a problem for the adult (14). This ensured that the infant’s motivation was not just to reinstate the original situation or to have the adult repeat the action, but rather to actually help the adult with his problem.’
caption: ‘Mean percentage of target behaviors as a function of task and condition. In tasks with multiple trials, the mean percentage of trials with target behavior per total number of trials was computed for each individual. Independent-sample t tests (df = 22) revealed significant differences between conditions for the tasks Paperball (t = 4.30, P < 0.001), Marker (t = 2.70, P < 0.05), Clothespin (t = 4.38, P < 0.001), Books (t = 2.33, P < 0.05), and Cabinet (t = 3.08, P < 0.01). For the Flap task with only one trial per individual, we computed Fisher’s exact test (N = 24, P < 0.05). In these six tasks, children performed the target behavior significantly more often in the experimental than in the control condition. No difference between conditions was found for the tasks Clips (t = 1.04, P = 0.31), Cap, Chair, and Tool, Fisher’s exact tests (N = 24), P = 1.0, 0.48, and 0.22, respectively. Error bars represent SE; *P < 0.05.’

Moral abilities---including a moral sense---evolved to aid group living, specifically to motivate and sustain cooperative action.

Three requirements

- prosociality (helpfulness towards others)

- discrimination between pro- and anti-social acts

- retribution

adapted from Hamlin, 2013

These studies also imply discrimination; importantly, discrimination can be tested independently of retribution, as we will see.
key sources for the rest of this unit: \citep{hamlin:2011_how,hamlin:2014_contextdependent}.

Hamlin et al, 2011 supplementary materials

prosocial helping elephant

Hamlin et al, 2011 supplementary materials

antisocial harming elephant elephant

Earlier research: Infants prefer to reach for good elephant.

Moral abilities---including a moral sense---evolved to aid group living, specifically to motivate and sustain cooperative action.

Three requirements

- prosociality (helpfulness towards others)

- discrimination between pro- and anti-social acts

- retribution

adapted from Hamlin, 2013

We will look at studies on this third requirement (because they are the most revealing).

Earlier research: Infants prefer to reach for good elephant.

Now: infants see the good [/bad] elephant being treated pro- and anti-socially ...

Hamlin et al, 2011 supplementary materials

Hamlin et al, 2011 supplementary materials

prosocial helping moose

Hamlin et al, 2011 supplementary materials

antisocial harming elephant moose

How do infants feel about the two moose?

Hamlin et al, 2011 figure 1 (part)

‘Our 5-mo-old subjects preferred an individual who acted positively toward another regardless of the target’s previous behavior, suggesting that they apprehended the local valence of the action witnessed but did not compute its global valence in the broader context.’
‘our 8-mo-old infants assessed the global value of an action–their patterns of choice suggest that, in particular, they viewed a locally negative action as bad when directed toward a prosocial individual, but good when directed toward an antisocial individual’

Toddlers: giving a treat

‘19- to 23- mo-olds in experiment 4 first played a warm-up game in which they were trained to give “treats” (small foam blocks) to several stuffed animals by placing a treat into each animal’s bowl.’
‘Participants were then randomly assigned to a Giving a Treat condition or a Taking a Treat condition’
‘Subjects in the Giving-a-Treat condition were told that there was “only one treat left” and that they needed to choose which of the two puppets to give it to; they were then given the treat to distribute to the recipient of their choice. Subjects in the Taking-a-Treat condition were shown a new animal “who didn’t get a treat” and asked to take a treat away from either the Prosocial or Antisocial puppet (their choice) so that this animal could have one.’

Hamlin et al, 2011 figure 2 (part)

‘Our toddlers were willing to approach (rather than avoid) individuals who had behaved antisocially, overcoming their aversion to antisocial others (4–6, 42) to direct a negative behavior toward them.’

‘infants are making relatively complex and sophisticated social judgments in the first year of life. They not only evaluate others based on the local valence of their behavior, they are also sensitive to the global context in which these behaviors occur. During the second year, young toddlers direct their own valenced acts toward appropriate targets.’

\citep[p.~19933]{hamlin:2011_how}

Hamlin et al, 2011 p. 19933

Are there innate preverbal drivers of morality?

The issue was whether we can interpret the question in a way that makes it both interesting and one that existing evidence bears on.

‘developmental research supports the claim that at least some aspects of human morality are innate. From extremely early in life, human infants show morally relevant motivations and evaluations—ones that are mentalistic, are nuanced, and do not appear to stem from socialization or morally specific experience’

\citep[p.~191]{hamlin:2013_moral}.

Hamlin, 2013 p. 191

innate = not learnt
The quote provides a clear interpretation of the question. Do you think the evidence supports the claim?
\subsection{Poverty of stimulus arguments}
The best argument for innateness is the poverty of stimulus argument.
We need to step back and understand how poverty of stimulus arguments work.
Here I'm following \citet{pullum:2002_empirical}, but I'm simplifying their presentation.
How do poverty of stimulus arguments work? See \citet{pullum:2002_empirical}.
First think of them in schematic terms ...

Poverty of stimulus argument

\begin{enumerate}
\item Human infants acquire X.
\item To acquire X by data-driven learning you'd need this Crucial Evidence.
\item But infants lack this Crucial Evidence for X.
\item So human infants do not acquire X by data-driven learning.
\item But all acquisition is either data-driven or innately-primed learning.
\item So human infants acquire X by innately-primed learning.
\end{enumerate}

compare Pullum & Scholz 2002, p. 18

This is a good structure; you can use it in all sorts of cases, including the one about chicks' object permanence.
Now fill in the details ...
In our case, X is knowledge of the syntactic structure of noun phrases. (Caution: this is a simplification; see \citet[p.~158]{lidz:2004_reaffirming}.)
This is what the Lidz et al experiment showed.
Note that no one takes this to be evidence for innateness by itself.
What is the crucial evidence infants would need to learn the syntactic structure of noun phrases?
This is actually really hard to determine, and an ongoing source of debate, I think.
But roughly speaking it's utterances where the structure matters for the meaning, utterances like 'You play with this red ball and I'll play with that one'.
\citet{lidz:2003_what} establish this by analysing a large corpus (collection) of conversation involving infants.
What can we infer about innateness from this argument?
First, think about what is innate. The fact that knowledge of X is acquired other than by data-driven learning doesn't mean that X itself is innate; it just means that something which enables you to acquire X is.
Second, think about the function assigned to innateness. That which is innate is supposed to stand in for having the crucial evidence.
This, I think, is the key to thinking about what we *ought* to mean by innateness.
So attributes like being genetically specified are extraneous---they may be typical features of innate things, but they aren't central to the notion.
By contrast, the fact that what is innate is not learned must be constitutive of the notion (otherwise that which is innate couldn't stand in for having the crucial evidence)
Contrary to what many philosophers (including Stich and Fodor) will tell you ...

‘the APS [argument from the poverty of stimulus] still awaits even a single good supporting example’

Pullum & Scholz 2002, p. 47

\citep[p.\ 47]{pullum:2002_empirical}
But they wrote this before \citet{lidz:2003_what} came out.

Are there innate preverbal drivers of morality?

‘developmental research supports the claim that at least some aspects of human morality are innate. From extremely early in life, human infants show morally relevant motivations and evaluations—ones that are mentalistic, are nuanced, and do not appear to stem from socialization or morally specific experience’

\citep[p.~191]{hamlin:2013_moral}.

Hamlin, 2013 p. 191

 

Comparisons between Theories

 
\section{Comparisons between Theories}
 

[nativism] ‘There is a first draft of the moral mind’

[cultural learning] ‘The first draft of the moral mind gets edited during development within a culture’

[intuitionism] ‘Intuitions come first’ --- the Social Intuitionist Model is true

[pluralism] ‘There are many psychological foundations of morality’

\citep{graham:2019_moral}

Graham et al, 2019

Haidt & Bjorklund, 2008 figure 4.1

[nativism] ‘There is a first draft of the moral mind’

[cultural learning] ‘The first draft of the moral mind gets edited during development within a culture’

[intuitionism] ‘Intuitions come first’ --- the Social Intuitionist Model is true

[pluralism] ‘There are many psychological foundations of morality’

\citep{graham:2019_moral}

Graham et al, 2019


Affect Heuristic

Q: What do adult humans compute that enables their moral intuitions to track moral attributes (such as wrongness)?

Hypothesis:

They rely on the ‘affect heuristic’: ‘if thinking about an act [...] makes you feel bad [...], then judge that it is morally wrong’.

Cushman et al, 2010

Are these consistent theories?

[nativism] ‘There is a first draft of the moral mind’

[cultural learning] ‘The first draft of the moral mind gets edited during development within a culture’

[intuitionism] ‘Intuitions come first’ --- the Social Intuitionist Model is true

[pluralism] ‘There are many psychological foundations of morality’

\citep{graham:2019_moral}

Graham et al, 2019

Linguistic Analogy

‘the mind contains a moral grammar: a complex and possibly domain-specific set of rules [...] this system enables individuals to determine the deontic status of an infinite variety of acts and omissions’

Mikhail, 2007 p. 144

Are these consistent theories?
First incompatibility: intuitions are not supposed to be affective according to the Linguistic Analogy (but they are according to MFT).
Second incompatibility: the Linguistic Analogy is monistic (MFT is pluralist).

[nativism] ‘There is a first draft of the moral mind’

[cultural learning] ‘The first draft of the moral mind gets edited during development within a culture’

[intuitionism] ‘Intuitions come first’ --- the Social Intuitionist Model is true

[pluralism] ‘There are many psychological foundations of morality’

\citep{graham:2019_moral}

Graham et al, 2019

Dual Process Theory

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

‘[...] moral judgment is the product of both intuitive and rational psychological processes, and it is the product of [...] ‘affective’ and ‘cognitive’ mechanisms’

\citep[p.~48]{cushman:2010_multi}.

Cushman et al, 2010 p. 48

Are these consistent theories?
You can see the theories making friends here: moral foundations might give us an account of the fast processes (one that doesn’t fit entirely with Greene’s ideas about consequentialism vs deontology, perhaps).

Haidt & Bjorklund, 2008 figure 4.1

Key issue: are unreflective judgements essentially the result of the foundations? Perhaps dual process theory explains why the MFQ goes wrong?

What does the Moral Foundations Questionnaire measure?

Social Intuitionist Model

Unreflective ethical judgements are primarily consequences of moral foundations.

Dual-Process Theory

Unreflective ethical judgements are consequences of both moral foundations and processes which involve reasoning from known principles.

 

Conclusion: Why Investigate Moral Psychology?

 
\section{Conclusion: Why Investigate Moral Psychology?}
 

Why moral psychology?

How closely did what led you here match with what you got? Are there things you would like to see if the course ever runs again? Discuss with the person next to you. Put the most interesting answers on a post-it and stick them on the board during the break.
Reason 1: it enables us to better understand human sociality

1

human sociality

You can reach the same conclusion without buying into the ‘intuitive ethics’ idea. For example, Hamlin writes, on the basis of different arguments (to be considered later), that:

‘humans (both individually and as a species) develop morality because it is required for cooperative systems to flourish’

\citep[p.~108]{hamlin:2015_infantile}.

Hamlin 2015, p. 108

Modern humans

have recently (~10 000 years ago) begun to

live in societies roughly as complex as those of social insects

but cooperate with non-kin.

(~10 000 years ago, relative to the 100 000 years since they first appeared)

How is this possible?

‘Humans are [...] adapted [...] to live in morally structured communities’ thanks in part to ‘the capacity to operate systems of moralistic punishment’ and susceptibility ‘to moral suasion’

\citep[p.~257]{richerson:1999_complex}.

Richerson and Boyd, 1999 p. 257

We can see how this might be true, in outline, by reflecting on something called ‘intuitive ethics’

‘intuitive ethics’ (Haidt & Joseph, 2004; Haidt & Graham, 2007)

harm/care

fairness (including reciprocity)

in-group loyalty

respect for authority

purity, sanctity

Haidt and his collaborators claim: 1. that humans are disposed to respond rapidly to events evaluated along these five lines (‘We propose that human beings come equipped with an intuitive ethics, an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings’ \citep[p.~56]{haidt:2004_intuitive});
2. that the first four of these dispositions are all found in nonhuman animals;
3. that these dispositions have an evolutionary history;
4. and that these dispositions provide starting points for the cultural evolution of morality.
For our purposes, let’s suppose they are roughly right.

Graham et al, 2013 table 2.1

Note the claim that moral foundations arose in evolutionary history as solutions to specific challenges faced by humans’ ancestors.

Graham et al, 2013 table 2.1

[connecting evolution to MFT]: ‘pathogens are among the principle existential threats to organisms, so those who could best avoid pathogens would have enhanced evolutionary fitness. Van Vugt and Park contend that human groups develop unique practices for reducing pathogen exposure---particularly in how they prepare their foods and maintain their hygiene. When groups are exposed to the practices of a foreign culture, they may perceive its members as especially likely to carry pathogens that may contaminate one’s ingroup’ \citep[p.~93]{graham:2013_chapter}

van Leeuwen et al, 2012 figure 1

\citep[figure 1]{vanleeuwen:2012_regional}
Historical pathogen prevalence
‘binding foundations (mean of Ingroup/loyalty, Authority/respect, and Purity/sanctity). The data for contemporary pathogen prevalence showed a similar pattern.’
‘When controlling for GDP per capita, the pattern of correlations between historical pathogen prevalence and endorsement of moral foundations remained largely unchanged; however, contemporary pathogen prevalence was not significantly correlated with any of the moral foundations’ \citep{vanleeuwen:2012_regional}.

Simon Myer’s lecture on partner-choice vs partner-control

-- Are different strategies linked to cultural variation in ethical principles?

That was human sociality: the idea was that investigating moral psychology is worthwhile because it enables us to better understand human sociality.

1

human sociality

2

political conflict,

e.g. over climate change

Reason 2: it enables us to better understand one aspect of political conflict, and will perhaps even eventually suggest ways of overcoming some political conflicts.
Relatedly, moral psychology matters for understanding why political change is sometimes difficult, especially in democratic societies.
I can’t provide much support for this claim now, and, being philosophers, one of our questions will be whether it is true at all. But I think there is a reasonable case to be made for it.
The idea that moral psychology can help us to understand, and perhaps even to overcome, political divides comes out sharply in research on attitudes to climate change ...
This idea has been advanced by Markowitz & Shariff ...

Why are liberals generally more concerned about climate change than conservatives?

‘The moral framing of climate change has typically focused on only the first two values: harm to present and future generations and the unfairness of the distribution of burdens caused by climate change. As a result, the justification for action on climate change holds less moral priority for conservatives than liberals’

\citep[p.~244]{markowitz:2012_climate}

Markowitz & Shariff, 2012 p. 244

Similarly, you can understand a bit about why nationalism tends to be associated with conservatives rather than liberals (although it varies from place to place).

Graham et al, 2009 figure 1

Also works with a web sample collected in USA \citep[figure~1]{graham:2009_liberals}

Graham et al, 2009 figure 3

Graham et al, 2009 figure 2

‘We tested whether the effects of political identity persisted after partialing out variation in moral relevance ratings for other demographic variables. We created a model representing the five foundations as latent factors measured by three manifest variables each, simultaneously predicted by political identity and four covariates: age, gender, education level, and income. [...] Including the covariates, political identity still predicted all five foundations in the predicted direction [...]. Political identity was the key explanatory variable: It was the only consistent significant predictor [...] for all five foundations’ \citep[p.~1032]{graham:2009_liberals}

Graham et al, 2009 figure 1

The Joan-Lars-Joseph objection

The evidence on cultural variation says socially conservative participants tend to regard all five foundations as roughly equally morally relevant.

This does not generate the prediction that socially conservative participants will be more likely to view climate issues as ethical issues when they are linked to one foundation (e.g. purity) than when they are linked to another foundation (e.g. harm).

Does Moral Foundations Theory provide a measurement model that is invariant across groups?

Davies et al, 2014 : metric invariance for gender groups

(scalar invariance not tested)

Davis et al, 2016 : metric but not scalar invariance for Black vs White people

\citep{davis:2016_moral} found metric but not scalar invariance

Dogruyol et al, 2019 : metric non-invariance for WEIRD/non-WEIRD samples

‘the five-factor model of MFQ revealed a good fit to the data on both WEIRD and non-WEIRD samples. Besides, the five-factor model yielded a better fit to the data as compared to the two-factor model of MFQ. Measurement invariance test across samples validated factor structure for the five-factor model, yet a comparison of samples provided metric non-invariance implying that item loadings are different across groups [...] although the same statements tap into the same moral foundations in each case, the strength of the link between the statements and the foundations were different in WEIRD and non-WEIRD cultures’ \citep{dogruyol:2019_fivefactor}.

‘across subscales, there were problems with scalar invariance, which suggests that researchers may need to carefully consider whether this scale is working similarly across groups before conducting mean comparisons’

\citep[p.~e27]{davis:2016_moral}

Davis et al, 2016 p. e27
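To make the metric/scalar distinction concrete, here is a minimal sketch in standard confirmatory factor analysis notation (my gloss on the general idea, not notation taken from Davis et al or the MFQ literature):
\[
x_{ijg} = \tau_{jg} + \lambda_{jg}\,\xi_{ig} + \epsilon_{ijg}
\]
where $x_{ijg}$ is person $i$'s response to item $j$ in group $g$, $\xi_{ig}$ is that person's standing on the latent foundation, $\lambda_{jg}$ is the item's loading and $\tau_{jg}$ its intercept. Metric invariance requires the loadings to be equal across groups ($\lambda_{jg} = \lambda_j$); scalar invariance additionally requires equal intercepts ($\tau_{jg} = \tau_j$). Only under scalar invariance do group differences in observed means reflect differences in the latent foundations rather than differences in how the items behave across groups, which is why failures of scalar invariance raise the worry that group differences in means are measurement artefacts.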

Graham et al, 2009 figure 1

Are the differences in means measurement artefacts?

On balance, this seems likely.

There is a risk of building a theory on measurement artefacts.

‘entire literatures can develop on the basis of faulty measurement assumptions.’

\citep[p.~128]{davis:2017_purity}

Davis et al, 2017 p. 128

Stop.

On balance, MFT seems to be supported by a growing body of evidence.

Although limited, MFT is probably useful and there is no better alternative.

Ok, that was the second reason for studying moral psychology: it may help us to understand an aspect of political conflict.
Yes, there are limits to the specific conclusions we can draw. Moral psychology can provide at most quite limited guidance on how to approach political issues, I would suggest.
The important discovery is that there are methods which will likely eventually be relevant to making political progress on climate change.

2

political conflict,

e.g. over climate change

A third reason brings us closer to home. Not a few researchers in moral psychology have argued that their discoveries about the psychological underpinnings of moral abilities have consequences for ethics and metaethics.
[Reason 3: according to many researchers, discoveries in moral psychology undermine various claims that have been made by philosophers in ethics; they may also challenge some philosophical methods. (This is going to be controversial.)]

3

ethics?

You can see I put a question mark here; I am not convinced they are right. But anyone who studies ethics should at least understand the challenges posed by researchers in moral psychology. And they may well turn out to be right.
[Add something about the question mark missing from the module title: the science of good and evil?]
Some claims made by moral psychologists.

Humans lack direct insight into moral properties

\citep{sinnott:2010_moral}

(Sinnott-Armstrong et al, 2010 p. 268).

Intuitions cannot be used to counterexample theories

\citep{sinnott:2010_moral}

(Sinnott-Armstrong et al, 2010 p. 269).

Intuitions are unreliable in unfamiliar* situations

\citep[p.~715]{greene:2014_pointandshoot}

(Greene, 2014 p. 715).

‘Let us define unfamiliar* problems as ones with which we have inadequate evolutionary, cultural, or personal experience.’ \citep[p.~714]{greene:2014_pointandshoot}

Philosophers, including Kant, do not use reason to figure out what is right or wrong, but ‘primarily to justify and organize their preexising intuitive conclusions’

\citep[p.~718]{greene:2014_pointandshoot}

(Greene, 2014 p. 718).

‘the sprouts are incipient tendencies to act, feel, desire, perceive, and think in virtuous ways. Each sprout corresponds to one of Mencius' four cardinal virtues:

ren (benevolence),

yi (righteousness),

li (propriety), and

zhi (wisdom).

‘Even in the uncultivated person, these sprouts are active. They manifest themselves, from time to time, in virtuous reactions to certain situations’

\citep[pp.~46--7]{norden:2002_emotion}

‘characteristic of each sprout is a particular set of emotions or attitudes’

\citep[p.~74]{norden:2002_emotion}

Norden 2002 pp. 46--7, p. 74 on Mencius

‘someone suddenly saw a child about to fall into a well: everyone in such a situation would have a feeling of alarm and compassion---not because one sought to get in good with the child's parents, not because one wanted fame among their neighbors and friends, and not because one would dislike the sound of the child's cries’

Mencius, Mengzi 2A6

progress since 4th century BCE

One standard in ethics: Rawls’ reflective equilibrium idea
‘one may think of moral theory at first [...] as the attempt to describe our moral capacity [...] what is required is a formulation of a set of principles which, when conjoined to our beliefs and knowledge of the circumstances, would lead us to make these judgments with their supporting reasons were we to apply these principles conscientiously and intelligently’ \citep[p.~41]{rawls:1999_theory}; see \citet{singer:1974_sidgwick} for critical discussion.

‘one may think of physical [original: ‘moral’] theory at first [...]
as the attempt to describe our perceptual [original: ‘moral’] capacity

Interesting: seems like Rawls’ project requires the methods of psychology (and is moral psychology)

[...]

what is required is

a formulation of a set of principles which,

when conjoined to our beliefs and knowledge of the circumstances,

would lead us to make these judgments with their supporting reasons

were we to apply these principles’

Rawls, 1999 p. 41

So that was the third reason ...

3

ethics?

Time for a summary ...

Why investigate moral psychology?

 

human sociality

political conflict

ethics

...

conclusion

In conclusion, ...

limits

limits - there is more that we do not understand about the process than we do understand

models

models - we need better ways to model ethical abilities

ethics

ethics - even without being able to draw firm conclusions in favour of, say, consequentialism, we can see in outline that there is an alternative way of approaching ethics

cultural variation

cultural variation - the evidence for this is limited, but we have seen that it is possible to study it rigorously and, importantly, to treat claims about which foundations there are as testable hypotheses rather than a priori assumptions (Norden 2002 on Mencius: the 4th century BCE philosopher Mencius has four sprouts; even now we are still following Mencius’ methods)

politics

politics - and of approaching political debates too

 

Dual Process Theories: the Process Dissociation Approach

 
\section{Dual Process Theories: the Process Dissociation Approach}
 

Greene’s dual process theory

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

recall the definition

The outputs of one process are more consequentialist than those of another.

Conway & Gawronski 2013, figure 1

Note that if we just provide ‘incongruent’ dilemmas, we cannot distinguish all the different possibilities.
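As a rough sketch of the arithmetic behind process dissociation (my reconstruction of the standard logic, not Conway & Gawronski's exact notation): in congruent dilemmas the harmful action does not bring about the greater good, so both inclinations favour judging it unacceptable; in incongruent dilemmas it does, so the two inclinations pull apart. Writing $U$ for the probability that a utilitarian inclination drives the response, and $D$ for the probability that a deontological inclination drives it when the utilitarian one does not:
\begin{align*}
p(\text{harm unacceptable} \mid \text{congruent}) &= U + (1 - U)\,D\\
p(\text{harm unacceptable} \mid \text{incongruent}) &= (1 - U)\,D
\end{align*}
Subtracting the second equation from the first estimates $U$, and dividing the second by $(1 - U)$ estimates $D$. This is why congruent dilemmas are needed alongside incongruent ones: with incongruent dilemmas alone the two parameters cannot be separated.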

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

The outputs of one process are more consequentialist than those of another.

Prediction 1: higher cognitive load will reduce the dominance of the more consequentialist process.

Conway & Gawronski 2013, figure 3

Dual Process Theory of Ethical Abilities (core part)

Two (or more) ethical processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

One process is faster than another.

The outputs of one process are more consequentialist than those of another.

Prediction 1: higher cognitive load will reduce the dominance of the more consequentialist process.

Additional assumption: The faster process is an affective process.

Prediction 2: higher empathy will increase the dominance of the less consequentialist process.

A further additional assumption is needed here (it is missing)!

Conway & Gawronski 2013, figure 3

important consequence: if manipulating emotion can selectively influence one of two ethical processes, doesn’t this count as indirect evidence against the causal models on which emotion does not ‘influence’ judgement?
[The idea that manipulating emotion has a selective effect on one process supports the claim that emotion is not affecting (A) scenario analysis, (B) interpretation of question or (C) strength of pre-made judgement. After all, no such hypothesis predicts the selective effect.]
[Also: \citep{gawronski:2018_effects}: ‘(a) sensitivity to consequences, (b) sensitivity to moral norms, or (c) general preference for inaction versus action regardless of consequences and moral norms (or some combination of the three). Our results suggest that incidental happiness influences moral dilemma judgments by reducing sensitivity to moral norms’ (p. 1003).]
Two levels: (1) could do this in principle; (2) let’s see what disgust does to the different factors
 

The CNI Model: Beyond Trolley/Transplant

 
\section{The CNI Model: Beyond Trolley/Transplant}
 

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We will consider this claim

old: switch vs footbridge

new : CNI contrast (separately manipulate outcomes and norms (proscription/prescription))

\citep{gawronski:2017_consequences}

Not consequentialist = deontological?

‘a given judgment cannot be categorized as utilitarian without confirming its property of being sensitive to consequences, which requires a comparison of judgments across dilemmas with different consequences. Similarly, a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms, which requires a comparison of judgments across dilemmas with different moral norms’

\citep[p.~365]{gawronski:2017_consequences}.

Gawronski et al, 2017 p. 365

Gawronski et al, 2017 figure 1
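As I understand the model depicted in figure 1 (a sketch of the general idea, not Gawronski et al's exact formalism), the CNI model is a processing tree with three parameters: $C$ (sensitivity to consequences), $N$ (sensitivity to moral norms) and $I$ (a general preference for inaction). For any given dilemma:
\begin{align*}
p(\text{response follows the consequences}) &= C\\
p(\text{response follows the moral norm}) &= (1 - C)\,N\\
p(\text{inaction regardless of both}) &= (1 - C)(1 - N)\,I\\
p(\text{action regardless of both}) &= (1 - C)(1 - N)(1 - I)
\end{align*}
Because the four dilemma types (proscriptive vs prescriptive norms, crossed with benefits of acting that are greater or smaller than the costs) make consequences and norms favour action or inaction in different combinations, the observed rates of action across all four types jointly identify $C$, $N$ and $I$.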

Gawronski et al, 2017 figure 4

‘The only significant effect in these studies was a significant increase in participants’ general preference for inaction as a result of cognitive load. Cognitive load did not affect participants’ sensitivity to morally relevant consequences’

\citep[p.~363]{gawronski:2017_consequences}.

‘cognitive load influences moral dilemma judgments by enhancing the omission bias, not by reducing sensitivity to consequences in a utilitarian sense’

\citep[p.~363]{gawronski:2017_consequences}.

‘Instead of reducing participants’ sensitivity to consequences in a utilitarian sense, cognitive load increased participants’ general preference for inaction. ’

\citep[p.~365]{gawronski:2017_consequences}.

Gawronski et al, 2017 p. 363

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We have been considering this claim; findings from the CNI Model speak against it.

faster = less consequentialist?

Suter & Hertwig, 2011 : yes

Bago & de Neys, 2019 : no

Gawronski et al, 2017 : no

Can we resolve the apparent contradiction by appeal to a preference for inaction under time-pressure?

I don’t see how. Both studies treated ‘nonconsequentialist’ as equivalent to ‘deontological’. So any preference for inaction under time-pressure should have had the same effect in both studies!
These studies’ results appear to conflict (time-pressure does/doesn’t make people less consequentialist).
These studies’ results appear to conflict (time-pressure has barely any effect / does make people less consequentialist [because of a preference for inaction]).

1. There is a puzzle about apparently inconsistent patterns in judgement (switch-drop).

2. We can solve the puzzle by invoking a dual-process theory ...

2.a ... where one process is faster; and

2.b the faster process is affective and

2.c less consequentialist.

3. The faster process is unlikely to be reliable in unfamiliar* situations.

4. Therefore, we should rely less on the faster (and less consequentialist) process in unfamiliar* situations.

We have been considering this claim; findings from the CNI Model speak against it.