Morality is “not only belief in norms of appropriate behavior but also the belief that they should be shared and transmitted culturally” (478).
Laws are, in essence, codified rules about morality that call upon logic, which would suggest that morality is primarily based on reason. Various cortical brain regions activate and interact in moral decision-making, stressing its conscious, cognitive aspect.
The problem with seeing morality as solely reason-based is that most people do not understand why they have made a specific moral judgment, yet they are wholly committed to it. Therefore, the “social intuitionist” school of thought suggests our morality is based more on social intuition than on rationality: “Moral thinking is for social doing” (481). In moral decision-making, not only cognitive but also emotional centers of the brain activate. Our moral appraisals are also often instantaneous, quicker than appraisals of non-moral aspects of the same event, which suggests the emotional automaticity of these appraisals. In this way, Sapolsky connects how we act and believe morally to what he has previously argued about how we fall into political perspectives, or why and how we conform: They are dual rational and intuitive processes.
As discussed earlier, human children show the rudiments of morality in the idea of fairness. Animals such as macaque monkeys, crows, and dogs exhibit frustration when given less reward than a peer for the same task. In lab settings, chimps regularly propose and accept unfair reward splits, showing they are not concerned with fairness as long as they get something. In more natural settings, studies show chimps regularly propose fair splits when there are consequences for being unfair. Other primates also choose fair splits as long as doing so costs them nothing. As he has done throughout the preceding chapters, Sapolsky grounds human moral behavior in animal correlates.
Damage to the emotional brain regions associated with moral decisions predicts cold, pragmatic reasoning in response to moral situations. For instance, such individuals will express ease in deciding to divert a trolley to kill one individual but save five. Studies have been conducted on two iterations of this “trolley problem”: 1) divert the trolley away from five people, killing one, or 2) push someone onto the track, killing them to stop the trolley and save five. In condition 1, the dlPFC activates more, indicating that reasoning predominates. In condition 2, the vmPFC and amygdala activate more, indicating that visceral emotion predominates.
People are twice as likely to pull the lever as to push the person, likely because of the increased intentionality of the second act. Morality is governed not only by intentionality but also by proximity: We are more likely to save a child drowning in a river, sacrificing the $500 suit we are wearing, than to send $500 to a foreign country to save a child. Different framing contexts, such as priming, produce different levels of morality, with the most important context being whether the cheater is ourselves: “Cheating is not limited by risk; it is limited by our ability to rationalize the cheating to ourselves” (493). In this discussion of the trolley problem and other immediate factors of morality like proximity, Sapolsky gives us an example of how our moral decision-making is contextual, not fixed. As such, he shows that even if we believe ourselves to be moral absolutists, much of our biology makes us moral utilitarians: Our acts are moral based on the contexts they serve.
Some morals, such as the unacceptability of some forms of murder, are universal across cultures. Others vary between cultures. One domain of variation is cooperation and competition. In a study using an economic game in which players from various cultures donated to a shared pot, people everywhere supported punishing players who donated too little or nothing, but players were also punished for donating too much. The reason for such “antisocial punishment” (496) is that unusually generous donations set a higher standard that costs each player more in subsequent turns of the game. The lower the social capital in a country, the higher the prevalence of antisocial punishment.
Other cultural factors also impact morality. The degree of market integration within a culture (how much of your caloric intake comes via trade versus your own efforts) positively predicts fairer donations. The larger the society, the more emphasis on punishment for unfair donations. Furthermore, belonging to a worldwide religion, such as Christianity or Islam, predicts more third-party punishment (paying to have someone who acted unfairly punished). This reflects the prevalence of these religions in larger-scale societies, where more third-party punishment is necessary for social cohesion.
Other domains of cultural variation in morality include honor and revenge, as in the cultures of honor covered in Chapter 9, and the collectivist/individualist divide, where morality emphasizes collective ends for collectivists and individual rights for individualists. Collectivists enforce morality with shame: external judgment by the group. Individualists enforce it with guilt: internal self-judgment. In these discussions of the cultural dynamics of morality, Sapolsky again shows how morally relative humans really are. He also demonstrates how important cultural conditions are in shaping how we behave and how we think about our behavior, recalling the overall argument of Chapter 9, which dealt with cultural impacts on behavior.
Morality and Conflict
Most intergroup conflicts, Sapolsky argues, concern whose moral systems are correct. In fact, the structure of morality predicts this, as morality frequently encourages us to be good to our fellow group members and hostile to out-groups. Our psychology evolved to this end, as the entirety of Chapter 11’s Us/Them distinctions shows. As this chapter also shows, our treatment of out-group members improves when we are encouraged to think logically about the situation. Acting morally toward everyone therefore means letting intuition rule in conflicts within our own group (since we are evolutionarily inclined to be good to its members) and letting reasoning rule in conflicts with out-groups (since we must consciously engage the higher cognitive processes of the PFC to shut down the automatic response of hostility toward them).
The moral relativity of behavior is nowhere clearer than in lying and deception, which can range from genuinely compassionate to neutral, or even to antagonistic. The human ability to lie is unrivaled. In lying, we tend to rationalize our actions to make them feel less dishonest. Lying requires theory of mind (ToM) as well as dlPFC activation, which is triggered both when we consider resisting the temptation to lie and in the calculations needed to strategically maintain the fiction we have created. This makes life very hard for people studying lying in an fMRI scanner.
In one such neuroimaging study on lying, subjects played a trust game in which they were given a choice between cooperation and selfishness in the amount donated to a pot but were asked to disclose to their partner which strategy they would use before acting. Breaking this promise activates the dlPFC, the anterior cingulate cortex (which responds to situations of conflicting choice), and the amygdala. Furthermore, activation of the ACC and the amygdala before each round predicted lies, indicating a likely precursor to rationalizing the immoral act: generating a disgust response to one’s partner in the game. In other games on cheating, cheaters showed dlPFC and ACC activation when deciding whether to cheat but only dlPFC activation once the decision to cheat was made. Crucially, individuals who never cheated showed no activation of the dlPFC and ACC at all; there was no need to adjudicate or consider the best response, because for them cheating is simply a non-option requiring no thought. In other words, to eliminate lying we need to train honesty to the point that it becomes automatic.

This argument recalls Sapolsky’s early discussion of the PFC in Chapter 2. It also recalls his discussion in Chapter 5 of neural plasticity expanding brain regions associated with routinized actions, as well as his discussion of the individuals who refused to participate in Asch’s and Zimbardo’s experiments. All of these examples fit together under the core concept of learning an act to the point of automaticity. When we first learn the piano, we must think about every note we strike: Our PFC labors hard. But with enough practice, the brain automates the activity, and we can play the song without thinking as soon as we are put in that context. Sapolsky argues that learning morality (learning not to lie) works exactly the same way: We must override our baser automatic natures with a new learned automaticity.