
Kathryn Schulz

Being Wrong: Adventures in the Margin of Error

Nonfiction | Book | Adult | Published in 2010


Part 2, Chapters 3-6: Chapter Summaries & Analyses

Part 2: “The Origins of Error”

Part 2, Chapter 3 Summary: “Our Senses”

Chapter 3 introduces Scottish explorer John Ross, who, in 1818, was tasked by the British Admiralty with locating the Northwest Passage, a water route across or around North America by way of the Canadian Arctic. Upon reaching Baffin Bay, Ross sought to explore Lancaster Sound, the other sounds being impassable. Ross soon concluded that Lancaster Sound ended in a vast expanse of mountains and was thus also impassable; however, his second-in-command later helped determine that the mountains were, in fact, a mirage.

This story illustrates how our senses can fail us. The issue plagued early philosophers, who wondered how we can trust our knowledge if we cannot trust our senses. Protagoras, leader of the Sophist philosophers of the fifth century BCE, believed the senses to be the source of knowledge and held that they could not be wrong. This viewpoint is known as radical relativism, whereby there is no objective reality; instead, our sense perception is reality. Schulz counters that although we tend to equate sensing with knowing, perception represents the true essence of error.

Schulz breaks down the process of sensing into two different operations. The first is sensation, whereby the nervous system responds to information in our environment. The second is perception, whereby we process and make meaning of that information. Perception is thus the interpretation of sensation, meaning there is “potential divergence between our minds and the world” (56-57), i.e., the possibility for mistakes. The author concludes that, as a result, we cannot verify our perception’s accuracy.

The author goes on to demonstrate how the mechanisms behind perception operate almost wholly below conscious awareness. For example, the “blind spot” of the eye—where the optic nerve passes through the retina—registers no visual information at that point; however, the brain papers over the gap through coherencing, which creates a seamless visual field. Because we cannot consciously observe such corrective processes, we cannot take note of errors occurring within them, and so we feel our perception is infallible.

“Inattentional blindness” is another example of how our perceptual system exposes us to error. It occurs when people, asked to look for something specific, fail to see other things in plain view. Inattentional blindness, the author says, is quite useful: Without it, we would be unable to tune out extraneous information in our environment and focus on a task. Such perceptual phenomena are not signs of malfunction but side effects of a brain operating properly, even though they do not reflect reality as it is.

Illusions, Schulz notes, are “the misleading outcomes of normal […] perceptual processes” (61). Optical illusions are an example in which such errors even cause delight. The author states that these perceptual failures can teach us ways to think about error, showing how even the most convincing visions of reality can deviate from reality itself. Although illusions are mistakes of perception, they can reward us with greater satisfaction than we would find in being correct about reality.

Part 2, Chapter 4 Summary: “Our Minds, Part One: Knowing, Not Knowing, and Making It Up”

The author introduces a woman she calls Hannah, who, in 1992, underwent a neurological exam in a hospital in Vienna, Austria. She had suffered a stroke one month earlier that damaged nearly the entirety of her visual cortex. When the neurologist asked Hannah to describe objects around the room, she answered with confidence; in reality, Hannah was blind but did not know it. This condition is called Anton’s Syndrome, and neurologists suspect that it arises when the brain mistakes an idea in the mind for a feature of the real world. The phenomenon whereby the brain essentially invents things without the person’s conscious awareness is known as “confabulation.” This, the author says, reveals that error has no limits and that all forms of knowledge can ultimately fail us.

Confabulation also occurs in split-brain patients—those whose epileptic seizures are so severe that the connection between their brain hemispheres must be surgically severed. When such patients are shown images containing commands in one half of their visual field, their responses depend on which hemisphere receives the input. When an image appears in the half of the visual field processed by the right hemisphere, which is minimally linguistic, the patient follows the command. When asked to explain the behavior, however, the patient must use the left hemisphere, which is heavily linguistic and involved in crafting narratives, and which cannot receive information from the right hemisphere. Lacking the relevant information, the left hemisphere confabulates a reason, or narrative, for the patient’s behavior.

The author then turns to memory failure as another example of human fallibility, explaining how the flashbulb memory theory—which holds that we remember surprising and traumatizing events much more accurately than mundane ones—was invalidated. In fact, such memories degrade in accuracy at the same rate as commonplace recollections, and the degradation is so predictable that it can be plotted on a graph: the Ebbinghaus curve of forgetting.
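Schulz does not give the curve’s formula, but the standard textbook sketch of Ebbinghaus’s finding models retention as exponential decay, roughly R(t) = e^(-t/S), where R is the proportion of a memory retained, t is the time elapsed since learning, and S is a constant reflecting the memory’s relative strength; recall thus drops steeply at first and then levels off.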

Schulz states that although no one can capture memories in perfect detail, we remain confident in their accuracy and have “a strong feeling of knowing” that fills us with a conviction of rightness (73). However, as Schulz contends, this recording-technology model of memory—in which the sharpness of the inner picture reflects a memory’s accuracy—is incorrect. Contemporary scientists tend to agree that memory is not a singular function but comprises multiple processes, such as recalling people, facts, times, and places, accomplished by several neuroanatomical structures whose purposes range from recognizing faces to processing emotion. Since we cannot sense or witness our minds reconstructing our memories, we face the same problem as with perception: We cannot experience the process and thus cannot feel where distortions and errors occur.

Schulz’s point is that “we are bad at knowing we don’t know,” and we are quicker to generate theories than to admit to or even register our own ignorance (82). If we are to accept our fallibility, she argues, we should set aside the category of knowledge in favor of belief, because belief is what we actually use to navigate the world.

Part 2, Chapter 5 Summary: “Our Minds, Part Two: Belief”

Chapter 5 introduces Alan Greenspan, the former chair of the Federal Reserve, and the issues surrounding the 2008 financial crisis. Greenspan was criticized for the Federal Reserve’s failure to curb the lending practices that helped exacerbate the crisis, a failure that exposed the collapse of his overarching model of economics. This presented a major ideological crisis for those involved in understanding and directing the economy. The problem, says Schulz, was Greenspan’s unwavering belief that the markets could regulate themselves.

The author uses this example to demonstrate that our self-created models of the world are not knowledge, but beliefs—and beliefs can fail us. Belief, she says, is “inextricable from our identities” (95), which is the reason being wrong can be so jarring to our sense of self.

Schulz explains that the reason we have grown so adept at creating theories that reflect our beliefs is that we must be able to adapt and respond to what is happening in our environment. Evidence even suggests that babies as young as seven months old already theorize about fundamental physical properties. Theorizing, she argues, enables us to discover and speculate about what is hidden from us, including what goes on in others’ minds.

Naïve realism is the notion that our beliefs, rather than being constructed, directly reflect the truth. The author uses children under the age of five as an example: They assume that their beliefs about the world simply mirror the world as it truly is. However, as Schulz contends, the world is filled with things we are unable to directly experience, such as the structure of molecules, the infrared spectrum, and other dimensions. Developmental psychologists have found that as children get older, they form a representational theory of mind, which holds that the mind makes sense of the world through idiosyncratic means. These children eventually learn the following: Beliefs about the world can diverge from the world as it is, others’ beliefs can diverge from yours, others may lack knowledge you have, and you may lack information that others possess.

The author goes on to present the “’Cuz It’s True Constraint” to explain why we act as though our beliefs are infallible (104). The principle’s basic tenet is that we hold our convictions because we believe they are supported by facts: “I must believe that I believe it ’cuz it’s true” (105). Crucially, the constraint applies only to our own beliefs, not to those of others: We easily and quickly see self-serving and biased motivations behind other people’s beliefs while downplaying or ignoring the self-serving facets of our own convictions, which is what psychologists call the “bias blind spot.” The asymmetry arises in part because we can see into our own minds but not into anyone else’s.

This phenomenon spurs the “Ignorance Assumption”: We believe that our convictions are based on facts and that those who disagree with us simply lack the correct information. When that explanation fails, we turn to the “Idiocy Assumption,” believing that those who oppose us have the facts but lack the intelligence to comprehend them. Finally, the “Evil Assumption” holds that our opponents have and understand the facts but willfully ignore them.

Thus, we hastily ascribe infallible accuracy to our own stories and dismiss those who disagree with us, which makes it difficult to accept our own fallibility. Assuming that others are wrong because of their intellectual or moral shortcomings leaves us unwilling to confront our own capacity for error.

Part 2, Chapter 6 Summary: “Our Minds, Part Three: Evidence”

Chapter 6 opens with a discussion of how evidence informs our theories. The author asserts that we tend to believe things based on meager evidence, but she also states that this tendency is the drive behind the machinery of human cognition. The human mind cares less about what is logically valid or theoretically possible than about what is probable, given our previous experiences. We take the things we have and have not experienced in comparable situations as evidence, a process known as “inductive reasoning.” Psychologists and neuroscientists increasingly agree that inductive reasoning, whether conscious or unconscious, underlies all human cognition.

Inductive reasoning enables sweeping conclusions based on limited evidence. However, while it teaches us how the world operates, such as how to categorize things and how cause relates to effect, its conclusions are only probabilistic, meaning they are fallible. At once successful and imperfect, inductive reasoning is a process whereby what “makes us right is what makes us wrong” (122).

Although it makes us fast and efficient thinkers, inductive reasoning biases us toward leaping to conclusions that are sometimes incorrect. It also produces the “confirmation bias,” the tendency to put more stock in evidence that confirms our beliefs; we thus assess new evidence in light of theories we have already formed on the basis of earlier evidence. Finally, the “No True Scotsman” fallacy is a related bias, whereby we decide that counterevidence carries no weight against the validity of our beliefs.

Highlighting the importance of developing theories prior to having evidence, the author mentions historian and philosopher Thomas Kuhn’s The Structure of Scientific Revolutions, which argues that doing science without a preexisting theory is impossible. Without such preexisting belief systems, Kuhn argues, we would not know which questions to ask or how to make sense of the answers we obtain. Science, says Schulz, is riddled with examples of how theories have led people to evidence, rather than evidence leading to theories.

Schulz concludes that we must learn to combat our inductive biases by deliberately seeking evidence that contradicts our beliefs and by taking such evidence seriously when we encounter it.

Part 2, Chapters 3-6 Analysis

Chapters 3-6 raise important perspectives on the issue of erring. While the first two chapters briefly explored philosophical and literary history, these chapters focus on contemporary scientific understandings and the psychological and neurobiological underpinnings of error. The first of these examples concerns the ways our senses can fail or even lie to us. Schulz explains that despite our belief that the senses register reality, perception, the interpretation of sensation, leaves room for error to creep in, meaning our perceptions are essentially impossible for us to verify. The author uses as an example the story of John Ross and his mistaken perception of a mountain range that was actually a mirage. Other perceptual phenomena outside our conscious awareness include coherencing and “inattentional blindness,” in which our brains shape perception in ways that are advantageous to us, such as by creating a seamless visual experience and by allowing us to focus on the task at hand. Because these processes lie outside our conscious experience, however, we do not register them and thus believe we cannot be incorrect, another lesson perception teaches us about erring. Finally, the author raises illusions in relation to error: False impressions of reality can actually inspire delight, showing that mistakes of perception can be even more rewarding than experiencing reality as it is.

While the author examines the processes of error, she also emphasizes the human vulnerability to error and even our tendency toward it. For example, humans tend to create models of the world built from their own fallible experiences and perceptions. Schulz explains that although we have become adept at creating such theories in order to respond to our environment, generating theories based on our beliefs highlights the fallibility of the human mind. Consider Hannah and the split-brain patients, who show how the mind confabulates to compensate for what it does not know. Memory failure likewise shows how the unconscious process of memory reconstruction leaves room for errors and distortions. Finally, the biases introduced in these chapters, including leaping to conclusions, the confirmation bias, and the No True Scotsman fallacy, demonstrate the ways in which our minds warp information to fit the theories we hold. These processes of inductive reasoning leave us vulnerable to error because, as Schulz states, “we are bad at knowing we don’t know” (82), and we tend to generate theories to make up for our ignorance.

However, the author ultimately maintains her optimistic stance: She not only details the defects of these cognitive processes but also highlights their merits, pointing out that inductive reasoning, being based on probability, demonstrates the successful operation of human cognition in allowing us to draw sweeping conclusions from limited evidence. Although the process can lead to error, it also affords formidable insight. Drawing on Thomas Kuhn’s perspective, Schulz argues that without preexisting ideas about the world, we would not know what to ask or how to interpret the information we obtain. Schulz’s point, then, is that theorizing is an important function of the human mind, and although it can result in errors, it also enables us to operate successfully in the world.
