
Daniel Kahneman, Olivier Sibony, Cass R. Sunstein

Noise: A Flaw in Human Judgment

Nonfiction | Book | Adult | Published in 2021


Part 3: Chapter Summaries & Analyses

Part 3: “Noise in Predictive Judgments”

Part 3, Foreword Summary

Part 3 evaluates “the quality of predictions,” assessing how accurately predictions “co-vary” with outcomes (130). For example, in hiring, employers can compare the potential that HR departments observe in candidates with those employees’ performance a few years later. If the predictions are accurate, estimates of high potential will correspond to strong later achievement. This section also studies the impact of noise on predicting events and on evaluating them once they occur.

Part 3, Chapter 9 Summary: “Judgments and Models”

In employment, performance prediction is an essential aspect of hiring. Candidates are often given a numerical rating on attributes such as leadership or interpretative skills, and theoretically the highest-scoring candidate ought to be the best for the job. However, a doctoral study of performance prediction based on numerical ratings showed that predictions of good performance agreed with actual good performance only 55 percent of the time and were therefore “barely better than chance” (134).

In 1954, psychology professor Paul Meehl reviewed 20 studies in which clinical judgment, the standard form of human judgment, was compared with mechanical predictions of outcomes such as academic success and psychiatric prognoses. He concluded that mechanical rules were more reliable than human judgment because humans overvalue certain qualities, whereas mechanical prediction gives equal weight to the relevant ones. Mechanical prediction matches good potential with good performance 60 percent of the time, five percentage points better than clinical judgment. According to the authors, Meehl’s findings show that satisfaction in the quality of human judgment is “an illusion: the illusion of validity” (137). The illusion of validity occurs because of a failure to distinguish between evaluating cases based on available evidence and predicting likely outcomes. Thus, it can feel safe to assert that one candidate is more promising than another but difficult to be certain that this same candidate will be better on the job. Given the uncertainty of the future, mechanical models, which are unaffected by noise, are better at predicting future success than human judgment in areas as wide-ranging as military service and marital satisfaction.

Noise is a key reason why human judgment is inferior to mechanical models. Although most people prize the application of complex rules and insights to particular judgments, studies show that “the subtlety is largely wasted,” as “complexity and richness do not generally lead to more accurate predictions” (141). Statistical models of human judgments do not add new information; rather, they subtract and simplify, ignoring people’s idiosyncratic beliefs about what matters. While individuals’ subtle distinctions are valid in select cases, such preferences also feed the illusion of validity and often cloud human judgment. Thus, removing such noise improves the accuracy of predictions.

Part 3, Chapter 10 Summary: “Noiseless Rules”

Artificial intelligence and mechanical algorithms can outperform human judgment because they are not impeded by noise. Models such as psychologist Robyn Dawes’s 1974 example, which gave equal weight to all predictors of success, minimized errors of judgment. This was partly because the model ignored flukes, such as a single instance of outstanding performance, that tend to dazzle and bias human judgment. The model’s success depends on knowing which variables matter for achieving the desired outcome.

Frugal models, which use a minimal number of predictors to reach an outcome, produce “impressively accurate predictions in some settings, compared with models that use many more predictors” (151).

In contrast to frugal modeling, machine learning uses many predictors and gathers more data about all of them, making connections that no human brain could make. Economist Sendhil Mullainathan applied such a sophisticated AI model to produce a quantified score of a defendant’s risk of reoffending. Crime could be reduced by up to 24 percent because the people who ended up in jail would be those most likely to reoffend. Studies have also shown that the algorithm is less racially biased than human judges, as it incarcerates 41 percent fewer people of color.

Although algorithms have been shown to be superior to human judgment, they are still underused because of the overestimation of intuition and the mistaken view of “algorithmic decision making as dehumanizing and as an abdication of their responsibility” (158). However, the authors have found that most people are not inherently opposed to algorithms; rather, they stop trusting algorithms once they see that algorithms can make mistakes. Thus, humans prefer to rely on their own judgment even when it is demonstrably more error-prone than algorithmic judgment.

Part 3, Chapter 11 Summary: “Objective Ignorance”

Although the evidence the authors draw on for the superiority of mechanical judgment has existed for half a century, many of the corporate executives they interviewed preferred to rely on intuitive judgments. Intuitive judgments possess “an aura or conviction of rightness […] essentially ‘knowing’ but without knowing why” (162). This sense of “knowing” is the feeling of judgment completion cited in Chapter 4.

However, intuitive judgments often underestimate objective ignorance: the part of a future situation that simply cannot be predicted. This leads to overconfidence in under-researched predictions. Moreover, objective assessments of intuitive judgments’ predictive power mostly do not support the level of confidence placed in them. The authors observe that people are more likely to rely on intuition when facts are scarce. Additionally, because mechanical methods offer only modest improvements, a few percentage points, over human judgment, many executives ask why they should bother using them. The authors reply that any increase in validity has great value; however, it is difficult for executives to give up the emotional reward of intuition. This means that while algorithms may be superior, human judgment may never be abandoned.

Part 3, Chapter 12 Summary: “The Valley of the Normal”

The authors discuss the illusion that unpredictable events can still be understood and given a coherent explanation. The 2020 Fragile Families study, led by sociology professors Sara McLanahan and Matthew Salganik, examined how certain conditions, such as a child going hungry or a telephone service being cut off, led to future outcomes such as poor scholastic performance. While the study allowed for a greater understanding of the details of social deprivation, it was still unable to predict the impact of events on individual lives, because objective ignorance about the future limits the accuracy of predictions.

Causal thinking, which constructs narratives of how particular events, people, and objects affect each other, makes people underestimate their objective ignorance and gives them a false feeling of inevitability. This feeling of inevitability is a severe limitation, as it obscures how things could have turned out differently at each fork in the road. Many events appear normal in hindsight, and individuals retrospectively apply a narrative to them and even claim that they could have been anticipated.

While causal thinking, which is based on narratives, avoids the expenditure of excess effort, statistical thinking is a slower process that requires analyzing data from similar cases. Relying exclusively on causal thinking can lead to the repetition of predictable errors, and applying statistical thinking can be a means of eliminating them. Although causal thinking is flawed and produces overconfident visions of an uncertain future, it comes more naturally and will prevail when the alternative is to relinquish the quest to understand the world. The prevalence of causal thinking also means that noise goes unnoticed, as the world is reduced to tidy narratives of cause and effect.

Part 3 Analysis

This section explores noise in predictive judgments and assesses the various strategies that humans use to predict an uncertain future. The authors acknowledge that forecasting is essential in many industries to ensure that decisions made today will have good, or at least less harmful, outcomes in the future. However, human discomfort with uncertainty manifests in a variety of symptoms, including overconfident predictions of the future and an underestimation of one’s objective ignorance about situations that may turn out in a variety of ways. The authors argue that intuition, that golden child of the new age movement and the alleged staple of many business leaders, is really an “emotional experience (‘the evidence feels right’)” that “masquerades as rational confidence in the validity of one’s judgment (‘I know, even if I don’t know why’) and can lead to the false belief that one is in control of an uncontrollable future” (163). This type of thinking is careless, as the thinker treats their situation as unique and ignores statistics about similar events. Their limited view is then more likely to lead to error.

The authors show how hiring practices are fraught with mistakes because employers use examples of past performance to determine whether a candidate will be suitable for a position. Employers then create causal narratives in which past success is equated with the ability to create future success. According to the authors, causal narratives employ a facile logic that leaves no room for noise in judgment. Unseen noise cannot be measured and is therefore more likely to wreak havoc on people’s judgments. The fallibility of this judgment is shown in the authors’ demonstration that algorithms are better at choosing employees than the people who have expertise in the office environment. This is because, while humans find stories of flukes irresistible and can be dazzled when an average or mediocre candidate pulls off an outstanding feat, algorithms favor more consistent candidates. While the authors do not unequivocally advocate for replacing humans with machines, they show that different types of mechanical processing, whether frugal models or machine learning, can present data in new ways. Seeing the data anew enables individuals to establish distance from their original intuitive decision and find different ways of looking at the problem. Seeing the problem from multiple perspectives promotes a less noisy, more informed decision.
