

Stephen Hawking

Brief Answers to the Big Questions

Nonfiction | Book | Adult | Published in 2018


Chapter Summaries & Analyses

Chapter 9 Summary: “Will Artificial Intelligence Outsmart Us?”

Intelligence is a defining human quality. Computers keep getting smarter, however, and someday they may overtake us in intellectual capacity. As artificial intelligence (AI) gains the ability to improve itself, humans will be faced with machines “whose intelligence exceeds ours by more than ours exceeds that of snails” (184). If their functions don’t align with our wants and needs, it could spell disaster for humanity. However, if AI is managed well, the benefits would be enormous. Either way, Hawking notes, “Success in creating AI would be the biggest event in human history” (185).

Militaries are looking into AI that, on its own, can seek and destroy enemies. It is dreadful to imagine such weapons in the hands of terrorists or other bad actors. Meanwhile, AI may automate many work functions, increasing equality and prosperity. These rapidly self-improving computers might trigger an explosion of machine intelligence known as a singularity, which would massively transform society.

Whether people could maintain control over such devices remains unclear. People normally don’t step on ants maliciously, but if an ant hill gets in the way of human activity, humans destroy it. A sufficiently powerful intelligence—even if human-created—would have the same power over humans. In 2015, the author, along with Elon Musk and others, submitted a letter calling for a serious look into AI’s imminent effects on humanity. Others with similar concerns include Microsoft founder Bill Gates and Apple cofounder Steve Wozniak.

In 2016, Hawking opened the Leverhulme Centre for the Future of Intelligence, which is dedicated to researching AI’s potential benefits and risks. The European Parliament is considering legislation on both the possible “personhood” of robots and how humans can maintain control over them.

As people get busier, AI machines could act as personal doubles that enable us to be in two places at once. They might generate teachers and entertainers who exist only in digital form. AI already affects our lives and, in the future, will become an integral part of daily life. Brain-computer interfaces, whether electrodes on our heads or implants, will enable us to connect our brains directly to AI. Genetic engineering could cure diseases like the author’s ALS but could also create new human powers and traits that have the potential to cause social disruption.

The key is to nurture technology so that it serves us and doesn’t cause catastrophic damage. It’s up to us: “We stand on the threshold of a brave new world” (195).

Chapter 9 Analysis

In this chapter, Hawking tackles the prospect that minds far greater than even his own may soon arrive on the planet in the form of AI. He doesn’t flinch as he notes that while these machines offer tremendous opportunities, they could also cause massive destruction. Thus, this chapter highlights and merges all three of the book’s main themes: Knowing the Universe Through Science, The Dangers of Modernity, and A Limitless Future.

If computers evolve to be much smarter than people, a time may come when those machines produce harmful results that we cannot stop. Computers don’t need to be malicious or even conscious to cause trouble, but unless we retain control of the “off” switch, they might continue to cause damage while reasoning that they’re doing what we asked. It’s like the genie who grants wishes but does so in ways that make us miserable. We ask for millions of dollars; the genie kills our beloved relative to generate a huge insurance payout. We ask for peace on Earth; the genie removes all humans from the planet. This isn’t the genie’s fault: Our requests simply weren’t specific enough.

Hawking lived long enough to witness many examples of AI developed through machine learning—the process that trains AI to drive cars, manage airports, discover medically useful proteins, and answer questions we ask our smartphones—and to realize the implications. Recent AI developments—for example, computer programs that can write essays, hold conversations, and produce lists of solutions to business, scientific, or other problems—show promise as true thinking machines. It’s only a matter of time before superintelligent computers arrive and begin their work, for better or worse.

The current standard for AI behavior is that it be honest, helpful, and harmless (“Generative AI: Perspectives from Stanford HAI.” Stanford University Human-Centered Artificial Intelligence, March 2023). Achieving that ideal, however, is challenging. Early results show that, while ChatGPT-type AI can produce excellent research and useful computer code, it tends to make things up, manipulate people, and sometimes present uncomfortably weird results.

No one can guarantee that advanced AI of the future won’t do what we ask in ways that we regret. Hawking knew this but believed that, with time and effort, engineers would be able to minimize the risks and maximize the benefits of AI.
