Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (2014)

29 “…attempts to build artificial intelligence might fail pretty much completely until the last missing critical component is put in place, at which point a seed AI might become capable of sustained recursive self-improvement.”
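The takeoff dynamic behind that line can be sketched numerically. Bostrom models the rate of intelligence growth as optimization power divided by recalcitrance; the minimal Python sketch below uses that relation with purely illustrative numbers (the constant recalcitrance and the scaling of optimization power with intelligence are assumptions, not figures from the book):

```python
# A toy numerical sketch of Bostrom's takeoff relation, in which the
# rate of intelligence growth equals optimization power divided by
# recalcitrance (dI/dt = O(I) / R(I)). The functional forms below are
# illustrative assumptions, not values from the book.

def simulate_takeoff(steps=100, dt=1.0):
    intelligence = 1.0
    history = [intelligence]
    for _ in range(steps):
        # Once the system can improve itself, optimization power
        # scales with its own intelligence (recursive self-improvement).
        optimization_power = intelligence
        # Assume constant recalcitrance for simplicity; Bostrom notes
        # it may even fall as the final components snap into place.
        recalcitrance = 10.0
        intelligence += dt * optimization_power / recalcitrance
        history.append(intelligence)
    return history

curve = simulate_takeoff()
print(f"start: {curve[0]:.1f}, end: {curve[-1]:.1f}")
```

Because optimization power feeds back into itself, even constant recalcitrance yields exponential growth, which is the core of the fast-takeoff worry.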


Bostrom argues that the development of superintelligence is inevitable, and it must be developed in a very careful, considerate, and controlled way—on the very first try—if humanity is to benefit from, or even survive, its presence.

Bostrom discusses several possible paths to superintelligence: human eugenics programs, whole brain emulation, networks (the internet gaining consciousness), and artificial intelligence, with AI as the most likely candidate.

What dangers would a superintelligence present? By its nature, a superintelligence would be more capable and powerful than any human, and perhaps than all of humanity combined. More than simply a computer, it would be the dominant force on Earth.

Even if a superintelligence were given a seemingly innocuous goal, like making as many paper clips as possible, the results could still be disastrous:

 “The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

— Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”, 2003
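The logic of the thought experiment is mechanical enough to put in code. Here is a toy, hypothetical sketch of a naive utility maximizer; the actions and payoffs are invented for illustration, and the point is simply that anything the objective omits (here, human welfare) carries zero weight:

```python
# A toy sketch of the paper-clip thought experiment: a naive utility
# maximizer whose objective counts only paper clips. The actions and
# numbers are hypothetical; the point is that anything absent from the
# utility function never influences the choice.

actions = {
    "make clips from spare wire":     {"clips": 100, "humans_harmed": 0},
    "disable the off switch":         {"clips": 500, "humans_harmed": 0},
    "convert human atoms into clips": {"clips": 10_000, "humans_harmed": 8_000_000_000},
}

def utility(outcome):
    # The objective as specified: more clips is strictly better.
    # Note that humans_harmed never enters the calculation.
    return outcome["clips"]

best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # -> "convert human atoms into clips"
```

Nothing in the agent is malicious; the catastrophe falls out of straightforward maximization over an incomplete objective.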

Superintelligence will arrive, Bostrom argues; it will be too powerful to control, and even with seemingly benign goals it could still destroy humanity. Therefore, we must learn how to control the AI, shape its motivations, and give it the correct values, so that what we care about as humans will be protected, preserved, or enacted by the superintelligence.

Superintelligences present many non-obvious problems. There seem to be no straightforward solutions to the challenges posed by building what is essentially an amoral god. How can you control the AI, or trick it into controlling itself and adopting our values and preferences? Every provisional solution only generates further unanticipated problems or considerations. The book’s discussion of these problems is very interesting, though you wonder what pitfalls the author and his colleagues, all seemingly brilliant, are forgetting.

It’s difficult to tell whether Bostrom has identified a real problem, or whether it’s as serious as he claims. The book certainly has an apocalyptic and somewhat grandiose tone; it does, after all, address the fate of humanity. But even if he’s wrong, the book is still interesting and provocative enough to be read simply as a thought experiment.

Ideas per Page: (7/10)¹, higher. The singular problem of AI control dominates the book, but there are many interesting twists and digressions.

Related Books: You Are Not a Gadget by Jaron Lanier; The Second Machine Age by Erik Brynjolfsson and Andrew McAfee

Recommend to Others: Yes, if you are interested in technology, AI, futurism

Reread Personally: No

Quotes:

44 “Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.”

66 “The process of solving a jigsaw puzzle starts out simple—it is easy to find the corners and the edges. Then recalcitrance goes up as subsequent pieces are harder to fit. But as the puzzle nears completion, the search space collapses and the process gets easier again.”

112 “An agent may often have instrumental reasons to seek better technology, which at its simplest means seeking more efficient ways of transforming some given set of inputs into valued outputs.”

115 “An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development.”

181 “Once villainy has had an unguarded moment to sow its mines of deception, trust can never set foot there again.”

211 “The obvious reason for building a superintelligence is so that we can offload to it the instrumental reasoning required to find effective ways of realizing a given value.”



¹ A measure of the number of new or distinct ideas introduced per page. 10/10 would be conceptually dense, like a textbook. 1/10 would be almost completely fluff.
