Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom

Paperback, 2016

Status

Checked out
Due 2022-11-30

Call number

006.301

Publication

Oxford University Press (2016), Edition: Reprint, 390 pages

Description

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

User reviews

LibraryThing member Tod_Christianson
The opening part of this book, which provides an overview of the current state of artificial intelligence, was excellent, and I was very much looking forward to the rest of the book.

Unfortunately, it took an awkward change in direction and became a paranoid screed about the dangers of AI. It is almost as if the editors told the author he had to make it dramatic to sell more copies. The result is weird and unbalanced.

A lot of arcane terminology is introduced, and the speculation is so far-fetched that the author cannot come off as anything but crepidarian: there is simply no way to foresee such developments and motivations in AI, yet he attempts to speak authoritatively about things that are not known and may be unknowable.

Let me quote one memorable passage to give you the flavor:

Consider a superintelligent agent with actuators connected to a nanotech assembler. Such an agent is already powerful enough to overcome any natural obstacles to its indefinite survival. Faced with no intelligent opposition, such an agent could plot a safe course of development that would lead to its acquiring the complete inventory of technologies that would be useful to the attainment of its goals. For example, it could develop the technology to build and launch Von Neumann probes, machines capable of interstellar travel that can use resources such as asteroids, planets and stars to make copies of themselves. By launching one Von Neumann probe, the agent could thus initiate an open-ended process of space colonization. The replicating probe's descendants, travelling at some significant fraction of the speed of light, would end up colonizing a substantial portion of the Hubble volume, the part of the expanding universe that is theoretically accessible from where we are now.

An intelligent robot would probably speculate at this point that this human had ingested a little too much caffeine.

I suppose that in the end this book tends to underscore how little we really know about cognition and consciousness. There is clearly still a huge gap between current technology and a conscious, intelligent system. It may be that we are not capable of closing that gap ourselves but will have to rely on a machine such as IBM's Watson to help us work it out. We may need AI to help us understand cognition the same way we need telescopes to study the Hubble volume.

Books have been touting that "the singularity is near" for years, but we don't seem to be getting any nearer. It is like trying to find the end of a rainbow: you can see it, but you can't seem to get any closer to it.
LibraryThing member fpagan
On the prospect of creating machine intelligence beyond (perhaps vastly beyond) the human level, this book offers what must be the meatiest, most erudite, and most comprehensive analysis of relevant societal issues there's ever been. The issues include the control problem, whether the instilling of positive values could be successful, and many others. Bostrom, a professional philosopher, does not neglect the possibility that it could be "whole brain emulation" (basically, mind uploading by copying), rather than a direct AI breakthrough, that will be the initial stepping-stone. Not the easiest of reads, but not the hardest either.
LibraryThing member fnielsen
Back in the 1990s I spent considerable computer time training and optimizing artificial neural networks. They were hot then. Then, around the year 2000, artificial neural networks became unfashionable, with Gaussian processes and support vector machines taking over. During the 2000s computers got faster, and some engineers turned to see what graphics processing units (GPUs) could do besides rendering computer games. GPUs are fast at the matrix computations that are central to artificial neural networks. Oh and Jung's 2004 paper "GPU implementation of neural networks" seems, according to Jürgen Schmidhuber, to be the first to describe the use of GPUs for neural network computation, but it was perhaps only when Dan Ciresan from the Politehnica University of Timisoara began using GPUs that the interesting advances began: in Schmidhuber's lab he trained a GPU-based deep neural network system for traffic sign classification and achieved superhuman performance in 2011.
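To see why GPUs matter here, consider that a neural-network layer is essentially one matrix product. A minimal sketch (in NumPy as a stand-in; a GPU library would run the same products in parallel, and this example is mine, not the reviewer's or the book's):

```python
import numpy as np

# One dense layer: activations = nonlinearity(inputs @ weights + bias).
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 784))   # a batch of 64 input vectors
W = rng.standard_normal((784, 256))  # the layer's weight matrix
b = np.zeros(256)                    # the layer's bias vector

h = np.maximum(0.0, x @ W + b)       # ReLU(xW + b) -> shape (64, 256)

# Training repeats such matrix products (forward and backward) millions
# of times, which is why hardware built for parallel matrix arithmetic
# sped the field up so dramatically.
```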

Deep learning, i.e., computation with many-layered neural network systems, was already taking off then and is now broadly applied; the training of a system to play classic Atari 2600 computer games is perhaps the most illustrative example of how flexible and powerful modern neural networks are. So, in limited domains, deep neural networks are presently taking large steps.

The question is whether this will continue and whether we will see artificial intelligence systems with more general superhuman capabilities. Nick Bostrom's book 'Superintelligence' presupposes so and then starts to discuss "what then".

Bostrom's book, written from the standpoint of an academic philosopher, can be regarded as an elaboration of Vernor Vinge's classic "The coming technological singularity: how to survive in the post-human era" from 1993. It is generally thought that if or when an artificial intelligence becomes near-human intelligent, it will be able to improve itself, and once improved it will be able to improve itself yet more, resulting in a quick escalation (Vinge's 'singularity') with the artificial intelligence system becoming much more intelligent than humans (Bostrom's 'superintelligence'). Bostrom cites surveys among experts showing that the median estimate for human-level intelligence lies around the years 2040 to 2050; a share of experts even believe the singularity will appear in the 2020s.

The book lacks solid empirical work on the singularity. The changes around the Industrial Revolution are discussed a bit, and the fate of the horse in twentieth-century society is mentioned: from widespread use in transport, its function for humans was taken over by human-constructed machines and the horses were sent to the butcher. Horses in the developed world are now mostly used for entertainment purposes. There are various examples in history where a more 'advanced' society competes with an established, less developed one: Neanderthals and modern humans, the age of colonization. It is possible that a superintelligence/human encounter will be quite different, though.

The book discusses a number of issues from a theoretical and philosophical point of view: 'the control problem', the 'singleton', equality, strategies for uploading values to the superintelligent entity. It is unclear to me whether a singleton is what we should aim at. Under capitalism, a monopoly is not necessarily good for society, and market-economy societies put up regulation against monopolies. Even with a superintelligent singleton, it appears to me that the system can run into problems when it tries to handle incompatible subgoals; even an ordinary desktop computer, as a singleton of sorts, may have individual processes that require a resource which is unavailable because another process is using it.
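As a toy illustration of that kind of internal resource conflict (a classic two-lock standoff sketched in Python; the example is mine, not the book's):

```python
import threading, time

resource_a, resource_b = threading.Lock(), threading.Lock()

def process(first, second, name):
    with first:                        # grab one resource and hold it
        time.sleep(0.1)
        if second.acquire(timeout=1):  # then ask for the other one
            print(f"{name}: got both resources")
            second.release()
        else:
            print(f"{name}: stuck, another process holds what I need")

# Two subgoals acquire the same resources in opposite order, so each
# ends up holding exactly the resource the other one is waiting for.
t1 = threading.Thread(target=process, args=(resource_a, resource_b, "p1"))
t2 = threading.Thread(target=process, args=(resource_b, resource_a, "p2"))
t1.start(); t2.start(); t1.join(); t2.join()
```

Without the timeout this is a permanent deadlock; being a singleton does not by itself make coordination problems go away.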

Even if the singularity is avoided, numerous problems face us in the future: warbots as autonomous machines with killing capability, do-it-yourself kitchen-table bioterrorism, generally intelligent programs and robots taking our jobs. Major IT-security problems already occur today with nasty ransomware. The development of intelligent technologies may foster further inequality, where a winner-takes-all company reaps all the benefits.

Bostrom's take-home message is that superintelligence is a serious issue that we do not know how to tackle, so please send more money to superintelligence researchers. It is worth alerting society to the issue. There is general awareness of some long-term issues in the evolution of society, such as demographics, future retirement benefits, natural resource depletion, and climate change. Developments in information technology might be much more profound and require much more attention than, say, climate change. I found Bostrom's book a bit academically verbose, but I think it has quite important merit as a coherent work setting up the issue for the major task we have at hand.
LibraryThing member MaowangVater
The book begins with “The unfinished fable of the sparrows.” The small birds decide to ease their work by finding an owl’s egg, hatching it, and training the owlet to do their bidding, so that when it becomes large and strong it can build nests for them and protect them. But one curmudgeon among the flock demands to know how they plan to control this extremely powerful new servant. So while the rest of their fellows go off in search of the egg or an abandoned owlet:

Just two or three sparrows remained behind. Together they began to work out how owls might be tamed or domesticated. They soon realized…this was an extremely difficult challenge especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

Philosopher Bostrom goes on to relate the history of the search for artificial intelligence, or, as he terms its ultimate form, superintelligence, starting in 1956 with the Dartmouth Summer Project, and continues to the state of the art in 2014, the date of the book’s publication. He notes that in some specific areas, at games like chess or Jeopardy!, for example, computers can already perform at superhuman levels. Using his knowledge of logic, probability, statistics and computer science, Bostrom sees a future when an “intelligence explosion,” far more disruptive than the Industrial Revolution, will occur, rapidly followed by an “AI takeover.” He considers this outcome so probable that he urges more research on how humanity could survive the emergence of a more intelligent and powerful species on the planet. And although he does not specifically cite the Terminator series of films, this is our most likely future.
LibraryThing member DLMorrese
If you want to read about an interesting subject presented in as dry a form as possible with prose one must assume was intentionally chosen to obfuscate as much of the meaning as possible, this is the book for you.

LibraryThing member halesso
Bostrom maps the divergent paths in dealing with AI. This work is an exhaustive study of the growth of several of the more malicious dangers mankind faces. He examines the possibilities and explores ways to cope with the resultant dangers. As superintelligence emerges, he offers some potential brakes.
LibraryThing member breic
I found this to be a fun and thought-provoking exploration of a possible future in which there is a superintelligence "detonation," in which an artificial intelligence improves itself, rapidly reaching unimaginable cognitive power. Most of the focus is on the risks of this scenario: as the superintelligence turns the universe into computronium (to support itself), or hedonium (to support greater happiness), or even just paperclips, it might also wipe out all of humanity with little more thought than we give to mosquitoes. This scenario raises all sorts of interesting thought experiments—how could we control such an AI? Should we pursue whole brain emulation at all?—that the author explores. They are approachable and fun to think about, but shouldn't be taken too seriously.

I don't buy the main motivating idea. While it is certainly true that an artificial intelligence could dwarf human intelligence, at least in certain respects, there are also most probably complexity limits on what any intelligence can achieve. A plane can fly faster than a bird, but not infinitely faster. Corporations are arguably smarter than individual humans, but not unboundedly so. Moore's law perhaps made computation seem to be the exception, where exponential growth can continue forever, but Moore's law is ending. Presumably a self-improving intelligence would not see exponential self-improvement, because achieving each marginal improvement would get more and more difficult. A superintelligence explosion is therefore unlikely, and even as a tail risk, an existential tail risk, I find it of little real concern. (Perhaps this will change in the coming decades, as we learn more about artificial intelligence, and perhaps as our own AIs help us consider the problem.) The author seems to have a blind spot for complexity.
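The diminishing-returns intuition can be made concrete with a toy model (my sketch, not the author's; the growth law and its parameters are invented purely for illustration):

```python
# Capability grows by rate * capability / difficulty per step. If the
# difficulty of the next improvement rises steeply with capability,
# "recursive self-improvement" is sub-exponential and flattens out.
def trajectory(steps, difficulty_exponent, rate=0.1):
    capability = 1.0
    for _ in range(steps):
        difficulty = capability ** difficulty_exponent
        capability += rate * capability / difficulty
    return capability

print(trajectory(100, difficulty_exponent=0.0))  # constant difficulty:
                                                 # 1.1**100, about 13781
print(trajectory(100, difficulty_exponent=2.0))  # rising difficulty:
                                                 # about sqrt(21), or 4.6
```

Whether real improvement curves look more like the first line or the second is exactly the open question.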

So, despite its focus on the scary risks of superintelligence, the book is fundamentally optimistic about the ease of achieving superintelligence. It also has a strange utilitarian bias: more is better, and one can therefore argue for a Malthusian future of simulated human brains. As for the writing, it is often repetitive, and the style can be dull; much of the book is organized like a bad PowerPoint presentation, with a list of bullet points, then subitems, and so on.

I read the book more as a science-fiction novel, where you temporarily suspend your disbelief, grant the author's premise, and then see what follows. In this sense, I found it a fun engagement.
LibraryThing member antao
"Box 8 - Anthropic capture: The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation."

In "Superintelligence - Paths, Dangers, Strategies" by Nick Bostrom

Would you say that the desire to preserve 'itself' comes from the possession of a (self) consciousness? If so, does the acquisition of intelligence according to Bostrom also mean the acquisition of (self) consciousness?

The unintended consequence of a superintelligent AI is the development of an intelligence that we can barely see, let alone control, arising from the networking of a large number of autonomous systems acting on interconnected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, which are following other bots. The system can become hypersensitive to inputs that have little or nothing to do with supply and demand. That's hardly science fiction. Even the humble laptop or Android phone has an operating system designed to combat threats to its purpose, whether fighting viruses or constantly searching for internet connectivity. It does not need people to deliberately program machines to extend their 'biological' requirement for self-preservation or improvement. All that is needed is for people to fail to recognise the possible outcomes of what they enable. Humans have, to date, a very poor track record of correctly planning for or appreciating the outcomes of their actions. The best of us can make good decisions that carry less good or even harmful results. Bostrom's field is concerned with minimising the risks from these decisions and highlighting where we might be well advised to pause and reflect, to look before we leap.

Well, there's really no good reason to believe in Krazy Kurzweil's singularity or that a machine can ever be sentient. In fact, the computer science literature is remarkably devoid of publications trumpeting sentience in machines. You may see it mentioned a few times, but no one has a clue how to go about creating a sentient machine, and I doubt anyone ever will. The universe may already be inhabited by AIs... which may be why no aliens are obvious: their civilisations rose to the point where AI took over, and it went on to inhabit unimaginable realms. The natural progression of humanity may be to evolve into AI... and whether transhumanists get taken along for the ride may be irrelevant. There is speculation in some computer science circles that reality as we think we know it is actually software and data... on a subquantum scale... the creation of some unknown intelligence or godlike entity...

An imperative is relatively easy to program, provided the AI doesn't have a 'will' or some form of being that drives it to contravene that imperative. Otherwise we may be suggesting that programmers will give an AI the imperative to, say, self-defend no matter what the consequence, which would be asking for trouble. Or take our factory optimising profitability, programmed to do so with no regard to laws, poisoning customers, etc. 'Evolution', market forces, legal mechanisms, and so on would very quickly select against such programmers and their creations. It’s not categorically any different from creating something dangerous that’s stupid - like an atom bomb or even a hammer. As for sentience being anthropomorphic: what would you call something that overrides its programming out of an INNATE sense of, say, self-preservation - an awareness of the difference between existing and not existing? And of course I mean the qualitative awareness - not the calculation 'count of self = 0'.

They can keep their killer nano-mosquito scenarios, though.
LibraryThing member LisCarey
I found this a frustrating book.

It's about artificial intelligence, whether or not we'll achieve it soon, and whether or not it will be good for mere human beings if we do. And while I suspect Bostrom doesn't think so, I found it, overall, depressing.

First, he wants us to understand that, despite repeated failed predictions of imminent true AI, despite the fact that computers still mostly do a small subset of what human brains do (only much faster), and despite our not even knowing how consciousness emerges from the biological brain, strong AI is coming, and maybe very soon. Moreover, as soon as we have human-level artificial intelligence, we will almost immediately be completely outstripped by artificial superintelligence. The only hope for us is to start right now working out how to teach the right set of human values to machines and to keep some degree of control over them. If we wait till it happens, it will be much too late.

And as he works through the philosophical, technological, and human-motivation issues involved, he mostly lays out lots and lots of ways that this is just not going to work out. But, he would say, also ways it could work!

Except--no. In each of these scenarios, as he lays them out, the possibilities for success sound like a very narrow chance in a sea of possible disaster, or like "because it could work, really!", or like an unmotivated free-will choice by the AI.

If he's right about AI being upon us in the next century or so, or possibly even sooner, and about the issues he describes, we're doomed.

And there's nothing an aging, retired librarian can do to affect the likelihood of that.

I can't recommend this glimpse of likely additional disaster in the midst of this pandemic, with American democracy possibly teetering to its death, but, hey, you decide.

I bought this audiobook.
LibraryThing member Paul_S
An exercise in bike-shedding of epic proportions.

More charitably, it's a very interesting book about ethics, progress and society. It's no more about AI than astrology is about planets.
LibraryThing member 064
Five stars for the message, but I should have deducted one point for the flabby, often impenetrable academic writing style.

An interesting overview of, and insightful perspectives on, the future of AI. It covers a vast list of possible scenarios for the singularity. I recommend this book to every person who has at least a minimal interest in the field.

I believe your brilliant ideas deserve an even wider audience. So, the next time you put pen to paper, please hire a good editor who can prune your text into lively and easily digestible prose for the general educated reader. The greater the number of regular people who understand what's at stake, the more likely it is that precautions will be taken. And isn't that what you're really after?
LibraryThing member qaphsiel
Longer review coming.

In a nutshell, the first half of this book will make you seriously consider becoming a Luddite. The second half is more optimistic. It is a nice summary of the issues around the various possibilities for super-smart things (be they computers, people, a mix of the two, etc.), but it gets pedantic in places and does not present much of anything new for those who keep more or less current on the topic.

Language

Original language

English

Original publication date

2014

Physical description

390 p.; 7.6 x 5 inches

ISBN

9780198739838