The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma

by Mustafa Suleyman

Other authors: Michael Bhaskar (Primary Contributor)
Hardcover, 2023


Crown (2023), 352 pages


NEW YORK TIMES BESTSELLER * An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance--from a co-founder of the pioneering artificial intelligence company DeepMind

"A fascinating, well-written, and important book."--Yuval Noah Harari
"Essential reading."--Daniel Kahneman
"An excellent guide for navigating unprecedented times."--Bill Gates

Longlisted for the Financial Times and Schroders Business Book of the Year Award * One of Kirkus Reviews' Most Anticipated Books of the Fall

We are approaching a critical threshold in the history of our species. Everything is about to change. Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy. None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side, the threat of overbearing surveillance on the other. Can we forge a narrow path between catastrophe and dystopia?

This groundbreaking book from the ultimate AI insider establishes "the containment problem"--the task of maintaining control over powerful technologies--as the essential challenge of our age.

User reviews

LibraryThing member aadyer
An interesting and very much one-sided look at the development of technology and power in the next century. It may well be that the cautions and concerns raised by Suleyman are well founded, but he does at least make the effort to look at the other side. A very interesting book that probably deserves wider reading. Worth looking at.
LibraryThing member Zare
This is a very interesting book about coming technological advances and the effect they will have on our lives. It is written from the perspective of a person who pioneered AI research by starting companies that worked on the bleeding edge of this technological advancement. As such, for me the book echoes the despair of a businessman trying to calm everybody down: AI and other emerging technologies are not so dangerous, but they need to be controlled and contained [because they are dangerous] - but please do not take it as if they are inherently bad. And I have to admit I was turned off by a parlor trick used in the book's opening (more on this below).

I have to admit that the book is organized in a very lecture-like way. As the discussion unfolds, all the questions I came up with were answered in detail in follow-up chapters (for example, how to verify the data provided to the AIs (more on the terminology below), and how data is used for decision making, and by whom).

The view given by the author is, in the end, rather grim and, as seems to be the case with almost every science book these days, full of contradictions. To name a few: AIs are powerful tools, they are already here; no, they are coming in a few decades; no, they will be with us very soon (etc.). Population growth is a big problem, followed by Chinese scientists predicting they will have barely 600 million people by 2050 due to ageing alone (I mean, what? Do you talk among yourselves at all?). Gain of function is benign (what? Then why do we treat people bashing others on the head with rocks as murderers - they are merely proving that adding speed and force to heavy objects can cause death, right? Why is this different from combining a deadly pathogen with a high level of reproduction?). Various details on previous epidemics show the last one was not different in any way, but that society's capabilities have completely diminished in the meantime; and so on.

While the author does note that true AI is a long way off, what is treated as AI in the book (with use of all the buzzwords such as LLMs, etc.) is something that has been with us for at least the last 60 years - expert systems. The only difference is the high-powered modern hardware used; this, coupled with the sensationalism that has become dominant in our news and media and our natural tendency to anthropomorphize tools and the environment, causes us to think something is "alive".

I think the author has covered the ground pretty well. I guess to be a "modern" book, Ukraine war stories had to be mentioned (I will try to be polite here because I just don't have the strength to discuss this; this is the third book I have read that won't age well in this regard). There are some other elements that show a rather naive approach to life and the way technology propagates (I guess when one runs a tech company, everybody else is looked at through the same lens), but in general I agree that the nation-state is the only mechanism capable of absorbing the incoming changes.

The problems I see are more related to society, and the current state of affairs does not make me an optimist.
First, globalism - this has proved to be a very dangerous concept that started as an economic booster but ended as a political vice (the micro edition is the EU, which suffers from the same symptoms). This is a major danger for nation-states because, unfortunately, it has infiltrated them and works against them. If you look at the current political situation, we are encountering a very strong anti-human wave in which some weird moves are being made that will have a great effect on populations everywhere (take food production and the severe limitations imposed on it at the moment). Why? I don't think anybody has an idea. The problem is that the goal seems to be to reverse technology for the common people (no or highly restricted travel, limited general mobility, limited access to goods and services), while the elites are of course not affected.
Second, the high level of polarization and echo-chamber creation. In a time when people are not interested in investing the time to actually learn about anything (imagine history!) and are focused only on two-minute articles and "experts", do we truly need artificial generators of literature and information? Do we need cliché texts based on thousands of existing works and regurgitated over and over again? And that is without even asking which texts these copies are based on. Am I to believe that activists in these companies will not filter the data based on current trends and what they see as "the right information"? This is very, very scary, and will bring back the days of closed groups going through "forbidden" literature to (re)learn various subjects. Not to mention the shutting down of everyone who thinks differently and the destruction of people's careers for having views of their own (and all of this is highly automated and uses AI-driven systems).
Third, the concept of trusting AI. Given the above, how can it be trusted?
Fourth, context - are AI systems aware of the context of the data they are loaded with? How can a system used by a government understand what is meant by global domination as a goal? If a goal is marked as global domination, is it so inconceivable to imagine that very drastic measures - from aggressive economic actions to outright war - would be orchestrated in a sequence of steps so fast that the party using the AI's suggestions has no time to think and change direction?
Fifth, governments - they are not governments (maybe on the local level, but not on the national level). They are managers, riding on the indisposition of people who decide not to vote because they are disappointed (stupid), getting appointed to offices where they just pull off term after term without doing anything for their own nation (for all intents and purposes, these governments are just part of a huge international corporation). Given their complete lack of skills, imagine they start believing various data models and predictions and base their responses on them. Wait, this already happened a couple of years back - right? Horror!

In my opinion, humanity has become a purely consumer society. Removing the actual fruits of labor from people and just plugging them into weird daily loops, where they can see they do not actually count for anything, won't make people strive for greatness or accomplishment. If government is not prepared to handle more and more people with nothing to do (and no, I do not think universal income is a solution; raising generations on welfare will only create disastrous problems - people need meaning, need to be active, not to devolve), I am afraid we are looking at a very long period of wars, because war will be seen as a safety valve for all the people the government sees as a burden. And this has the potential to collapse everything.

We live in interesting times (yes, you can also take this in the sense of the ancient proverb, which is not talking about good things); lots of things are happening simultaneously, and I have a bad feeling that governments are already using these imperfect AI systems to plan their actions. This only speeds up the process, because escalation seems to be the only way forward - nobody is doing any alternative checking.

Emerging technologies are tools, and they are only as good as the people wielding them. This is where I have little faith - if somebody compared humanity from only 60 years ago with today, I guarantee they would think these were two different species. People need to be "reawakened", set to learn, and taught to take the time to learn, not just consume and forever run in a never-ending loop under the light of a monitor. It takes 9 months to physically develop, go through all the physical changes of our species, and come into the world, and then 18 years to mentally develop - to go through all the mental stages of our species - before one is considered fully grown. Learning about the world around us is not a single person's purpose; it is a generational task - learning from the past is what matters. AI can help speed things up, but only if it works on an objective set of data, and only to a degree; people need to be in charge, solving tasks one at a time, once the consequences are understood. AI's purpose is to bring us more quickly to each generation's starting point. All this development does not need to take place in a decade or less, no matter how sexy that might sound - shortcuts are roads to destruction.

Interesting book, highly recommended.
LibraryThing member waldhaus1
Disquieting and optimistic at the same time.
Clearly AI is at the center of the author’s world and he believes it is rapidly becoming the center of everyone’s world. While he sees much reason for good coming from AI he sees reason for concern because we neither fully understand it nor control it.
He suggests trying to develop a means to contain the technology, acknowledging that may be very hard.
The book left me both glad and sorry that I am getting old. The next few years may be quite scary and at the same time may well be history's most interesting time. Curiosity about the future is one thing that makes me want to live as long as possible.




0593593952 / 9780593593950