There are things I didn’t like about this book. There are things in the book I disagree with. I’m giving it 5 stars anyway because it’s the first book I’ve read that doesn’t mince words about the reality of the situation. The title may sound like hyperbole, but the thesis of the book is undeniable: If we keep doing what we’re doing, a bad outcome for all humans is extremely likely.
The topic is speculative by nature. We won’t know what superintelligent AI will do until it exists. Similarly, we can’t be certain about the consequences of nuclear war until it happens, or of climate change until we’ve continued producing greenhouse gases for several more decades.
However, the empirical evidence we do have all points in one direction. Current AIs pursue goals their developers didn’t intend. Current AIs attempt to gain power. Current AIs intentionally deceive humans. The commercial AI labs tout their safety research while neglecting to mention that that very research demonstrates their own models are misaligned.
I happen to believe it might be possible to create an aligned superintelligence that doesn’t cause our extinction, but that is only a possibility, and one that seems increasingly unlikely as each more advanced model exhibits the same dangerous problems as its less advanced predecessors.
Our default assumption needs to be that at some point an AI will have the ability to seize control of the world from humans, and that if it does, the outcome will be bad for everyone, including the people who created it. Governments need to require that AI labs demonstrate why this won’t happen rather than take their word for it. While a global research moratorium is unlikely, there’s no reason governments can’t collaborate on this particular issue.
