I’ve been working in the AI space for over three decades, and this was a difficult book to actually read all the way through. Not because the content was disturbing and scary, but because the arguments rely entirely on fictional storytelling. There clearly are real risks in the current unfettered AI race, which is accelerating AI capabilities with little to no focus on safety. But this book uses only fictional thought experiments as its core proof of the inevitable danger of mass extinction from ASI. It also makes no effort to highlight the dangers of human misuse of AI even without building ASI, which has a much higher net probability of causing serious harm to our civilization.
Its only suggested solution to AI risk is to tell the world to just stop all AI research, which is impossible to implement or even monitor. There are so many other things we should be recommending to help protect the world from the very real dangers of misuse by bad actors, rogue states, and even the unintended consequences of existing AI deployed by well-meaning labs and nations. But none of that is even discussed.
I applaud the goal of encouraging the world to develop AI in a safer, more responsible way (which is much needed). However, the format and content of this book amount to raw fear-mongering that won’t and can’t be taken seriously or acted upon. It’s more likely to backfire, causing the AI safety community to be seen as unscientific eccentrics and setting back the cause of this important field.
