This is the most important book I’ve read in years. To be clear, this isn’t about the dangers of AI in general or ChatGPT. It’s about a scenario we could be getting very close to: someone builds an Artificial Super Intelligence (ASI), and that leads – very quickly – to humanity’s end. This is not sci-fi. And it’s not 1,000 or even 100 years from now – ASI could be developed in the next ten years. And the authors contend that once it’s developed, there may be no way of ever stopping it. They put the danger of an ASI up there with nuclear war as one of the fastest human extinction scenarios.

Seriously, read this book! It’s written in a way that even non-techie people like me can understand. And if you have adult children, or kids old enough to deal with the implications, suggest they read it, too.

This is not a Left/Right issue. This is more of a World War II, all-hands-on-deck moment – a situation where only a global consensus can prevent the worst-case scenario. Even if you think this sounds alarmist, read the book anyway and consider what these AI experts are desperately trying to warn us about.