I wanted to like this book, because I think the subject is important and the probability of AGI becoming an existential risk is far from trivial. After listening, I'm not sure who this book is for – it's poorly written, with meandering, contrived examples.

The anecdotes and analogies are excruciating: comparing all AI researchers to alchemists, alien birds collecting stones to illustrate the orthogonality of goals and outcomes, and a detailed account of the Chernobyl disaster and the RBMK reactor's positive void coefficient to illustrate control-stability problems (a well-understood topic in control theory).

These are drawn out far too long and offer little insight into the challenges posed by nascent AGI.

The authors only briefly touch on potential solutions. Their proposal amounts to a magical One-World-Government with a mandate to nuke any state or non-state actor that exceeds a certain GPU count or model-size threshold. Such diplomacy reads as if written by academics with no experience of (or interest in) how international politics is actually done.

Instead of buying this book, just listen to a podcast or two with the authors; they are much more compelling.