I am a machine learning engineer, and I am very sympathetic to concerns about existential risk. Yudkowsky repeatedly asserts that his arguments proceed from mathematical fact to something very close to certainty, in the same way you can be close to certain that a lottery ticket will lose.

The thing is, some of his assumptions are far from certain, even among people who share his concerns. To paper over this, he presents emotionally argued “parables” in which he makes people who disagree with him look like idiots. This is a rhetorical red flag; a more compelling case would need no such contrivance.

The consensus among experts is that existential risk is real and that the AI arms race is as insane as anything from the Cold War, but that risk takes many forms, and it is anything but clear that goal-directed behavior leads inexorably to mass homicide.