I am most worried about the structural problems AI poses for society (e.g. unemployment, loss of meaning), but I am also sympathetic to concerns about the extinction threat. However, I find the arguments in this book wholly unconvincing. I accept the claim that "this could happen"; what I don't see is the leap to "this definitely will happen". The same style of argument could imply that we should be afraid to open our front doors because doing so could cause the earth to combust: I could tell fanciful allegories about why I'm convinced of this, about Ape and Scorpion gods who didn't believe it would happen and were then dumbfounded when the inevitable chain of events played out, and conclude that the world will definitely explode the next time I open my front door. Whether such stories are convincing comes down to a fairly fuzzy and subjective human judgement about whether they are believable: I'm saying that Yudkowsky and Soares' stories are not.
On top of this, I find the aggressive and self-righteous tone off-putting. I wonder whether this is intentional on the authors' part, meant to instil fear in policymakers; if it proves effective, I will respect the project, because I am genuinely scared by this technology. However, I think it more likely that the opposite will happen, and that serious people will dismiss the doomsday messaging as cultish.
