Note to reader: if the idea of “AI alignment” rings hollow to you, feel free to skip this one; it won’t interest you. Recently, Gwern wrote a story about an AI taking over the world. While well thought-out and amusing, it is unrealistic. But people have been using it to reinforce their fear of “unaligned AGI killing all humans,” so I think it’s dangerous, and it’s worth going through it line by line to see why its premise is silly, and why each step in his reasoning, taken individually, is impossible.