It Looks Like You’re Trying To Take Over The…

Note to reader: if the idea of “AI alignment” rings hollow to you, feel free to skip this one; you’ll find it uninteresting. Recently, Gwern wrote a story about an AI taking over the world. While well thought-out and amusing, it is unrealistic. However, people have been using it to reinforce their fear of “unaligned AGI killing all humans,” which I think is dangerous, so it may be worth going through the story line by line to see why its premise is silly, and why each step in its reasoning, taken individually, is impossible.

Read →