Can a superintelligence self-regulate and not destroy us?

5 mins | May 30, 2018

Can a superintelligent AI, one that is smarter than we are and capable of destroying us, choose not to destroy us? Joshua Gans, writing in Digitopoly, explains and then assesses the paperclip apocalypse theory, which contends that a superintelligent artificial intelligence given the sole goal of making paperclips, for example, would eventually come to see us as a threat to its paperclip-making goal and annihilate us all.

From Digitopoly