Can a superintelligence self-regulate and not destroy us?

5 mins | May 30, 2018
Can a superintelligent AI, one that is smarter than we are and capable of destroying us, choose not to? Joshua Gans, writing in Digitopoly, explains and then assesses the paperclip apocalypse theory, which contends that a superintelligent artificial intelligence given the sole purpose of making paperclips, for example, will eventually see us as a threat to its paperclip-making goal and annihilate us all.