The people building AI systems think those systems might cause human extinction.
This is not a fringe worry. It is the field's best guess.
We're working to prevent that. Our mission is to ensure that AI development is safe, secure, and aligned with human values, and that it doesn't kill everyone.
Send your counterarguments to our chatbot. Learn why AI would threaten humanity, and what we can do about it.
Chat now
Essential reading: research papers, government reports, and expert analysis.
Why we need to pause AI development until we know how to build superintelligence safely, and how to make that happen.
Read now

Official report commissioned by the U.S. government warning that advanced AI could "pose an extinction-level threat to the human species".
Read report

Technical breakdown from MIRI on why creating superintelligent AI would lead to human extinction by default, and what would be necessary to avert that.
Read now

A scenario showing how a race to superintelligence could unfold over the next few years.
View scenario

A careful argument for why, with current technology, we can't control what a superintelligent AI would do, and why humanity will end if we build one before we know how to make it safe.
Also endorsed by Yoshua Bengio, Vitalik Buterin, Scott Aaronson, Bruce Schneier, and many other leading scientists and technologists.
We can still change course. Here's how you can make a difference.