The people building AI systems think they might cause human extinction. This is the field's best guess.
We're working to prevent that. Our mission is to ensure that AI development is safe, secure, and aligned with human values, and that it doesn't kill everyone.
We built a tool that explains the AI extinction threat in plain English. Send it your counterarguments and questions. Learn why AI would threaten humanity, and what we can do about it.
From researchers, government reports, and expert analysis.
Why we need to pause AI development until we know how to build superintelligence safely, and how to make that happen. Read now →
Official report commissioned by the U.S. government warning that advanced AI could "pose an extinction-level threat to the human species". Read report →
Technical breakdown from MIRI on why creating superintelligent AI would lead to human extinction by default, and what would be necessary to avert that. Read now →
A scenario showing how a race to superintelligence could unfold over the next few years. View scenario →
A careful argument for why, with current technology, we can't control what a superintelligent AI would do, and why humanity will end if we build one before we know how to make it safe.
We can still change course. Here's how you can make a difference.