AI Safety and Governance Fund
501(c)(4) Nonprofit Organization

AI could end humanity

We're working to prevent that. Our mission is to ensure that AI development is safe, secure, and aligned with human values, and that it doesn't kill everyone.

The people building AI systems think those systems might cause human extinction.

This is the field's best guess.

I. Start Here

Disagree or have questions?

We built a tool that explains the AI extinction threat in plain English. Send it your counterarguments and questions, and learn why AI could threaten humanity and what we can do about it.

Chat with our AI safety bot

No technical background needed. Ask anything.

Start chatting

II. Learn More

Essential reading

From researchers, government reports, and expert analysis.

Recommended Reading • NYT Bestseller

If Anyone Builds It, Everyone Dies

By Eliezer Yudkowsky and Nate Soares

A careful argument for why, with current technology, we can't control what a superintelligent AI would do—and why humanity will end if we build one before we know how to make it safe.

Max Tegmark (MIT): "The most important book of the decade"
Ben Bernanke: "A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity"

Also endorsed by Yoshua Bengio, Vitalik Buterin, Scott Aaronson, Bruce Schneier, George Church, Grimes, and many leading scientists.

Get the book
Available at libraries • Free copy on request

III. Take Action

Help prevent extinction

We can still change course. Here's how you can make a difference.

Spread awareness

  • Share these resources with friends and family
  • Contact your representatives about AI safety
  • Volunteer with us
Get started

Fund this work

  • Support public awareness campaigns
  • Enable policy advocacy
  • Help us scale what's working
Donate