AI Safety and Governance Fund
501(c)(4) Nonprofit Organization

AI could end humanity

We're working to prevent that. Our mission is to ensure that AI development is safe, secure, and aligned with human values, and that it doesn't kill everyone.

Many of the people building AI systems believe their creations might cause literal human extinction.

I. Our Tool

Disagree or have questions? We built something for you.

We created a chatbot that explains why AI threatens humanity, and what we can do about it. It makes rigorous arguments in plain English, and the data below shows that people are convinced.

Chat with our AI safety bot

Send your questions and counterarguments. No technical background needed.

Start chatting

This approach works

Average score: 4.46 (0-10 scale)
Median score: 5 (0-10 scale)
Ad click-through: ~5% (up to 11%)

After each chat, we ask: "How helpful was this?" on a scale from "Not at all" (0) to "Completely changed my mind" (10). The average score of 4.46 means users moved nearly halfway toward having their minds completely changed. Our ads achieve 5-11% click-through rates and 75% upvote ratios on Reddit, numbers far above typical industry benchmarks.

We need to scale this

The data shows it works. Help us reach more people before it's too late.

Support our work

II. Essential Reading

The case against superintelligence

Recommended • NYT Bestseller

If Anyone Builds It, Everyone Dies

By Eliezer Yudkowsky and Nate Soares

A rigorous argument for why, with current techniques, we cannot control what a superintelligent AI would do, and why humanity will end if we build one before solving that problem.

Max Tegmark (MIT): "The most important book of the decade"
Ben Bernanke: "A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity"
Stephen Fry: "The most important book I've read for years"
Get the book
Available at libraries • Free copy on request

III. More Resources

Learn more