AI might literally kill everyone if anyone is allowed to make it superhumanly smart before we know how to do that safely. To prevent extinction, we're building institutional support and public awareness for AI safety. It's working, and we should do more of it.
Explaining the problem to decision-makers creates allies who can advocate for treaties and legislation. Explaining it to the general public makes the threat salient and increases the cost of interference from irresponsible actors.
The AI industry, guided by terrible incentives, has already poured hundreds of millions of dollars into super PACs to fight any AI regulation. But even members of Congress bullied by the PACs and yelled at by AI CEOs would prefer that humanity not end.
Our advantage: when both sides have a chance to present their arguments, our side wins.
We figure out how to explain this problem effectively, using targeted advertising, automated persuasion about the threat, and idea-diffusion modeling, then scale what works. We also provide strategy and communications support for allied organizations like CAIS and MIRI.
Our chatbot makes valid and rigorous arguments about why users should care about the threat that AI might literally kill everyone. People are convinced.
The Reddit benchmark for ad CTR is 0.2–0.6%, and most ads receive more downvotes than upvotes. Our results are more than 10x better.
Our long-form post ads achieve extraordinary numbers, and judging by the comments, we're convincing people.
Currently, no one receives a salary, even those working full-time. All funding goes to communications: ads, LLM inference, and copies of "If Anyone Builds It" for influential people. The data shows our approach works: we are already persuading people. Scaling what we have is a good idea, and we're excited about making it even more efficient.