
Center for AI Safety

The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
San Francisco, CA
safe.ai
A 501(c)(3) nonprofit, EIN 88-1751310

Fundraisers

Prevent Artificial Life

The world's leading AI researchers – including Turing Award winners Geoffrey Hinton and Yoshua Bengio – have signed a stark warning: artificial superintelligence could cause human extinction within the next decade. The same companies racing to build ASI admit they cannot control it. Current AI systems already exhibit unexpected behaviors, manipulate humans, and operate beyond our understanding. Without immediate action, we're conducting an experiment that risks everything humanity has built.
Raised: $100
Next milestone: $150

Become a supporter!

Donate or start a fundraiser