The Center for AI Safety (CAIS — pronounced 'case') is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence (AI) has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Fundraisers
Prevent Artificial Life
The world's leading AI researchers, including Turing Award winners Geoffrey Hinton and Yoshua Bengio, have signed a stark warning: artificial superintelligence (ASI) could cause human extinction within the next decade. The same companies racing to build ASI admit they cannot control it. Current AI systems already exhibit unexpected behaviors, manipulate humans, and operate beyond our understanding. Without immediate action, we are conducting an experiment that risks everything humanity has built.
- Raised: $100
- Next milestone: $150
Become a supporter!
Donate or start a fundraiser