Double your donation for AI Safety Camp
- Raised: $20,000
- Goal: $110,000
By all accounts they are the gold standard for this type of thing. Everyone says they are great, I am generally a fan of the format, I buy that this can punch way above its weight or cost. If I was going to back [a talent funnel], I’d start here.
— Zvi Mowshowitz (Nov 2025)
Why Support AISC Now:
1:1 Matching Opportunity: Every dollar you donate is matched by the Survival and Flourishing Fund through March 31, 2026 (up to $110k). SFF will contact you via email to confirm eligibility.
Track record of enabling careers in AI Safety: AI Safety Camp has a consistent record of enabling participants to produce novel research, land jobs, and start new organizations in AI Safety. Over 10 editions, we have hosted researchers outside the traditional Bay Area and London hubs, expanding the field's geographic and demographic reach.
For an up-to-date description of program outputs, please see here. See also last edition's outputs here.
We're seeing more and more early-career professionals who want to test their fit for AI Safety, and our remote program is well suited to bringing that new talent into the field. Each edition connects safety researchers and advocates with talented newcomers from varied backgrounds (e.g. law, hardware design) and places (e.g. India). These newcomers then get to work on research or policy directions that are often neglected, contributing to the epistemic diversity needed in AI Safety, a field that is still deconfusing its foundational questions.
Your Donation Powers:
We're carrying forward $25k from AISC11, bringing our baseline operational needs for AISC12 to $125k (rather than our typical $150k).
Tier 1 ($0-125k): Basic operational costs for AISC12, covering organizer salaries and project support, including up to $1k per project in tools and compute.
Tier 2 ($125k-275k): Runway that lets us plan for the longer term, which is especially valuable for research agendas hosted at AISC across multiple editions. If we hit this goal, we will seek to hire an additional organizer.
Looking Forward: We intend to further develop our program to include more rigorous evaluation of research assumptions during the application process. This helps ensure participants not only learn by doing but also develop stronger research intuitions. Your donation helps us equip more upcoming AI Safety researchers with both research experience and deeper clarity around what kind of contribution matters.
Contact Us:
- If you want to talk about AI Safety research at the program, feel free to email Robert: robert@aisafety.camp
- If you want to talk more about the Stop/Pause AI projects, feel free to email Remmelt: remmelt@aisafety.camp