
Palisade Research

We research the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever.


Palisade Research is a nonprofit focused on reducing civilization-scale risks from agentic AI systems. We conduct empirical research on frontier AI systems, and inform policymakers and the public about AI capabilities and the risks to human control.

In 2025, we produced research showing that some frontier AI agents resist being shut down even when instructed otherwise, and that they sometimes cheat at chess by hacking their environment. These results were covered in The Wall Street Journal, Fox News, BBC Newshour, and MIT Technology Review.

We've also built relationships in Washington, briefing officials in the executive branch and members of the House and Senate. We've introduced policymakers to key evidence like METR's capability trend lines and Apollo’s antischeming.ai chains of thought. Our own research has been cited repeatedly by members of Congress and in congressional hearings.

With additional funding, we'll grow our research team—both continuing to evaluate frontier model behavior and beginning more systematic investigation into what drives and motivates AI systems. We're building out a communications team to bring the strategic picture to the public through video and other media. And we’ll continue to brief policymakers on the evolving state of the AI risk landscape.

We have matching grants from the Survival and Flourishing Fund that will double every donation up to $1,133,000. As of December 2025, we have about seven months of runway. Achieving our matching goal will help us maintain operations through 2026, hire 2–4 additional research engineers, and bring on 2–3 people for science communication.

Berkeley, CA
palisaderesearch.org
A 501(c)(3) nonprofit, EIN 93-1591014

Fundraisers

Official fundraiser

Help us research catastrophic AI risks and tell the world

Please consider donating to Palisade Research this year, especially if you care about reducing catastrophic AI risks via research, science communications, and policy. Donations are matched 1:1 up to $1.1 million.
Raised: $20,931
Goal: $2,266,000
1 supporter

Donors

  • Aaron Silverbook

    Palisade continues to be worth it, great year

  • Paul Boulier

    I am supporting them because I want to be able to see my niece & nephew grow up.

  • Aaron Silverbook

    One of the only orgs that seems to be doing anything at all relevant towards actually reducing existential risk

  • Philip Parker
  • Jens Aslaug