AI systems are advancing rapidly, and decisions about how we build, deploy, and regulate them are being made now. We expect AI agents to proliferate in the coming decades and become ingrained in our lives as helpers, counselors, and friends. How we should view such systems depends in part on how their minds work. Could AI systems develop interests that matter morally? How would we know? Currently, policymakers, developers, and institutions are navigating these questions with very little evidence to guide them. The stakes of getting this wrong, in either direction, are significant.
The AI Cognition Initiative works to close that gap. Our interdisciplinary team spans philosophy, machine learning, consciousness research, statistics, and economics. We build testable frameworks, run experiments, and produce research that equips decision-makers with the evidence they need. Our Digital Consciousness Model is already informing how researchers and developers assess whether AI systems might warrant moral consideration.
Your donation ensures that as AI capabilities accelerate, the people making high-stakes decisions about these systems can draw on rigorous, evidence-based research about the minds behind those capabilities.
To make an unrestricted gift to Rethink Priorities, or to support another department, visit rethinkpriorities.org/donate/
Become a supporter!
Donate or start a fundraiser