Dedicated to improving our understanding of AI to mitigate its risks.

Our mission

AI systems will soon be integrated into large parts of the economy and our personal lives.

While this transformation may unlock substantial personal and societal benefits, it also carries vast risks. We think some of the greatest risks stem from “scheming” AIs, i.e. advanced AI systems that covertly pursue misaligned objectives. Our goal is to understand and evaluate the emergence of scheming well enough to prevent the harms that scheming AIs might cause.

About us

Apollo Research is focused on reducing risks from dangerous capabilities in advanced AI systems, especially scheming behaviors.

We design AI model evaluations and conduct technical research to better understand state-of-the-art AI models. Our governance team provides global policymakers with expert technical guidance.


What we do
Our research

Model Evaluations

We develop and run evaluations of frontier AI systems. Our expertise is in LM agent evaluations for strategic deception, evaluation awareness, and scheming. We also conduct fundamental research into the emergence of scheming and into potential mitigations.

Governance & Policy

We support governments and international organisations in developing technical AI governance regimes. Our expertise includes building a robust third-party evaluation ecosystem, effectively regulating frontier AI systems, and establishing standards and best practices.

Consultancy

Additionally, we provide consultancy services: building responsible AI development frameworks, designing research programs, conducting ecosystem mapping and literature reviews, and more.

Our partners
Frontier labs, multinational companies, governments, and foundations partner with Apollo Research.
“One thing you might imagine is testing for deception for example, as a capability. You really don’t want that in the system because then you can’t rely on anything else it’s reporting. So that would be my number one emerging capability I think that would be good to test for.”
Demis Hassabis, CEO of Google DeepMind
Contact

For collaborations and other inquiries, please get in touch.

Currently, we are looking for collaborators in the broader AI governance, policy, and strategy sphere, and for partnerships with leading AI developers for model evaluations.