About

Our Mission

AI systems will soon be deeply integrated into the global economy and everyday life. While this brings personal and societal benefits, it also introduces serious risks. We believe some of the most dangerous risks come from scheming frontier AI: advanced AI systems covertly pursuing misaligned objectives.

 

We believe that robustly mitigating scheming risks requires a deep scientific understanding of how scheming emerges. Our goal is to secure frontier AI systems across development, deployment, and governance.

Our Journey
2023

First movers on frontier risks

 

We identified scheming AI as a serious risk early and built our research around it. We:

  • presented at the inaugural AI Safety Summit in the UK.
  • shared policy recommendations with the UK’s Frontier AI Taskforce.
  • established ourselves as leaders in AI evaluations.

Our work was covered by the BBC and Bloomberg, and presented before the US Senate.

2024

Detecting scheming at the frontier

 

Scheming moved from theory to evidence. We:

  • partnered with OpenAI to test their o1 model before public deployment.
  • published the first evidence that frontier models can scheme in context.
  • engaged with policymakers at the EU AI Office, UN Advisory Body, and US Congress.

Our findings shifted the field and were covered by TIME and MIT Technology Review. Our research entered the US Congressional record.

2025

Shaping research agendas

 

We moved from identifying risks to shaping how the world addresses them. We ran evaluations for all major labs, and we:

  • partnered with OpenAI to study anti-scheming interventions on frontier models.
  • published the first analysis of internal AI deployment risks.
  • created a loss-of-control playbook to help governments prepare for advanced AI threats.

Our work was featured in The New York Times, Nature, and The Economist.

2026

Scaling for what’s ahead

We are now a Public Benefit Corporation to achieve our mission of safe and secure AI development and deployment. We are:

  • building a science of scheming to predict and prevent scheming risks.
  • advising governments and developing policy recommendations on scheming AI.
  • building AGI safety tools to monitor and secure frontier AI agents.

Our team
Marius Hobbhahn
CEO / CO-FOUNDER
Chris Akin
COO
Dr. Charlotte Stix
HEAD OF AI GOVERNANCE
Alexander Meinke
Member of Technical Staff
Jérémy Scheurer
Member of Technical Staff
Rusheb Shah
Member of Technical Staff
Matteo Pistillo
SENIOR AI GOVERNANCE RESEARCHER
Alejandro Ortega
POLICY RESEARCHER
Bronson Schoen
Member of Technical Staff
Axel Højmark
Member of Technical Staff
Andrei Matveiakin
Member of Technical Staff
Felix Hofstätter
Member of Technical Staff
Joping Chai
PEOPLE & OPERATIONS MANAGER
Annika Hallensleben
AI POLICY RESEARCHER
Alex Lloyd
Member of Technical Staff
Teun van der Weij
Member of Technical Staff
Alex Kedryk
Member of Technical Staff
Glen Rodgers
Member of Technical Staff
Jeremy Neiman
Member of Technical Staff
Umar Akhtar
Senior Finance Manager
Jasvin Kaur
OPERATIONS GENERALIST
Mia Hopman
Member of Technical Staff
Zak Walters
Member of Technical Staff
Theodore Ehrenborg
Member of Technical Staff
Kyle Dai
Member of Technical Staff
Herbert Tanujaya
Member of Technical Staff
Dylan Bowman
Member of Technical Staff
Jannes Elstner
Member of Technical Staff
Victor Gillioz
Member of Technical Staff
Ezra Newman
Member of Technical Staff
Srdjan Miletic
Member of Technical Staff
Zen van Riel
Member of Technical Staff
Daniel Kokotajlo
Board of Directors
David Duvenaud
Board of Advisors
Yan-David Erlich
Board of Advisors
Owain Evans
Board of Advisors

Interested in joining our team?

See our current open positions on our careers page.

Our Blog

Apollo Research is becoming a PBC

Apollo is spinning off from our fiscal sponsor into a Public Benefit Corporation (PBC). We think this is the best way for us to achieve our mission of reducing extreme risks from frontier AI systems.

20/01/2026

Our Norms on Security, Science Communication and Conflicts of Interest

We outline Apollo Research’s norms on security, science communication, and conflicts of interest, detailing how we maintain scientific integrity, manage sensitive information, and communicate our work responsibly.

26/11/2025

Apollo 18-Month Update

Apollo Research is now 18 months old. You can read our latest update here.

13/12/2024
All posts

Apollo Is Adopting Inspect

13/11/2024

The First Year Of Apollo Research

29/05/2024

Theories of Change for AI Auditing

13/11/2023

Security at Apollo Research

26/07/2023

Announcing Apollo Research

29/05/2023