Reducing risks from scheming frontier AI

About Us

Our Mission

As AI capabilities increase, some of the greatest risks will come from “scheming” AI: advanced systems that covertly pursue misaligned objectives.

Our goal is to understand and evaluate the emergence of scheming, to prevent the harms that scheming AI might cause, and to build tools that make the deployment of powerful AI systems safer.

Science

We conduct fundamental research into the science of scheming: how it emerges, and how to detect and mitigate it. We run pre-deployment evaluations of frontier AI systems to detect strategic deception, evaluation awareness and misaligned behaviour.

Governance & Policy

We support governments and international organisations by developing technical AI governance regimes, enabling effective regulation of frontier AI systems and establishing standards and best practices.

Products

We are building AGI safety tools to monitor and secure frontier AI agents. Our first product, Watcher, is an automated oversight layer that detects failure modes in real time.

Our Products

Watcher

AI agent monitoring tool by Apollo Research


Watcher catches dangerous coding-agent behaviour before it becomes an incident. It works with Tailscale Aperture to detect:

  • Insecure code execution
  • Data exfiltration
  • Agent manipulation
  • Emergent risks
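An automated oversight layer of this kind can be illustrated with a minimal sketch. Everything below is a hypothetical illustration under simple assumptions (a hand-written rule set, a blocking wrapper), not Watcher's actual API or detection logic:

```python
# Minimal sketch of an oversight layer that reviews coding-agent actions
# before execution. All names and patterns here are illustrative only.
import re

# Hypothetical rule set: regex patterns suggesting risky agent behaviour.
RISK_PATTERNS = {
    "insecure_code_execution": re.compile(r"\beval\(|\bexec\(|curl .*\|\s*sh"),
    "data_exfiltration": re.compile(r"\bscp\s+.*@|POST\s+https?://"),
}

def review_action(action: str) -> list[str]:
    """Return the risk categories an agent action triggers (possibly none)."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(action)]

def guarded_execute(action: str, execute):
    """Oversight wrapper: block flagged actions, pass safe ones through."""
    flags = review_action(action)
    if flags:
        raise PermissionError(f"Blocked agent action, flagged as: {flags}")
    return execute(action)
```

A real system would replace the static rules with learned detectors and stream agent logs in real time; the sketch only shows the wrap-review-block pattern.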

Our Partners

Contact Us

For collaborations and other inquiries, please get in touch.

We’re interested in partnering with other organisations. If you have a large volume of AI agent logs and would like to analyse them automatically at scale, please reach out.