26/11/2025

Our Norms on Security, Science Communication and Conflicts of Interest

At Apollo Research, we ground our research in concrete policies and day-to-day practices that aim to preserve our mission and scientific integrity. We encourage rigor and clarity across our workstreams, supported by clear conflict of interest policies and strong publishing and security norms. In this blog post, we describe some of these norms in more detail.


Our Conflict of Interest Policy

We believe in and support the development of a thriving and functional evaluation ecosystem, one that ambitiously models itself after larger, well-established industries. As part of that, we advocate that evaluators should be fairly compensated for their work. Until the ecosystem matures to the point where other funding mechanisms become viable, we generally request that the parties for whom we conduct work compensate us at fair market value. We share some details of our internal conflict of interest policy below.

  1. We do not accept work where compensation is contingent upon the outcome of our work.
  2. We do not accept miscellaneous grants or investments from organizations for which we run, or expect to run, evaluations outside of those contracts.
  3. We recuse any individual who is considered to have a financial interest in an organization we are evaluating from that specific evaluation.
  4. We do not accept work that would result in a misrepresentation of our research or research results.


Our Publishing and Security Norms

In accordance with our mission, our goal at Apollo Research is to create research and tools that reduce or prevent the development of AI systems that could pose catastrophic risks to humanity. This entails being careful about what we share and with whom. It is plausible that our research may, at certain times, pose some risks if it becomes accessible to certain parties. We therefore aim to mitigate these risks through various norms, security processes, and publishing principles, some of which we describe below.

  1. We follow the principle of least privilege, i.e., we aim to keep sensitive information to as few people as is practical.
  2. We maintain comprehensive technical security controls, including, but not limited to, strong authentication with role-based access control, endpoint protection, physical security, email protection, and logging.
  3. We carefully consider the plausible benefits and risks before running any experiment and only conduct a given experiment if the benefits clearly outweigh the harms.
  4. We do not and never will actively train AI models for catastrophically dangerous behavior.
  5. We engage in differential publishing, i.e., we disseminate information with varying degrees of openness, depending on the project’s security needs and our assessment of the benefits of sharing the work versus the risks. By default, every new project is assigned the highest confidentiality level, and the burden of proof lies with lowering that level. Similarly, we redact sensitive information from our publications.

Our information security strategy is aligned with ISO/IEC 27001, SOC 2, and applicable privacy and data protection regulations. We are progressing toward formal certification in ISO/IEC 27001/42001 and SOC 2 Type II. Our previous security policy blog post is accessible here.

If you are a third party working with us and have IP or sensitive information concerns, please contact us for details of our security programme.

Our Science Communication Norms

We recognize that the effective and responsible communication of science is paramount to building and maintaining public and institutional trust. Given the complex and high-stakes nature of AI safety research, it is essential that our work is disseminated in a clear, measured, and rigorous manner to all relevant stakeholders. We strive to communicate our research and its findings truthfully and accurately to government, industry, and the broader public, ensuring all proposals are justifiable and grounded in strong scientific evidence. 

We implement the following norms to ensure rigor and care in our communication:

  1. We commit to communicating our research and its findings truthfully, directly, and accurately in all engagements with government, industry, and the public.
  2. We prioritize scientific rigor and focus on presenting justifiable, grounded policy proposals derived from our work.
  3. We foster a culture of collaboration and peer review, actively inviting diverse insights to build collective expertise and avoid blind spots.
  4. We engage closely with relevant governmental bodies and agencies and support efforts informed by the state of the art, its potential, and its limitations.
  5. We commit ourselves to taking actions that, all else being equal, would prevent the erosion of the field, for example, by preventing a ‘race to the bottom’ in evaluation quality.