Senior Researcher — Safety Systems, Misalignment Research

7 Hours ago • 4 Years + • $380,000 PA - $460,000 PA
Research Development

Job Description

Safety Systems ensures safe AGI deployment. This role is for a Senior Researcher in the Misalignment Research team, focusing on identifying, quantifying, and understanding future AGI misalignment risks. You will design and execute cutting-edge attacks, build adversarial evaluations, and advance understanding of safety measure failures to influence OpenAI’s product launches and long-term safety roadmap.
Good To Have:
  • Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related discipline.
Must Have:
  • Design and implement worst-case demonstrations for AGI alignment risks.
  • Develop adversarial and system-level evaluations.
  • Create automated tools and infrastructure for red-teaming and stress testing.
  • Conduct research on failure modes of alignment techniques and propose improvements.
  • Publish influential internal or external papers that shift safety strategy or industry practice.
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
  • Mentor engineers and researchers, fostering a culture of rigorous, impact-oriented safety work.
  • 4+ years of experience in AI red-teaming, security research, adversarial ML, or related safety fields.
  • Strong research track record demonstrating creativity in uncovering and exploiting system weaknesses.
  • Fluent in modern ML / AI techniques and comfortable hacking on large-scale codebases and evaluation infrastructure.
  • Communicate clearly with both technical and non-technical audiences, translating complex findings into actionable recommendations.
  • Enjoy collaboration and can drive cross-functional projects that span research, engineering, and policy.
Perks:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents)
  • Paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge
  • Paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends

Add these skills to join the top 1% applicants for this job

game-texts
asana
machine-learning

About the Team

Safety Systems sits at the forefront of OpenAI’s mission to build and deploy safe AGI, ensuring our most capable models can be released responsibly and for the benefit of society. Within Safety Systems, we are building a misalignment research team to focus on the most pressing problems for the future of AGI. Our mandate is to identify, quantify, and understand future AGI misalignment risks far in advance of when they can pose harm.

The work of this research taskforce spans four pillars:

1. Worst‑Case Demonstrations – Craft compelling, reality‑anchored demos that reveal how AI systems can go wrong. We focus especially on high importance cases where misaligned AGI could pursue goals at odds with human well being.

2. Adversarial & Frontier Safety Evaluations – Transform those demos into rigorous, repeatable evaluations that measure dangerous capabilities and residual risks. Topics of interest include deceptive behavior, scheming, reward hacking, deception in reasoning, and power-seeking, along with other related areas.

3. System‑Level Stress Testing – Build automated infrastructure to probe entire product stacks, assessing end‑to‑end robustness under extreme conditions. We treat misalignment as an evolving adversary, escalating tests until we find breaking points even as systems continue to improve.

4. Alignment Stress‑Testing Research – Investigate why mitigations break, publishing insights that shape strategy and next‑generation safeguards. We collaborate with other labs when useful and actively share misalignment findings to accelerate collective progress.

About the Role

We are seeking a Senior Researcher who is passionate about red‑teaming and AI safety. In this role you will design and execute cutting‑edge attacks, build adversarial evaluations, and advance our understanding of how safety measures can fail—and how to fix them. Your insights will directly influence OpenAI’s product launches and long-term safety roadmap.

In this role, you will

  • Design and implement worst‑case demonstrations that make AGI alignment risks concrete for stakeholders, focused on high stakes use cases described above.
  • Develop adversarial and system‑level evaluations grounded in those demonstrations, driving adoption across OpenAI.
  • Create automated tools and infrastructure to scale automated red‑teaming and stress testing.
  • Conduct research on failure modes of alignment techniques and propose improvements.
  • Publish influential internal or external papers that shift safety strategy or industry practice. We aim to concretely reduce existential AI risk.
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
  • Mentor engineers and researchers, fostering a culture of rigorous, impact‑oriented safety work.

You might thrive in this role if you

  • Already are thinking about these problems night and day, and share our mission to build safe, universally beneficial AGI and align with the OpenAI Charter.
  • Have 4+ years of experience in AI red‑teaming, security research, adversarial ML, or related safety fields.
  • Possess a strong research track record—publications, open‑source projects, or high‑impact internal work—demonstrating creativity in uncovering and exploiting system weaknesses.
  • Are fluent in modern ML / AI techniques and comfortable hacking on large‑scale codebases and evaluation infrastructure.
  • Communicate clearly with both technical and non‑technical audiences, translating complex findings into actionable recommendations.
  • Enjoy collaboration and can drive cross‑functional projects that span research, engineering, and policy.
  • Hold a Ph.D., master’s degree, or equivalent experience in computer science, machine learning, security, or a related discipline (nice to have but not required).

What we Offer

  • A chance to shape safety practices at the frontier of AGI. Your work will directly lower the changes of catastrophic misalignment.
  • Access to cutting‑edge models, tooling, and compute resources.
  • A highly collaborative, mission‑driven environment with world‑class colleagues.
  • Competitive compensation, equity, and benefits.

If you’re excited to push AI systems to—and beyond—their limits so we can deploy them safely, we’d love to hear from you! Join us in the taking on the most important challenge for the world today.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement

.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form

. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link

.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Set alerts for more jobs like Senior Researcher — Safety Systems, Misalignment Research
Set alerts for new jobs by OpenAI
Set alerts for new Research Development jobs in United States
Set alerts for new jobs in United States
Set alerts for Research Development (Remote) jobs

Contact Us
hello@outscal.com
Made in INDIA 💛💙