Research Engineer / Scientist, Interpretability

2+ years experience • $310,000 – $460,000 per annum
Research Development

Job Description

The Interpretability team at OpenAI studies internal representations of deep learning models to understand model behavior and engineer more understandable representations. This role involves developing and publishing research in mechanistic interpretability, engineering infrastructure for studying model internals at scale, and collaborating across teams. The goal is to ensure the safety of powerful AI systems and make a significant impact on building and deploying safe AGI.
Perks:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends


About the Team

The Interpretability team studies internal representations of deep learning models. We are interested in using representations to understand model behavior, and in engineering models to have more understandable representations. We are particularly interested in applying our understanding to ensure the safety of powerful AI systems. Our working style is collaborative and curiosity-driven.

About the Role

OpenAI is seeking a researcher passionate about understanding deep networks, with a strong background in engineering, quantitative reasoning, and the research process. You will develop and carry out a research plan in mechanistic interpretability, in close collaboration with a highly motivated team. You will play a critical role in helping ensure future models remain safe even as they grow in capability. This will make a significant impact on our goal of building and deploying safe AGI.

In this role, you will:

  • Develop and publish research on techniques for understanding representations of deep networks.
  • Engineer infrastructure for studying model internals at scale.
  • Collaborate across teams to work on projects that the team is uniquely suited to pursue.
  • Guide research directions toward demonstrable usefulness and/or long-term scalability.

You might thrive in this role if you:

  • Are excited about OpenAI’s mission of ensuring AGI benefits all of humanity, and are aligned with OpenAI’s charter.
  • Show enthusiasm for long-term AI safety, and have thought deeply about technical paths to safe AGI.
  • Bring experience in the field of AI safety, mechanistic interpretability, or spiritually related disciplines.
  • Hold a Ph.D. or have research experience in computer science, machine learning, or a related field.
  • Thrive in environments involving large-scale AI systems, and are excited to make use of unique resources in this area.
  • Possess 2+ years of research engineering experience and proficiency in Python or similar languages.
  • Are deeply curious.

About

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Compensation Range: $310K - $460K
