Software Engineer, Model Behavior

All levels • $255,000 – $405,000 per annum
Software Development & Engineering

Job Description

The Model Behavior team at OpenAI shapes how models interact with people, aiming to create intuitive and magical user experiences. This role involves designing and building systems to understand user engagement, identify model shortcomings, and develop fundamental evaluations for model behavior. The engineer will focus on robust evaluations, developing tooling and dashboards, building interfaces for human raters, capturing online signals, and rapid prototyping with research and product teams. The position calls for thriving in ambiguous environments and caring deeply about measurement quality and user experience; the work directly impacts hundreds of millions of users globally.
Perks:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures
  • Paid sick or safe time
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend
  • Daily meals in our offices, and meal delivery credits
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends


About the Team

The Model Behavior team shapes how our models interact with people. We view the model as the product itself, aiming to create intuitive experiences that exceed user expectations and feel like magic.

The team partners closely with research and product teams across the company to improve the real-world usefulness of our models at scale. Our work directly impacts hundreds of millions of users globally and contributes to OpenAI's mission of broadly distributing safe AI.

About the Role

We are looking for an engineer with experience in evaluation systems, observability, and data pipelines. You will design and build systems to (1) understand how users engage with our models, (2) identify where our models fall short, and (3) define fundamental, launch-blocking evals for model behavior. This includes:

  • Standing up robust evaluations (automated evals, human evals, product metrics, query sets).
  • Developing tooling, dashboards, and visualizations to measure and track model behavior improvements.
  • Building interfaces and pipelines for human raters, autoraters, and hybrid workflows.
  • Capturing online signals (A/B tests, usage telemetry) and reconciling them with offline metrics.
  • Prototyping quickly and iterating with research, product, and safety partners under tight timelines.

This role spans evaluation design, data pipelines, and cross-functional collaboration. You should thrive in ambiguous, scrappy environments and care deeply about measurement quality and user experience.

This role is based in San Francisco, CA. We use a hybrid model (3 days in office/week) and offer relocation support.

In this role, you will:

  • Build evaluation systems to measure core dimensions at scale and identify new areas for improvement.
  • Design pipelines for collecting and validating high-quality human data.
  • Build a robust data flywheel to quickly launch evals on user signals.
  • Develop robust evaluations to define and track improvements in model behavior.
  • Rapidly prototype and develop tooling, dashboards, and visualizations for researchers and applied teams.
  • Develop and integrate autorater models into the eval loop.
  • Prototype dashboards and interfaces that surface eval results to researchers and applied teams to support launch decisions.
  • Debug contradictions between offline and online metrics, and drive experiments to resolve them.
  • Collaborate across research, safety, infrastructure, and product teams to deliver solutions that improve model efficiency and user experience.
  • Own and support experiments that validate hypotheses around model behavior.

You might thrive in this role if you:

  • Have built evaluations for capability and model improvements
  • Have experience building and maintaining observability tooling
  • Enjoy owning 0→1 user-facing products or tools, ideally in a startup or fast-moving environment
  • Ship quickly under competing priorities and tight deadlines
  • Understand how evaluations work and are curious about model training and iteration
  • Care about product polish and usability
  • Are a team player who collaborates effectively across teams and takes on a variety of tasks to move work forward

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
