Research Engineer / Scientist, Trustworthy AI

3+ Years experience • ~$380,000 per year
Research Development

Job Description

The Trustworthy AI team within Safety Systems at OpenAI is seeking Research Engineers/Scientists to advance AI safety and societal readiness for AGI. This role involves translating complex policy problems into technically tractable and measurable research, building methods for public input into model values, understanding anthropomorphism impacts, and developing interventions to de-risk model deployments. The goal is to ensure safe and beneficial AI for society.
Perks:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends


About the team

The Safety Systems team is responsible for the safety work that ensures our best models can be safely deployed to the real world to benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Trustworthy AI team works on action-relevant and decision-relevant research to ensure we shape A(G)I with societal impacts in mind. This includes work on full-stack policy problems, such as building methods for public input into model values and understanding the impacts of anthropomorphizing AI. We aim to translate nebulous policy problems into technically tractable and measurable research, and we use this work to inform and build interventions that increase societal readiness for increasingly intelligent systems. Our team also works on external assurances for AI, with the aim of increasing independent checks and forming additional layers of validation.

About the role

We are looking to hire exceptional research scientists/engineers who can push the rigor of work needed to increase societal readiness for AGI. Specifically, we are looking for people who will enable us to translate nebulous policy problems into technically tractable and measurable research.

This role is based in our San Francisco HQ. We offer relocation assistance to new employees.

In this role, you will:

  • Set research agendas and strategies to study the societal impacts of our models in an action-relevant manner, and tie findings back into model design
  • Build creative methods and run experiments that enable public input into model values
  • Increase the rigor of external assurances by turning external findings into robust evaluations
  • Facilitate and grow our ability to effectively de-risk flagship model deployments in a timely manner

You might thrive in this role if you:

  • Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter
  • Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use
  • Possess 3+ years of research experience (industry or similar academic experience) and proficiency in Python or similar languages
  • Thrive in environments involving large-scale AI systems and multimodal datasets
  • Enjoy working on large-scale, difficult, and nebulous problems in a well-resourced environment
  • Exhibit proficiency in AI safety, with a focus on topics such as RLHF, adversarial training, robustness, and LLM evaluations
  • Have past experience in interdisciplinary research
  • Show enthusiasm for socio-technical topics

