The Safety Systems team ensures that our most capable models can be deployed safely in the real world to benefit society. The team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Trustworthy AI team works on action-relevant and decision-relevant research to ensure we shape A(G)I with societal impacts in mind. This includes work on full-stack policy problems, such as building methods for incorporating public input into model values and understanding the impacts of anthropomorphizing AI. We aim to translate nebulous policy problems into technically tractable, measurable ones, and we use this work to inform and build interventions that increase societal readiness for increasingly intelligent systems. Our team also works on external assurances for AI, with the aim of increasing independent checks and forming additional layers of validation.
We are looking to hire exceptional research scientists and engineers who can bring the rigor needed to increase societal readiness for AGI. Specifically, we are looking for people who will help us translate nebulous policy problems into technically tractable, measurable ones.
This role is based in our San Francisco HQ. We offer relocation assistance to new employees.