The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.
- Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
- Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
- 401(k) retirement plan with employer match
- Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
- Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
- 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
- Mental health and wellness support
- Employer-paid basic life and disability coverage
- Annual learning and development stipend to fuel your professional growth
- Daily meals in our offices, and meal delivery credits as eligible
- Relocation support for eligible employees
- Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.
More details about our benefits are available to candidates during the hiring process.
This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.
About the Team
The Scaling team designs, builds, and operates critical infrastructure that enables research at OpenAI.
Our mission is simple: accelerate the progress of research towards AGI. We do this by building core systems that researchers rely on - ranging from low-level infrastructure components to research-facing custom applications. These systems must scale with the increasing complexity and size of our workloads, while remaining reliable and easy to use.
About the Role
As we grow, we’re looking for a pragmatic and versatile software engineer who thrives in fast-moving environments and enjoys building systems that empower others.
This is a generalist software engineering role with an emphasis on distributed systems, data processing infrastructure, and operational excellence. You’ll develop and operate foundational backend services that power key OpenAI research workflows - both by creating new infrastructure and by building on existing systems. Use cases span observability, analytics, performance engineering, and other domains, all with the goal of solving meaningful, high-impact problems for research.
This role is based in San Francisco, CA, or open to remote work within the US. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
- Design, build, and operate scalable backend systems that support various ML research workflows, including observability and analytics.
- Develop reliable infrastructure that supports both streaming and batch data processing at scale.
- Create internal-facing tools and applications as needed.
- Debug and improve performance of services running on Kubernetes, including operational tooling and observability.
- Collaborate with engineers and researchers to deliver reliable systems that meet real-world needs in production.
- Help improve system reliability by participating in the on-call rotation and responding to critical incidents.
You might thrive in this role if you have:
- Strong proficiency in Python and/or Rust and backend software development, ideally in large codebases.
- Experience with distributed systems and scalable data processing infrastructure, including technologies like Kafka, Spark, Trino/Presto, and Iceberg.
- Hands-on experience operating services in Kubernetes, with familiarity in tools like Terraform and Helm.
- Comfort working across the stack - from low-level infrastructure components to application logic - and making trade-offs to move quickly.
- A focus on building systems that are both technically sound and easy for others to use.
- Curiosity and adaptability in fast-changing environments, especially in high-growth orgs.