This role will support the Fleet Infrastructure team. The fleet team focuses on running the world’s largest, most reliable, and most frictionless GPU fleet to support general-purpose model training and deployment. Work on this team includes:
- Maximizing the share of GPUs doing useful work by building user-friendly scheduling and quota systems
- Running a reliable, low-maintenance platform by building push-button automation for Kubernetes cluster provisioning and upgrades
- Supporting research workflows with service frameworks and deployment systems
- Ensuring fast model startup times through high-performance snapshot delivery, from blob storage down to hardware caching
- Much more!
About the Role
As an engineer within Fleet Infrastructure, you will design, write, deploy, and operate infrastructure systems for model deployment and training on one of the world’s largest GPU fleets. The scale is immense, the timelines are tight, and the organization is moving fast; this is an opportunity to shape a critical system in support of OpenAI's mission to advance AI capabilities responsibly.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
- Design, implement, and operate components of our compute fleet, including job scheduling, cluster management, snapshot delivery, and CI/CD systems
- Interface with researchers and product teams to understand workload requirements
- Collaborate with hardware, infrastructure, and business teams to provide a high-utilization, high-reliability service
You might thrive in this role if you:
- Have experience with hyperscale compute systems
- Possess strong programming skills
- Have experience working in public clouds (especially Azure)
- Have experience working with Kubernetes
- Have an execution-focused mentality paired with rigorous attention to user requirements
- Have an understanding of AI/ML workloads (a bonus)