Principal Engineer, AI Inference Reliability

Research Development

Job Description

The Principal Engineer, AI Inference Reliability will act as a hands-on Reliability Tech Lead, owning the mission to make Cerebras Inference the most reliable AI service globally. This role involves defining SLOs, designing and implementing reliability mechanisms for fault detection, graceful degradation, failover, throttling, and recovery across multi-region deployments and wafer-scale systems. The individual will lead incident management, architect for reliability and observability, develop reliability tooling, and collaborate with various engineering teams to embed reliability into every layer of the inference service. They will also monitor metrics and mentor engineers on best practices for large-scale, high-reliability distributed systems.


Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields.

In late 2024, we launched Cerebras Inference, the fastest Generative AI inference service in the world, over 10 times faster than GPU-based hyperscale cloud inference. Since launch, we’ve scaled to meet the surging demand from AI labs, enterprises, and a thriving developer community.

In October 2025, we announced our series G funding, raising $1.1 billion USD to accelerate the expansion of our products and services to meet global AI demand.

About the team

The Cerebras Inference team’s mission is to deliver the world’s most performant, secure, and reliable enterprise-grade AI service. We build and operate large-scale distributed systems that power AI inference at unprecedented speed and efficiency. Join us to help scale inference and accelerate AI.

About the role

We’re looking for a hands-on Reliability Tech Lead (IC) to own the mission of making Cerebras Inference the most reliable AI service in the world. You will drive reliability strategy and execution across our inference stack, from client SDKs and public-cloud multi-region deployments to wafer-scale systems in specialized data centers.

In this role, you will define SLOs and incident-response frameworks, design and implement reliability mechanisms at scale, and partner with hundreds of engineers to ensure our service meets world-class reliability standards.

If you are passionate about building and operating massive-scale, low-latency, high-reliability distributed systems, we want to hear from you.

Responsibilities:

  • Define and drive reliability strategy: establish SLOs and ensure alignment across engineering.
  • Design and implement reliability mechanisms: build and evolve systems for fault detection, graceful degradation, failover, throttling, and recovery across multiple regions and data centers.
  • Lead large-scale incident management: own postmortems, root-cause analysis, and prevention loops for reliability-related incidents.
  • Architect for reliability and observability: influence system design for redundancy, durability, and debuggability.
  • Develop reliability tooling: create internal tools and frameworks for chaos testing, load simulation, and distributed fault injection.
  • Collaborate broadly: work across software, infrastructure, and hardware teams to ensure reliability is embedded into every layer of our inference service.
  • Monitor and communicate reliability metrics: build dashboards and alerts that measure service health and provide actionable insights.
  • Mentor and influence: guide engineers and set best practices for designing, testing, and operating reliable large-scale systems.

Skills & Qualifications:

  • Bachelor's or master's degree in computer science or a related field.
  • 7+ years of experience in backend, infrastructure, or reliability engineering for large-scale distributed systems.
  • Strong programming skills in at least one popular backend programming language such as Python, C++, Go, or Rust.
  • Deep, hard-earned experience with reliability principles: SLO/SLI/SLA design, incident response, and postmortem culture.
  • Excellent communication and cross-functional leadership skills.
  • Bonus: prior experience building large-scale AI infrastructure systems.

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

1. Build a breakthrough AI platform beyond the constraints of the GPU.

2. Publish and open-source cutting-edge AI research.

3. Work on one of the fastest AI supercomputers in the world.

4. Enjoy job stability with startup vitality.

5. Enjoy our simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and become part of the forefront of groundbreaking advancements in AI!

* * *

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.

