ML Model Serving Engineer

6 months ago • All levels • $175,000 PA - $280,000 PA
Research Development

Job Description

Sesame is building a future where computers are lifelike, with a focus on voice companions. The ML Model Serving Engineer will enhance their serving layer, which handles LLM, speech, and vision models. This role involves partnering with ML infrastructure and training teams to create a fast, cost-effective, accurate, and reliable serving layer for a new consumer product. Responsibilities include modifying LLM serving frameworks like vLLM and SGLang, experimenting with new compilers for various hardware, optimizing models for faster inference without sacrificing quality, and reducing initialization times. The ideal candidate will be an expert in PyTorch, model optimization for serving, and systems programming.
Good To Have:
  • Familiarity with high-performance LLM serving (vLLM, SGLang)
  • Experience with public cloud platforms (GCP, AWS, Azure)
  • Experience scaling inference workloads with Kubernetes, Ray
  • Track record of leading complex multi-month projects
  • Eagerness to learn new things and work in multiple roles
Must Have:
  • Expert in PyTorch or a similar differentiable array computing framework
  • Expert in optimizing ML models for high throughput, low latency serving
  • Significant systems programming experience (e.g., vLLM internals)
  • Significant performance engineering experience (e.g., bottleneck analysis)
  • Up-to-date on model serving optimization techniques
Perks:
  • 401k matching
  • 100% employer-paid health, vision, and dental benefits
  • Unlimited PTO and sick time
  • Flexible spending account matching


About Sesame

Sesame believes in a future where computers are lifelike - with the ability to see, hear, and collaborate with us in ways that feel natural and human. With this vision, we're designing a new kind of computer, focused on making voice companions part of our daily lives. Our team brings together founders from Oculus and Ubiquity6, alongside proven leaders from Meta, Google, and Apple, with deep expertise spanning hardware and software. Join us in shaping a future where computers truly come alive.

Responsibilities:

  • Turbocharge our serving layer, consisting of a variety of LLM, speech, and vision models.

  • Partner with ML infrastructure and training engineers to build a fast, cost-effective, accurate, and reliable serving layer to power a new consumer product category.

  • Modify and extend LLM serving frameworks like vLLM and SGLang to take advantage of the latest techniques in high-performance model serving.

  • Experiment with new compilers to support running models on a variety of hardware compute platforms.

  • Work with the training team to identify opportunities to produce faster models without sacrificing quality.

  • Use techniques like in-flight batching, caching, and custom kernels to speed up inference.

  • Find ways to reduce model initialization times without sacrificing quality.

Required Qualifications:

  • Expert in a differentiable array computing framework, preferably PyTorch.

  • Expert in optimizing machine learning models for serving reliably at high throughput, with low latency.

  • Significant systems programming experience, e.g., working on high-performance server systems; you'd be just as comfortable with the internals of vLLM as with a complex PyTorch codebase.

  • Significant performance engineering experience, e.g., bottleneck analysis in high-scale server systems or profiling low-level systems code.

  • Always up to date on the latest techniques for model serving optimization.

Preferred Qualifications:

  • Familiarity with high-performance LLM serving, e.g., experience with vLLM and SGLang deployment and internals.

  • Experience with a public cloud platform such as GCP, AWS, or Azure.

  • Experience deploying and scaling inference workloads in the cloud using Kubernetes, Ray, etc.

  • You like to ship and have a track record of leading complex multi-month projects without assistance.

  • You’re excited to learn new things and work in a multitude of roles.

Sesame is committed to a workplace where everyone feels valued, respected, and empowered. We welcome all qualified applicants, embracing diversity in race, gender, identity, orientation, ability, and more. We provide reasonable accommodations for applicants with disabilities—contact careers@sesame.com for assistance.

Full-time Employee Benefits: 

  • 401k matching

  • 100% employer-paid health, vision, and dental benefits 

  • Unlimited PTO and sick time 

  • Flexible spending account matching (medical FSA) 

Benefits do not apply to contingent/contract workers
