Senior Distributed Systems Engineer

Posted 1 hour ago • 3+ years experience • Research & Development

About the job

Summary

This Senior Distributed Systems Engineer role involves collaborating with researchers to build and optimize platforms for training next-generation foundation models on massive GPU clusters. Key responsibilities include scaling training systems across thousands of GPUs, profiling and improving training-code performance, distributing workloads efficiently, designing fault-tolerant solutions for hardware failures, building diagnostic tools, optimizing inference workloads, writing high-performance CUDA, Triton, and PyTorch code, and collaborating with researchers on system design. The ideal candidate has extensive experience in ML pipelines, distributed systems, or high-performance computing; proficiency in Python and PyTorch; and expertise in CUDA/Triton programming and optimization techniques. Experience with generative models and prototype development is a plus.
Must have:
  • 3+ years experience in ML pipelines, distributed systems, or HPC
  • Experience training large models using Python and PyTorch
  • Expertise in optimizing and deploying inference workloads
  • Understanding of distributed systems and frameworks (DDP, FSDP, tensor parallelism)
  • High-performance parallel C++ and custom PyTorch kernels
  • CUDA and Triton optimization techniques
Good to have:
  • Experience with generative models (Transformers, Diffusion Models, GANs)
  • Prototype development (Gradio, Docker)
Perks:
  • Competitive equity packages (stock options)
  • Comprehensive benefits plan

We are seeking highly skilled engineers with expertise in machine learning, distributed systems, and high-performance computing to join our Research team. In this role, you will collaborate closely with researchers to build and optimize platforms that train next-generation foundation models on massive GPU clusters. Your work will play a critical role in advancing the efficiency and scalability of cutting-edge generative AI technologies.

Key Responsibilities

  • Scale and optimize systems for training large-scale models across multi-thousand GPU clusters.
  • Profile and enhance the performance of training codebases to achieve best-in-class hardware efficiency.
  • Develop systems to distribute workloads efficiently across massive GPU clusters.
  • Design and implement robust solutions to enable model training in the presence of hardware failures.
  • Build tools to diagnose issues, visualize processes, and evaluate datasets at scale.
  • Optimize and deploy inference workloads for throughput and latency across the entire stack, including data processing, model inference, and parallel processing.
  • Implement and improve high-performance CUDA, Triton, and PyTorch code to address efficiency bottlenecks in memory, speed, and utilization.
  • Collaborate with researchers to ensure systems are designed with optimal efficiency from the ground up.
  • Prototype cutting-edge applications using multimodal generative AI.
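The workload-distribution responsibility above can be illustrated with a minimal sketch of contiguous index sharding, the same idea PyTorch's DistributedSampler implements when splitting a dataset across ranks. This is a pure-Python illustration with hypothetical names, not the team's actual code:

```python
def shard(indices, rank, world_size):
    """Split a list of sample indices into near-equal contiguous
    shards, one per rank. The tail is padded by wrapping around to
    the front so every rank receives the same number of samples
    (DistributedSampler's default behavior)."""
    per_rank = -(-len(indices) // world_size)  # ceiling division
    pad = per_rank * world_size - len(indices)
    padded = indices + indices[:pad]
    start = rank * per_rank
    return padded[start:start + per_rank]

# Example: 10 samples across 4 ranks -> each rank sees 3 samples,
# with the last rank wrapping around to indices 0 and 1.
all_idx = list(range(10))
shards = [shard(all_idx, r, 4) for r in range(4)]
```

Equal shard sizes matter in practice: collective operations such as all-reduce assume every rank performs the same number of steps, so uneven shards would deadlock the cluster.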

Qualifications

  • Experience:
    • 3+ years of professional experience in ML pipelines, distributed systems, or high-performance computing.
    • Hands-on experience training large models using Python and PyTorch, with familiarity with the full pipeline: data processing, loading, training, and inference.
    • Proven expertise in optimizing and deploying inference workloads, with experience in profiling GPU/CPU code (e.g., NVIDIA Nsight).
    • Deep understanding of distributed systems and frameworks, such as DDP, FSDP, and tensor parallelism.
    • Strong experience writing high-performance parallel C++ and custom PyTorch kernels, with knowledge of CUDA and Triton optimization techniques.
    • Bonus: Experience with generative models (e.g., Transformers, Diffusion Models, GANs) and prototype development (e.g., Gradio, Docker).
  • Technical Skills:
    • Proficiency in Python, with significant experience using PyTorch.
    • Advanced skills in CUDA/Triton programming, including custom kernel development and tensor core optimization.
    • Strong generalist software engineering skills and familiarity with distributed and parallel computing systems.
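The tensor parallelism named in the qualifications above can be sketched conceptually: a column-parallel linear layer splits the weight matrix's output columns across devices, each device computes a partial output independently, and the results are concatenated. A pure-Python toy (plain lists standing in for device-local tensors; purely illustrative, not a real implementation):

```python
def matmul(a, b):
    """Naive matrix multiply: a is (m x k), b is (k x n)."""
    return [[sum(a[i][p] * b[p][j] for p in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def split_cols(w, parts):
    """Split weight matrix w column-wise into `parts` equal shards,
    one per 'device'."""
    n = len(w[0]) // parts
    return [[row[s * n:(s + 1) * n] for row in w] for s in range(parts)]

x = [[1, 2]]                      # one input row, 2 features
w = [[1, 2, 3, 4], [5, 6, 7, 8]]  # 2x4 weight matrix

full = matmul(x, w)               # single-device reference result
shards = split_cols(w, 2)         # "two devices", 2 columns each
partials = [matmul(x, s) for s in shards]
combined = [partials[0][0] + partials[1][0]]  # concat along columns
assert combined == full           # sharded result matches reference
```

The design choice this illustrates: column-parallel layers need only a concatenation (all-gather) after the matmul, whereas row-parallel layers produce partial sums that must be all-reduced, which is why frameworks alternate the two to minimize communication.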

Note: This position is not intended for recent graduates.

Compensation

The salary range for this role in California is $175,000–$250,000 per year. Actual compensation will depend on job-related knowledge, skills, experience, and candidate location. We also offer competitive equity packages in the form of stock options and a comprehensive benefits plan.

Palo Alto, California, United States

About The Company

An idea-to-video platform that brings your creativity to motion.

California, United States (On-Site)
