Research Scientist / Engineer – Training Infrastructure

All levels • DevOps • $220,000 - $300,000 per annum

Job Summary

Luma AI is seeking a Research Scientist / Engineer for its Training Infrastructure team. The role centers on building and maintaining the distributed systems that train large-scale multimodal models across thousands of GPUs. The successful candidate will design, implement, and optimize efficient distributed training systems, research advanced parallelization techniques, and develop tools for monitoring and debugging, with the goal of reliable, efficient, and scalable infrastructure for cutting-edge AI model development.
Perks:
  • Competitive equity packages in the form of stock options
  • Comprehensive benefits plan

Job Details

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models, designed from the ground up to scale, on thousands of GPUs.

About the Role

The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what's possible in AI model development.

Responsibilities

  • Design, implement, and optimize efficient distributed training systems that span thousands of GPUs
  • Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel)
  • Build monitoring, visualization, and debugging tools for large-scale training runs
  • Optimize training stability, convergence, and resource utilization across massive clusters
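
As a rough illustration of the parallelization work the responsibilities above describe, the arithmetic behind FSDP-style parameter sharding can be sketched in a few lines of plain Python: pad the flattened parameter count up to a multiple of the world size, then give each rank an equal contiguous shard. The helper names `shard_sizes` and `shard_for_rank` are hypothetical; real FSDP shards padded flat tensors and reassembles them with collective communication, not Python lists.

```python
# Hypothetical sketch of FSDP-style sharding arithmetic: pad a flat parameter
# buffer to a multiple of world_size, then assign each rank one equal,
# contiguous shard. Illustrative only, not PyTorch's FSDP implementation.

def shard_sizes(numel: int, world_size: int) -> list[int]:
    """Per-rank shard sizes after padding numel up to a multiple of world_size."""
    padded = -(-numel // world_size) * world_size  # ceiling to a multiple
    return [padded // world_size] * world_size

def shard_for_rank(flat_params: list[float], rank: int, world_size: int) -> list[float]:
    """Contiguous shard owned by `rank`, zero-padded at the tail if needed."""
    size = shard_sizes(len(flat_params), world_size)[rank]
    padded = flat_params + [0.0] * (size * world_size - len(flat_params))
    return padded[rank * size : (rank + 1) * size]

if __name__ == "__main__":
    params = [float(i) for i in range(10)]      # 10 parameters across 4 ranks
    for r in range(4):
        print(r, shard_for_rank(params, r, 4))  # ranks 0-2 hold real values;
                                                # rank 3 holds [9.0, 0.0, 0.0]
```

The padding step is why shard sizes stay uniform: every rank does the same amount of work in the all-gather and reduce-scatter collectives, at the cost of a few wasted elements at the tail.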

Experience

  • Extensive experience with distributed PyTorch training and parallelism strategies in foundation model training
  • Deep understanding of GPU clusters, networking, and storage systems
  • Familiarity with communication libraries (NCCL, MPI) and distributed system optimization
  • (Preferred) Strong Linux systems administration and scripting capabilities
  • (Preferred) Experience managing training runs across >100 GPUs
  • (Preferred) Experience with containerization, orchestration, and cloud infrastructure
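
Since the role leans on communication libraries such as NCCL, a toy single-process simulation of the ring all-reduce — the bandwidth-optimal algorithm NCCL uses for large reductions — may help illustrate the core idea: a reduce-scatter phase followed by an all-gather phase, each of N-1 steps, leaves every rank with the elementwise sum while each rank only ever exchanges data with its ring neighbor. The function and indexing below are an illustrative sketch, not NCCL's API.

```python
# Single-process simulation of ring all-reduce over n "ranks". Each rank's
# vector is split into n chunks; after a reduce-scatter phase and an
# all-gather phase (n-1 steps each), every rank holds the elementwise sum.

def ring_allreduce(buffers: list[list[float]]) -> list[list[float]]:
    n = len(buffers)
    size = len(buffers[0])
    assert size % n == 0, "vector length must be a multiple of the rank count"
    c = size // n
    data = [list(b) for b in buffers]  # copy: don't mutate caller's buffers

    def chunk(rank: int, i: int) -> list[float]:
        return data[rank][i * c:(i + 1) * c]

    # Reduce-scatter: after n-1 steps, rank r owns the full sum of chunk (r+1)%n.
    for step in range(n - 1):
        sends = [(r, (r - step) % n, chunk(r, (r - step) % n)) for r in range(n)]
        for src, i, vals in sends:          # snapshot first: sends are "simultaneous"
            dst = (src + 1) % n
            for k in range(c):
                data[dst][i * c + k] += vals[k]

    # All-gather: circulate each completed chunk once around the ring.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, chunk(r, (r + 1 - step) % n)) for r in range(n)]
        for src, i, vals in sends:
            dst = (src + 1) % n
            data[dst][i * c:(i + 1) * c] = vals

    return data

if __name__ == "__main__":
    print(ring_allreduce([[1.0, 2.0], [3.0, 4.0]]))  # both ranks end with [4.0, 6.0]
```

Because each rank sends only 2(N-1)/N of its data in total, the per-rank communication volume is nearly independent of the ring size, which is what makes this pattern scale to large clusters.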


About The Company

Palo Alto, California, United States (Hybrid)
