Research Scientist / Engineer – Training Infrastructure

Luma

Job Description

About the Role

The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what's possible in AI model development.

Responsibilities

  • Design, implement, and optimize efficient distributed training systems for models trained across thousands of GPUs
  • Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel; see the sketch after this list)
  • Build monitoring, visualization, and debugging tools for large-scale training runs
  • Optimize training stability, convergence, and resource utilization across massive clusters
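
For illustration only: the parallelization techniques named above are most commonly reached through PyTorch's distributed APIs. The sketch below shows a minimal FSDP wrap under an assumed torchrun launch; the toy model, sizes, and training loop are hypothetical and do not describe Luma's actual stack.

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def main():
        # Assumes a launch such as `torchrun --nproc_per_node=8 train.py`,
        # which sets RANK, LOCAL_RANK, and WORLD_SIZE for every process.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

        # Hypothetical toy model; a real run would wrap a large multimodal network.
        model = nn.Sequential(
            nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)
        ).cuda()

        # FSDP shards parameters, gradients, and optimizer state across ranks,
        # gathering full parameters only around each unit's forward/backward.
        model = FSDP(model)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):
            batch = torch.randn(8, 4096, device="cuda")
            loss = model(batch).pow(2).mean()  # dummy objective for the sketch
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()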

Experience

  • Extensive experience with distributed PyTorch training and parallelism strategies for foundation models
  • Deep understanding of GPU clusters, networking, and storage systems
  • Familiarity with communication libraries (NCCL, MPI) and distributed system optimization (see the sketch after this list)
  • (Preferred) Strong Linux systems administration and scripting capabilities
  • (Preferred) Experience managing training runs across >100 GPUs
  • (Preferred) Experience with containerization, orchestration, and cloud infrastructure
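
For context on the communication layer referenced above, NCCL is typically exercised through torch.distributed collectives. The sketch below is a minimal all-reduce sanity check under an assumed torchrun launch; the tensor size and script name are hypothetical, not part of the posting.

    import os
    import torch
    import torch.distributed as dist

    def main():
        # Assumes a launch such as `torchrun --nproc_per_node=8 allreduce_check.py`.
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

        rank = dist.get_rank()
        world_size = dist.get_world_size()

        # Each rank contributes a 256 MiB tensor filled with its own rank id;
        # after all_reduce(SUM) every rank holds the same summed values.
        t = torch.full((64 * 1024 * 1024,), float(rank), device="cuda")
        dist.all_reduce(t, op=dist.ReduceOp.SUM)
        torch.cuda.synchronize()

        expected = float(sum(range(world_size)))
        assert torch.allclose(t, torch.full_like(t, expected))
        if rank == 0:
            print(f"NCCL all_reduce verified across {world_size} ranks")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()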
