Research Engineer - Training Efficiency

6 months ago • All levels • $220,000 PA - $300,000 PA
Research Development

Job Description

Luma aims to create multimodal AI to enhance human imagination and capabilities, focusing on vision as the next frontier beyond language models. The company is developing and scaling multimodal foundation models for systems that can perceive, understand, display, explain, and interact with the world. This role involves working with a research team to build and train cutting-edge foundation models on thousands of GPUs, emphasizing efficient implementation for large-scale training and distributed systems. The engineer will identify and implement optimization techniques, remedy efficiency bottlenecks in PyTorch code, and collaborate to ensure system efficiency from conception to completion. The work also includes conducting research and experiments on large-scale generative AI models to improve training and inference latency and throughput.
Good To Have:
  • Experience with Triton/CUDA and custom PyTorch kernels.
  • Experience writing high-performance parallel C++.
  • Experience building inference/demo prototype code.
Must Have:
  • Experience training large models with Python & PyTorch.
  • Experience profiling GPU & CPU code in PyTorch.
  • Experience writing parallel & distributed PyTorch code.
  • Experience with transformer models and attention.


Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable and useful systems, the next step-function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA and distributed systems. You will work alongside the rest of the research team to build & train cutting-edge foundation models on thousands of GPUs, in systems designed to scale from the ground up.

Responsibilities

  • Ensure efficient implementation of models & systems with a focus on large-scale training.

  • Identify and implement optimization techniques for massively parallel and distributed systems, including the underlying communication layer.

  • Identify and remedy efficiency bottlenecks (memory, speed, utilization, communication) by profiling and implementing high-performance PyTorch code, deferring to Triton, CUDA, and lower levels as necessary.

  • Work closely with the rest of the research team to ensure systems are planned to be as efficient as possible from start to finish.

  • Conduct research & experiments on state-of-the-art large-scale generative AI models with the goal of improving latency & throughput for training and inference.

Must have experience

  • Experience training large models using Python & PyTorch, including practical experience working with the full development pipeline, from data processing, preparation & dataloading to training and inference.

  • Experience profiling GPU & CPU code in PyTorch for optimal device utilization (examples: torch profiler, NVIDIA Nsight Systems/Compute, memory profilers, trace viewers, custom tooling).

  • Experience writing & improving highly parallel & distributed PyTorch code for large generative models, with familiarity with FSDP, Tensor Parallel, Sequence/Context Parallel, Pipeline Parallel, etc.

  • Experience working with transformer models and attention implementations.
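As context for the profiling bullet above, here is a minimal sketch of per-op profiling with `torch.profiler`; the model and input shapes are illustrative assumptions, and a real large-scale workload would also record CUDA activity and export traces for a viewer:

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Illustrative model and input; real workloads would add ProfilerActivity.CUDA
# and export a trace (prof.export_chrome_trace) for a timeline viewer.
model = torch.nn.Sequential(torch.nn.Linear(512, 2048), torch.nn.GELU(),
                            torch.nn.Linear(2048, 512))
x = torch.randn(64, 512)

with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    for _ in range(5):
        model(x)

# Aggregate per-operator statistics, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

The table surfaces which `aten::` ops dominate runtime, which is typically the first step before reaching for Nsight or custom kernels.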
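The parallelism bullet above can be illustrated with the core math behind column-wise tensor parallelism. This is a single-device sketch with plain tensors: the split stands in for shard-local weights on each rank, and the concatenation stands in for the all-gather of shard outputs (no process groups are created):

```python
import torch

torch.manual_seed(0)
x = torch.randn(8, 64)    # activations (batch, hidden)
w = torch.randn(64, 128)  # full weight of a linear layer

# Column-parallel: each "rank" holds a slice of the output columns.
shards = torch.chunk(w, 2, dim=1)         # two shard-local weights, (64, 64) each
partials = [x @ s for s in shards]        # shard-local matmuls
y_parallel = torch.cat(partials, dim=-1)  # stands in for an all-gather

y_full = x @ w  # unsharded reference
assert torch.allclose(y_parallel, y_full, atol=1e-5)
```

In a real setup the shards live on different devices and the concatenation is a collective over NCCL, but the numerics are exactly this.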
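For the attention bullet, a reference scaled-dot-product attention in plain PyTorch, checked against the fused `torch.nn.functional.scaled_dot_product_attention` (PyTorch >= 2.0). This naive version materializes the full score matrix, which is precisely the memory hot path that fused and memory-efficient kernels avoid:

```python
import math
import torch
import torch.nn.functional as F

def attention(q, k, v, mask=None):
    # scores: (..., seq_q, seq_k); the naive form materializes this whole matrix.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(2, 4, 16, 8) for _ in range(3))  # (batch, heads, seq, dim)
out = attention(q, k, v)
ref = F.scaled_dot_product_attention(q, k, v)  # fused reference implementation
assert torch.allclose(out, ref, atol=1e-5)
```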

Good to have experience

  • Experience with high-performance Triton/CUDA and writing custom PyTorch kernels and ops. Top candidates will be able to write fused kernels for common hot paths, understand when to make use of lower-level features such as tensor cores or warp intrinsics, and will understand where these tools can be most impactful.

  • Experience writing high-performance parallel C++. Bonus if done within an ML context with PyTorch, e.g. for data loading, data processing, or inference code.

  • Experience building inference / demo prototype code (incl. Gradio, Docker etc.).

