Research Scientist / Engineer – Performance Optimization

Luma

Job Summary

The Performance Optimization team at Luma is dedicated to maximizing the efficiency and performance of our AI models. This group works closely with research and engineering teams to ensure cutting-edge multimodal models are trained efficiently and deployed at scale while maintaining the highest quality standards.

Must Have

  • Profile and optimize GPU/CPU/Accelerator code for maximum utilization and minimal latency.
  • Write high-performance PyTorch, Triton, and CUDA code, including custom PyTorch operations.
  • Develop fused kernels and leverage tensor cores and modern hardware features.
  • Optimize model architectures for distributed multi-node production deployment.
  • Build performance monitoring and analysis tools and automation.
  • Research and implement cutting-edge optimization techniques for transformer models.
  • Expert-level proficiency in Triton/CUDA programming and GPU optimization.
  • Strong PyTorch skills and experience with kernel development.
  • Proficiency with profiling tools like NVIDIA Nsight and torch profiler.
  • Deep understanding of transformer architectures and attention mechanisms.

Good to Have

  • Experience with compilers/exporters such as torch.compile, TensorRT, ONNX, XLA.
  • Experience optimizing inference workloads for latency and throughput.
  • Experience with Triton compiler and kernel fusion techniques.
  • Knowledge of warp-level intrinsics and advanced CUDA optimization.

Job Description

About the Role

The Performance Optimization team at Luma is dedicated to maximizing the efficiency and performance of our AI models. Working closely with both research and engineering teams, this group ensures that our cutting-edge multimodal models can be trained efficiently and deployed at scale while maintaining the highest quality standards.

Responsibilities

  • Profile and optimize GPU/CPU/Accelerator code for maximum utilization and minimal latency
  • Write high-performance PyTorch, Triton, and CUDA code, deferring to custom PyTorch operations when necessary
  • Develop fused kernels and leverage tensor cores and other modern hardware features to achieve optimal utilization across hardware platforms
  • Optimize model architectures and implementations for distributed multi-node production deployment
  • Build performance monitoring and analysis tools and automation
  • Research and implement cutting-edge optimization techniques for transformer models

Experience

  • Expert-level proficiency in Triton/CUDA programming and GPU optimization
  • Strong PyTorch skills
  • Experience with PyTorch kernel development and custom operations
  • Proficiency with profiling tools (NVIDIA Nsight, torch profiler, custom tooling)
  • Deep understanding of transformer architectures and attention mechanisms
  • (Preferred) Experience with compilers/exporters such as torch.compile, TensorRT, ONNX, XLA
  • (Preferred) Experience optimizing inference workloads for latency and throughput
  • (Preferred) Experience with Triton compiler and kernel fusion techniques
  • (Preferred) Knowledge of warp-level intrinsics and advanced CUDA optimization

Compensation

The base pay range for this role is $187,500 – $395,000 per year.
