Senior Machine Learning Engineer - Hardware Abstractions & Performance Optimization

4 months ago • All levels • Research Development • $220,000 PA - $300,000 PA

Job Summary

Luma is building multimodal AI to expand human imagination and capabilities, focusing on vision to create more aware, capable, and useful systems. They are seeking engineers experienced in maintaining and designing highly efficient systems and code optimized for multiple hardware platforms. The role involves ensuring efficient implementation of models and systems with a focus on abstractions that scale beyond NVIDIA/CUDA hardware, identifying and remedying efficiency bottlenecks, and benchmarking products across various hardware and software to understand tradeoffs. The engineer will collaborate with partners and the research team on hardware integration and system efficiency.
Must have:
  • Experience optimizing PyTorch for memory, latency, and throughput.
  • Experience using torch.compile / torch.XLA.
  • Experience benchmarking and profiling GPU & CPU code in PyTorch for optimal device utilization.
  • Experience building tools & abstractions for optimal model performance on different hardware and software stacks.
  • Experience working with transformer models and attention implementations.
  • Experience with parallel inference, particularly tensor parallelism and pipeline parallelism.
Good to have:
  • Experience with high-performance Triton/CUDA and writing custom PyTorch kernels and ops.
  • Experience writing high-performance parallel C++.
  • Experience building inference/demo prototype code (incl. Gradio, Docker etc.).
Perks:
  • Offers Equity

Job Details

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

We are looking for engineers with significant experience maintaining and designing highly efficient systems and code that can be optimized to run on multiple hardware platforms, bringing our state-of-the-art models to as many people as possible at the best performance per dollar.

Responsibilities

  • Ensure efficient implementation of models & systems with a focus on designing, maintaining, and writing abstractions that scale beyond NVIDIA/CUDA hardware.

  • Identify and remedy efficiency bottlenecks (memory, speed, utilization, communication) by profiling and implementing high-performance PyTorch code, dropping down to Triton or similar kernel-level languages where necessary.

  • Benchmark our products across a variety of hardware & software to help the product team understand the optimal tradeoffs between latency, throughput, and cost at various degrees of parallelism.

  • Work with our partners to help them identify bottlenecks and push forward new iterations of hardware and software.

  • Work closely with the rest of the research team to ensure systems are planned to be as efficient as possible from start to finish, and raise potential hardware-integration issues early.
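The latency/throughput benchmarking described above can be sketched in a few lines. This is a minimal illustration, not Luma's tooling; the `toy_model` workload and batch sizes are placeholders standing in for real model inference.

```python
import time

def benchmark(fn, batch, iters=50):
    """Time fn(batch) and report per-call latency and items/sec."""
    # Warm up so one-time costs (allocation, caches, JIT) don't skew timing.
    for _ in range(5):
        fn(batch)
    start = time.perf_counter()
    for _ in range(iters):
        fn(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / iters                   # seconds per call
    throughput = len(batch) * iters / elapsed   # items per second
    return latency, throughput

# Stand-in workload: a toy "model" that squares each input.
toy_model = lambda batch: [x * x for x in batch]

for batch_size in (1, 32, 1024):
    lat, thr = benchmark(toy_model, list(range(batch_size)))
    print(f"batch={batch_size:5d}  latency={lat * 1e6:8.1f} us  throughput={thr:12.0f} items/s")
```

Sweeping batch size this way makes the usual tradeoff visible: larger batches amortize per-call overhead and raise throughput, at the cost of per-request latency.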

Must have experience

  • Experience optimizing for memory, latency, and throughput in PyTorch.

    • Bonus: experience with non-NVIDIA systems

  • Experience using torch.compile / torch.XLA.

  • Experience benchmarking and profiling GPU & CPU code in PyTorch for optimal device utilization (examples: torch profiler, memory profilers, trace viewers, custom tooling).

  • Experience building tools & abstractions to ensure models run optimally on different hardware and software stacks.

  • Experience working with transformer models and attention implementations.

  • Experience with parallel inference, particularly tensor parallelism and pipeline parallelism.
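As an illustration of the tensor-parallel pattern named in the last bullet, here is a minimal single-process sketch (not a distributed implementation): a linear layer's weight is split column-wise, each shard computes a partial output as a separate "device" would, and the partials are concatenated afterwards, which in a real deployment is an all-gather across devices.

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 16)    # batch of activations
w = torch.randn(16, 32)   # full linear-layer weight

# Reference: the unsharded matmul.
full = x @ w

# Column-wise tensor parallelism: each "device" holds a slice of w's
# output columns and computes an independent partial result.
shards = torch.chunk(w, chunks=4, dim=1)     # 4 shards of shape (16, 8)
partials = [x @ shard for shard in shards]   # one matmul per device

# In a real setup this concat would be an all-gather across devices.
parallel = torch.cat(partials, dim=1)

torch.testing.assert_close(full, parallel)
```

The same decomposition underlies tensor-parallel transformer layers: attention heads and MLP columns shard naturally along the output dimension, trading extra communication for fitting weights in per-device memory.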

Good to have experience

  • Experience with high-performance Triton/CUDA and writing custom PyTorch kernels and ops. Top candidates can write fused kernels for common hot paths, know when to use lower-level features like tensor cores or warp intrinsics, and understand where these tools are most impactful.

  • Experience writing high-performance parallel C++. Bonus if done within an ML context with PyTorch, such as data loading, data processing, or inference code.

  • Experience building inference / demo prototype code (incl. Gradio, Docker, etc.).
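One CPU-runnable illustration of the "custom PyTorch ops" skill above is a `torch.autograd.Function` with a hand-written backward. This is a toy sketch of the op-registration side only: in production the forward and backward would dispatch to a fused Triton/CUDA kernel behind the same interface, and the op/scale names here are illustrative.

```python
import torch

class FusedScaleReLU(torch.autograd.Function):
    """Toy custom op: y = relu(x * scale), with a hand-written backward.
    A real fused kernel would replace the eager math in both methods."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.save_for_backward(x)
        ctx.scale = scale
        return torch.relu(x * scale)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx relu(scale * x) = scale where scale * x > 0, else 0.
        mask = (x * ctx.scale > 0).to(grad_out.dtype)
        return grad_out * mask * ctx.scale, None  # no gradient for scale

x = torch.randn(8, requires_grad=True)
y = FusedScaleReLU.apply(x, 2.0)
y.sum().backward()

# Check against the unfused eager composition.
x2 = x.detach().clone().requires_grad_(True)
ref = torch.relu(x2 * 2.0)
ref.sum().backward()
torch.testing.assert_close(y, ref)
torch.testing.assert_close(x.grad, x2.grad)
```

Checking a custom op's forward and gradients against the unfused eager composition, as done here, is the standard sanity test before swapping in a real fused kernel.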


About The Company

Palo Alto, California, United States (Hybrid)
