Machine Learning Performance Engineer

Job Details

About the Position

We’re looking for smart and curious individuals from academia to join our growing team and drive our ML work.

On our Machine Learning team, you'll build the deep learning models that power our trading strategies, supported by our rapidly growing computing cluster with thousands of H100s/H200s. Trading poses unusual challenges—extreme latency constraints, large datasets, complex feedback loops and a high level of noise—that force us to search for novel tricks.

Researchers, engineers and traders sit a few feet away from each other and work together to train models, architect systems and run trading strategies. Depending on the day, we might be diving deep into market data, tuning hyperparameters, debugging distributed training performance or studying how our model likes to trade in production.

You’ll be focused on optimising the performance of our models—both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, including storage systems, networking and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level—is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long?

About You

If you’ve never thought about a career in finance, you’re in good company. Many of us were in the same position before working here. If you have a curious mind and a passion for solving interesting problems, we have a feeling you’ll fit right in. There’s no fixed set of skills we are looking for, but you should have:

  • An understanding of modern ML techniques and toolsets
  • The experience and systems knowledge required to debug a training run’s performance end-to-end
  • Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores and the memory hierarchy
  • Debugging and optimisation experience using tools like CUDA GDB, Nsight Systems and Nsight Compute
  • Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN and cuBLAS
  • Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronisation and asynchronous memory loads
  • Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimisation and NVLink, and how to use these networking technologies to link up GPU clusters
  • An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
  • An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools
  • Fluency in English

If you're a recruiting agency and want to partner with us, please reach out to agency-partnerships@janestreet.com.


About The Company

Jane Street is a quantitative trading firm with offices in New York, London, Hong Kong, Singapore, and Amsterdam. We are always recruiting top candidates and we invest heavily in teaching and training. The environment at Jane Street is open, informal, intellectual, and fun. People grow into long careers here because there are always new and interesting problems to solve, systems to build, and theories to test.



London, England, United Kingdom (On-Site)
