Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

NVIDIA

Job Summary

NVIDIA Dynamo is a high-throughput, low-latency inference framework for serving generative AI and reasoning models across multi-node distributed environments. This Principal Software Engineer role defines the vision and roadmap for memory management across large-scale LLM inference and storage systems. Key responsibilities include designing a unified memory layer spanning GPU, host, SSD, and remote storage tiers; architecting KV-cache-focused integrations with LLM serving engines; and co-designing interfaces for efficient data sharing. The role requires deep expertise in distributed systems, memory hierarchies, and performance optimization, leveraging GPU and networking technologies such as GPUDirect, RDMA, and NVLink.

Job Description

NVIDIA Dynamo is a high-throughput, low-latency inference framework for serving generative AI and reasoning models across multi-node distributed environments. Built in Rust for performance and Python for extensibility, Dynamo orchestrates GPU shards, routes requests, and manages shared KV cache across heterogeneous clusters so that many accelerators feel like a single system at datacenter scale. As large language models rapidly outgrow the memory and compute budget of any single GPU, this platform enables efficient, resilient deployment of cutting-edge LLM workloads.
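Shared KV-cache management is what lets those routing decisions save real work. As a rough illustration only (hypothetical types and names, not Dynamo's actual API), a KV-aware router can score each worker by how long a prefix of an incoming prompt's blocks is already cached there, then send the request where the least prefill remains:

```rust
use std::collections::HashSet;

/// Hypothetical worker descriptor: the hashes of prompt blocks whose KV
/// entries are already resident on this worker (illustrative only).
struct Worker {
    id: usize,
    cached_blocks: HashSet<u64>,
}

/// Route a request to the worker caching the longest prefix of its prompt
/// blocks; only a contiguous prefix match lets prefill work be skipped.
fn route(request_blocks: &[u64], workers: &[Worker]) -> Option<usize> {
    workers
        .iter()
        .map(|w| {
            let overlap = request_blocks
                .iter()
                .take_while(|b| w.cached_blocks.contains(*b))
                .count();
            (overlap, w.id)
        })
        .max() // largest (overlap, id) pair wins; ties break toward higher id
        .map(|(_, id)| id)
}

fn main() {
    let workers = vec![
        Worker { id: 0, cached_blocks: [1, 2].into_iter().collect() },
        Worker { id: 1, cached_blocks: [1, 2, 3].into_iter().collect() },
    ];
    // Block values stand in for hashes of fixed-size prompt chunks.
    assert_eq!(route(&[1, 2, 3, 4], &workers), Some(1));
}
```

Anything beyond the matched prefix must still be fetched from a slower tier or recomputed, which is exactly where the memory-management work described below comes in.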

We are seeking a Principal Software Engineer to define the vision and roadmap for memory management across large-scale LLM inference and storage systems.

What you'll be doing:

  • Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote file/object/cloud storage to support large-scale LLM inference (see the sketch after this list).
  • Architect and implement deep integrations with leading LLM serving engines (such as vLLM, SGLang, TensorRT-LLM), with a focus on KV-cache offload, reuse, and remote sharing across heterogeneous and disaggregated clusters.
  • Co-design interfaces and protocols that enable disaggregated prefill, peer-to-peer KV-cache sharing, and multi-tier KV-cache storage (GPU, CPU, local disk, and remote memory) for high-throughput, low-latency inference.
  • Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low-latency KV-cache access and sharing across heterogeneous accelerators and memory pools.
  • Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer-facing technical deep dives).
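
To make the unified-memory-layer and multi-tier bullets above concrete, here is a minimal sketch of what a single, tier-spanning KV interface can look like. Every name in it is hypothetical; the bullets describe goals, not this exact design:

```rust
/// Tiers a KV block can live in, ordered fastest to slowest. Hypothetical;
/// a real layer would carry device ids, NUMA nodes, and remote endpoints.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum Tier {
    GpuHbm,
    PinnedHost,
    RdmaRemote,
    LocalSsd,
    ObjectStore,
}

/// One logical KV block, addressed by content hash regardless of placement.
pub struct BlockRef {
    pub block_hash: u64,
    pub tier: Tier,
}

/// The unified layer: callers ask for blocks by hash and never see which
/// tier serves them. Sketch only; async and error handling are omitted.
pub trait KvLayer {
    /// Locate a block wherever it currently resides.
    fn lookup(&self, block_hash: u64) -> Option<BlockRef>;
    /// Stage a block toward GPU HBM ahead of a prefill that will reuse it.
    fn promote(&mut self, block_hash: u64, target: Tier);
    /// Push a cold block down the hierarchy to free HBM for active work.
    fn demote(&mut self, block_hash: u64, target: Tier);
}
```

The design point is that serving-engine integrations program against lookup/promote/demote rather than against any specific tier, so offload and eviction policies can evolve without breaking the vLLM-, SGLang-, or TensorRT-LLM-facing code.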

What we need to see:

  • Master's degree, PhD, or equivalent experience.
  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services.
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency.
  • Experience with distributed caching or key-value systems, especially designs optimized for low latency and high concurrency.
  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters.
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in time-to-first-token (TTFT) and throughput (see the sketch after this list).
  • Excellent communication skills and prior experience leading cross-functional efforts with research, product, and customer teams.
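
The metrics named in the profiling bullet drive the architecture, so they are worth pinning down. A minimal sketch, assuming per-request timestamps are recorded (all field and function names are illustrative):

```rust
use std::time::{Duration, Instant};

/// Per-request timestamps a serving stack can record (illustrative names).
struct RequestTrace {
    arrived: Instant,
    first_token: Instant,
    finished: Instant,
    tokens_generated: u64,
}

/// Time-to-first-token: queueing plus prefill, so it shrinks directly
/// with KV-cache reuse.
fn ttft(t: &RequestTrace) -> Duration {
    t.first_token - t.arrived
}

/// Decode throughput in tokens per second once generation has started.
fn decode_throughput(t: &RequestTrace) -> f64 {
    t.tokens_generated as f64 / (t.finished - t.first_token).as_secs_f64()
}

fn main() {
    let t0 = Instant::now();
    let trace = RequestTrace {
        arrived: t0,
        first_token: t0 + Duration::from_millis(120),
        finished: t0 + Duration::from_millis(1120),
        tokens_generated: 50,
    };
    println!(
        "TTFT = {:?}, decode = {:.0} tok/s",
        ttft(&trace),
        decode_throughput(&trace)
    );
}
```

TTFT improvements are the most direct evidence that KV-cache reuse and offload are working, because cache hits shorten or eliminate prefill.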

Ways to stand out from the crowd:

  • Prior contributions to open-source LLM serving or systems projects focused on KV-cache optimization, compression, streaming, or reuse.
  • Experience designing unified memory or storage layers that expose a single logical KV or object model across GPU, host, SSD, and cloud tiers, especially in enterprise or hyperscale environments.
  • Publications or patents in areas such as LLM systems, memory-disaggregated architectures, RDMA/NVLink-based data planes, or KV-cache/CDN-like systems for ML.

With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us, and, due to outstanding growth, our engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 272,000 USD - 425,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until December 26, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
