Deep Learning Engineer, Datacenters

6 Months ago • 3 Years +
Research Development

Job Description

NVIDIA's Deep Learning Engineer in Datacenters will help develop software infrastructure to analyze deep learning applications, evolve cost-efficient datacenter architectures for LLMs, and work with experts to develop analysis and profiling tools in Python, bash, and C++. Responsibilities involve analyzing system and software characteristics of DL applications, developing analysis tools, and measuring key performance metrics to estimate efficiency improvements. The role requires collaboration with various teams across NVIDIA, from research to silicon architecture. The ideal candidate will have experience with system software, GPU kernels, or DL frameworks and a strong understanding of system architecture and performance.
Good To Have:
  • CUDA, PyTorch, TensorFlow
  • Containerization (Docker), Slurm
  • Performance monitoring tools (perf, gprof)
  • Performance modeling (CPU, GPU, Memory, Network)
  • Multi-site/functional team experience
Must Have:
  • Bachelor's degree in EE/CS (Master's/PhD preferred)
  • 3+ years relevant experience
  • System software/Silicon architecture experience
  • C/C++ and Python programming
  • Deep Learning application analysis


As NVIDIA makes inroads into the Datacenter business, our team plays a central role in getting the most out of our exponentially growing datacenter deployments and in establishing a data-driven approach to hardware design and system software development. We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams to silicon architecture teams. As our team grows, and as we seek to identify and seize long-term opportunities, our skill-set needs are expanding as well.

Do you want to influence the development of high-performance datacenters designed for the future of AI? Do you have an interest in system architecture and performance? In this role you will explore how CPU, GPU, networking, and I/O relate to deep learning (DL) architectures for Natural Language Processing, Computer Vision, Autonomous Driving, and other technologies. Come join our team and bring your interests to help us optimize our next-generation systems and Deep Learning Software Stack.

What you'll be doing:

  • Help develop software infrastructure to characterize and analyze a broad range of Deep Learning applications.
  • Evolve cost-efficient datacenter architectures tailored to meet the needs of Large Language Models (LLMs).
  • Work with experts to help develop analysis and profiling tools in Python, bash, and C++ to measure key performance metrics of DL workloads running on NVIDIA systems.
  • Analyze system and software characteristics of DL applications.
  • Develop analysis tools and methodologies to measure key performance metrics and to estimate potential for efficiency improvement.
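To give a flavor of the profiling work described above, here is a minimal, hypothetical Python sketch (not NVIDIA's actual tooling) that collects per-GPU utilization and memory metrics by parsing the CSV output of the standard `nvidia-smi` query interface; the field list and helper names are illustrative assumptions:

```python
import subprocess

# Fields to query per GPU; nvidia-smi emits one CSV row per device.
QUERY_FIELDS = ["index", "utilization.gpu", "memory.used", "memory.total"]

def parse_gpu_metrics(csv_text):
    """Parse `nvidia-smi --query-gpu=... --format=csv,noheader,nounits`
    output into a list of dicts, one per GPU."""
    metrics = []
    for line in csv_text.strip().splitlines():
        values = [v.strip() for v in line.split(",")]
        metrics.append(dict(zip(QUERY_FIELDS, values)))
    return metrics

def sample_gpu_metrics():
    """Invoke nvidia-smi (requires an NVIDIA driver on the host)."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=" + ",".join(QUERY_FIELDS),
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return parse_gpu_metrics(out.stdout)

# Example output as it might appear on a hypothetical 2-GPU node:
sample = "0, 87, 40432, 81559\n1, 91, 39872, 81559\n"
print(parse_gpu_metrics(sample))
```

In practice such a collector would be sampled in a loop and aggregated over a workload's runtime to estimate utilization headroom and potential efficiency improvements.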

What we need to see:

  • A Bachelor's degree in Electrical Engineering or Computer Science with 3+ years of relevant experience (Master's or PhD preferred).
  • Experience in at least one of the following:
    • System Software: Operating Systems (Linux), Compilers, GPU kernels (CUDA), DL Frameworks (PyTorch, TensorFlow).
    • Silicon Architecture and Performance Modeling/Analysis: CPU, GPU, Memory or Network Architecture
  • Experience programming in C/C++ and Python. Exposure to containerization platforms (Docker) and datacenter workload managers (Slurm) is a plus.
  • Demonstrated ability to work in distributed, virtual teams and a strong drive to own tasks from beginning to end; prior experience in such environments will make you stand out.

Ways to stand out from the crowd:

  • Background in system software: operating system internals, GPU kernels (CUDA), or DL frameworks (PyTorch, TensorFlow).
  • Experience with silicon performance monitoring or profiling tools (e.g., perf, gprof, nvidia-smi, DCGM).
  • In-depth performance modeling experience in any one of CPU, GPU, memory, or network architecture.
  • Exposure to containerization platforms (Docker) and datacenter workload managers (Slurm).
  • Prior experience with multi-site or multi-functional teams.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!

#LI-Hybrid
