NIM Solution Architect

NVIDIA

Job Summary

NVIDIA is a leading AI computing company. The Solution Architect will drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions, optimize large models, create AI workflows, and support customers in implementing advanced AI solutions. This role involves packaging models into containers, designing agentic AI, delivering technical projects, providing customer support, and collaborating with cross-functional teams to expand NVIDIA's AI solutions portfolio.

Must Have

  • Drive implementation and deployment of NVIDIA Inference Microservice (NIM) solutions.
  • Package optimized models (LLM, VLM, Retriever, CV, OCR) into containers using the NIM Factory Pipeline.
  • Design and implement agentic AI tailored to customer business scenarios using NIMs.
  • Deliver technical projects, demos, and client support tasks as directed.
  • Provide technical support and guidance to customers on NVIDIA technologies and products.
  • Collaborate with cross-functional teams to enhance and expand AI solutions portfolio.
  • 3+ years of working experience with a Bachelor's or Master's degree in Computer Science, AI, or a related field.
  • Proven experience in deploying and optimizing large language models.
  • Proficiency in at least one inference framework (e.g., TensorRT, ONNX Runtime, PyTorch).
  • Strong programming skills in Python or C++.
  • Familiarity with mainstream inference engines (e.g., vLLM, SGLang).
  • Experience with DevOps/MLOps tools and practices such as Docker, Git, and CI/CD.
  • Excellent problem-solving skills and ability to troubleshoot complex technical issues.
  • Demonstrated ability to collaborate effectively across diverse, global teams.

Good to Have

  • Experience in architectural design for field LLM projects.
  • Expertise in model optimization techniques, particularly using TensorRT.
  • Knowledge of AI workflow design and implementation.
  • Experience with cluster resource management tools.
  • Familiarity with agile development methodologies.
  • CUDA optimization experience.
  • Extensive experience designing and deploying large-scale HPC and enterprise computing systems.

Job Description

NIM Solution Architect

NVIDIA is a leading AI computing company. At NVIDIA, our employees are passionate about AI, HPC, visualization, and gaming. Our Solution Architect team focuses on bringing NVIDIA's newest technologies into different industries: we help design the architecture of AI computing platforms and analyze AI and HPC applications to deliver value to our customers. This role will be instrumental in leveraging NVIDIA's cutting-edge technologies to optimize open-source and proprietary large models, create AI workflows, and support our customers in implementing advanced AI solutions.

What you’ll be doing:

  • Drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions
  • Use the NVIDIA NIM Factory Pipeline to package optimized models (LLM, VLM, Retriever, CV, OCR, etc.) into containers that provide standardized API access for on-prem or cloud deployment (see the illustrative sketch after this list)
  • Refine NIM tooling for the community and help community members build their own performant NIMs
  • Design and implement agentic AI tailored to customer business scenarios using NIMs
  • Deliver technical projects, demos and client support tasks as directed by the Solution Architecture Leadership
  • Provide technical support and guidance to customers, facilitating the adoption and implementation of NVIDIA technologies and products
  • Collaborate with cross-functional teams to enhance and expand our AI solutions portfolio
  • Be an internal champion for NVIDIA software and total solutions in the technical community
  • Be an industry thought leader on integrating NVIDIA technology, especially inference services, with LHAs, business partners, and the broader community
  • Assist in supporting the NVAIE (NVIDIA AI Enterprise) team and driving NVAIE business in China
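
For illustration only: deployed NIM containers expose a standardized, OpenAI-compatible HTTP API, so the "standardized API access" above can be exercised with a few lines of Python. This is a minimal sketch under assumptions: it presumes a local container listening on port 8000, and the endpoint URL and model name are placeholders that vary by deployment.

    import requests

    # Placeholder endpoint: NIM LLM containers typically serve an
    # OpenAI-compatible API on port 8000 of the host running them.
    NIM_URL = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "meta/llama-3.1-8b-instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarize what a NIM is."}],
        "max_tokens": 128,
    }

    # Send the request and print the generated answer.
    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])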

What we need to see:

  • 3+ years of working experience with a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field
  • Proven experience in deploying and optimizing large language models
  • Proficiency in at least one inference framework (e.g., TensorRT, ONNX Runtime, PyTorch)
  • Strong programming skills in Python or C++
  • Familiarity with mainstream inference engines (e.g., vLLM, SGLang; see the brief sketch after this list)
  • Experience with DevOps/MLOps tools and practices such as Docker, Git, and CI/CD
  • Excellent problem-solving skills and ability to troubleshoot complex technical issues
  • Demonstrated ability to collaborate effectively across diverse, global teams, adapting communication styles while maintaining clear, constructive professional interactions
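
As a brief illustration of the inference-engine familiarity listed above, the following sketch uses vLLM's offline Python API to generate text. It is a minimal example under assumptions: the model name is a small placeholder checkpoint and the sampling settings are arbitrary.

    from vllm import LLM, SamplingParams

    # Placeholder model; any checkpoint the engine supports can be used.
    llm = LLM(model="facebook/opt-125m")

    # Arbitrary sampling settings for the illustration.
    params = SamplingParams(temperature=0.7, max_tokens=64)
    outputs = llm.generate(["Explain what an inference microservice is."], params)

    for out in outputs:
        print(out.outputs[0].text)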

Ways to stand out from the crowd:

  • Experience in architectural design for field LLM projects
  • Expertise in model optimization techniques, particularly using TensorRT (see the sketch after this list)
  • Knowledge of AI workflow design and implementation
  • Experience with cluster resource management tools
  • Familiarity with agile development methodologies
  • CUDA optimization experience
  • Extensive experience designing and deploying large-scale HPC and enterprise computing systems
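
Purely as an illustrative sketch of the TensorRT optimization expertise mentioned above (assuming the TensorRT 8.x Python API; the ONNX model path, engine file name, and FP16 choice are placeholders), building a serialized inference engine looks roughly like this:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # Explicit-batch network definition, as required by the ONNX parser.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:  # placeholder model path
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 optimization

    # Serialize the optimized engine so it can be deployed for inference.
    engine_bytes = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine_bytes)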
