Software Engineer - Inference Infrastructure

2+ years experience • $129,960 - $246,240 per year
Research Development

Job Description

The Inference Infrastructure team at ByteDance is building the next generation of cloud-native, GPU-optimized orchestration systems for large-scale LLM inference. This role involves designing and building highly performant, scalable, and cost-efficient container-based cluster management and orchestration systems, architecting cloud-native GPU and AI accelerator infrastructure, and collaborating on inference solutions built on engines such as vLLM, SGLang, and TensorRT-LLM. The engineer will contribute to open source and help shape the future of AI inference infrastructure globally.

About the Team

The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance’s Core Compute Infrastructure organization, responsible for designing and operating the platforms that power microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters. With ByteDance’s rapidly growing businesses and a global fleet of machines running hundreds of millions of containers daily, we are building the next generation of cloud-native, GPU-optimized orchestration systems. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use, enabling both internal and external developers to bring AI workloads from research to production at scale. We are expanding our focus on LLM inference infrastructure to support new AI workloads, and we are looking for engineers passionate about cloud-native systems, scheduling, and GPU acceleration. You’ll work in a hyper-scale environment, collaborate with world-class engineers, contribute to the open-source community, and help shape the future of AI inference infrastructure globally.

Responsibilities

  • Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
  • Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
  • Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines.
  • Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
  • Write high-quality, production-ready code that is maintainable, testable, and scalable.
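Illustrative example (not part of the posting): a minimal sketch of the kind of container-based LLM inference orchestration described above, using the official Kubernetes Python client to deploy a hypothetical single-GPU vLLM server. The namespace, image tag, and model name are assumptions chosen for illustration, not AIBrix or ByteDance specifics.

    # Minimal sketch: deploy a single-GPU vLLM inference server on Kubernetes.
    # Assumes a reachable cluster and the official `kubernetes` Python client;
    # image tag, model, and namespace are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    container = client.V1Container(
        name="vllm-server",
        image="vllm/vllm-openai:latest",       # assumed public vLLM server image
        args=["--model", "facebook/opt-125m",  # small model, for illustration only
              "--tensor-parallel-size", "1"],
        ports=[client.V1ContainerPort(container_port=8000)],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}     # ask the scheduler for one GPU
        ),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="vllm-inference"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "vllm"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "vllm"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

A production system of the kind this role describes would layer scheduling, autoscaling, request routing, and GPU isolation on top of primitives like these.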

Qualifications

Minimum Qualifications

  • B.S./M.S. in Computer Science, Computer Engineering, or related fields with 2+ years of relevant experience (a Ph.D. with strong systems/ML publications will also be considered).
  • Strong understanding of large model inference, distributed and parallel systems, and/or high-performance networking systems.
  • Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
  • Solid knowledge of container and orchestration technologies (Docker, Kubernetes).
  • Proficiency in at least one major programming language (Go, Rust, Python, or C++).

Preferred Qualifications

  • Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
  • Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
  • Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
  • Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
  • Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms.
  • Excellent communication skills and the ability to collaborate across global, cross-functional teams.
  • Passion for system efficiency, performance optimization, and open-source innovation.

Job Information

For Pay Transparency: Compensation Description (Annual)

The base salary range for this position in the selected city is $129,960 - $246,240 annually.

Compensation may vary outside of this range depending on a number of factors, including a candidate’s qualifications, skills, competencies, experience, and location. Base pay is one part of the total package provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and restricted stock units.

Benefits may vary depending on the nature of employment and the country of the work location. Employees have day-one access to medical, dental, and vision insurance, a 401(k) savings plan with company match, paid parental leave, short-term and long-term disability coverage, life insurance, and wellbeing benefits, among others. Employees also receive 10 paid holidays per year, 10 paid sick days per year, and 17 days of Paid Personal Time (prorated upon hire, with accruals increasing by tenure).

The Company reserves the right to modify or change these benefits programs at any time, with or without notice.

For Los Angeles County (unincorporated) Candidates:

Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state, and local laws, including the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Our company believes that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of the conditional offer of employment:

1. Interacting and occasionally having unsupervised contact with internal/external clients and/or colleagues;

2. Appropriately handling and managing confidential information, including proprietary and trade secret information and access to information technology systems; and

3. Exercising sound judgment.
