About the Team

The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance's Core Compute Infrastructure organization, responsible for designing and operating the platforms that power microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters.

With ByteDance's rapidly growing businesses and a global fleet of machines running hundreds of millions of containers daily, we are building the next generation of cloud-native, GPU-optimized orchestration systems. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use, enabling both internal and external developers to bring AI workloads from research to production at scale.

We are expanding our focus on LLM inference infrastructure to support new AI workloads and are looking for engineers passionate about cloud-native systems, scheduling, and GPU acceleration. You'll work in a hyper-scale environment, collaborate with world-class engineers, contribute to the open-source community, and help shape the future of AI inference infrastructure globally.

Responsibilities

- Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
- Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
- Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines.
- Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
- Write high-quality, production-ready code that is maintainable, testable, and scalable.
Minimum Qualifications

- B.S./M.S. in Computer Science, Computer Engineering, or a related field with 2+ years of relevant experience (Ph.D. candidates with strong systems/ML publications are also considered).
- Strong understanding of large-model inference, distributed and parallel systems, and/or high-performance networking systems.
- Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
- Solid knowledge of container and orchestration technologies (Docker, Kubernetes).
- Proficiency in at least one major programming language (Go, Rust, Python, or C++).

Preferred Qualifications

- Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
- Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
- Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
- Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
- Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms.
- Excellent communication skills and the ability to collaborate across global, cross-functional teams.
- Passion for system efficiency, performance optimization, and open-source innovation.