Meta is seeking a Research Scientist to join our Research & Development teams. The ideal candidate will have industry experience working on AI Infrastructure-related topics. The position will involve applying these skills to solve some of the most crucial and exciting problems in the hardware/software space for AI training. We are hiring in multiple locations and across different teams:
The Model/System Co-Design team works on (1) optimizing parallelism strategies, compute efficiency, and training paradigms to improve the scalability and reliability of large-scale distributed training systems; (2) innovating and co-designing novel model architectures for sustained scaling and hardware efficiency; (3) co-designing learning algorithms to improve the efficiency and robustness of training convergence. We have successfully landed a number of step-function changes in both LLM pre-training and ranking/recommendation model co-design, and we continue to focus on bleeding-edge exploration to achieve industry-leading scale and efficiency.
The MTIA Training Performance team is dedicated to maximizing the training performance of Generative AI and recommendation models on Meta's custom accelerators. We model and project the performance of current and future training workloads on custom hardware while it is being designed, providing early, crucial feedback to the architecture, compiler, and kernel teams. We employ cutting-edge optimization and data-parallelization strategies to maximize training throughput for the next generations of LLMs and deep recommendation models, and we work cross-functionally with many partner teams to ensure the end-to-end performance of large-scale training, so we can deliver the next generation of Generative AI experiences to our users more quickly.
The Collectives and Communication team within AI Co-design helps drive the development, optimization, and tuning of collective communications libraries for Nvidia GPUs, MTIA accelerators, and AMD GPUs, covering both AI training and inference use cases. The comms team works to optimize communications performance at scale and to investigate improvements to algorithms, tooling, and interfaces that can impact Meta workloads. We actively work in multiple HPC collective communication libraries and collaborate with teams across Meta and externally.

Research Scientist, Systems ML and HPC - SW/HW Co-Design Responsibilities
Apply High-Performance Computing (HPC) algorithms and techniques to optimize large-scale AI workloads
Analyze, benchmark, and optimize large-scale workloads on next-generation training superclusters
Apply relevant AI infrastructure and software/hardware acceleration techniques to build and optimize our intelligent ML systems that improve Meta’s products and experiences
Influence next-generation model and hardware architecture choices by projecting training performance and model efficiency
Set goals related to project impact, AI system design, and infrastructure/developer efficiency
Deliver impact directly, or by influencing partners, through deep, thorough, data-driven analysis
Drive large projects across multiple teams
Define use cases and develop methodology and benchmarks to evaluate different approaches
Apply in-depth knowledge of how ML infrastructure interacts with the other systems around it
Apply experience in systems software development, such as collective communications