Summary
Scale seeks a Manager, Machine Learning Research Engineer specializing in Generative AI. You'll lead a team of researchers and engineers developing cutting-edge LLMs and generative models. Must have experience with LLMs, deep learning, and production-level model training, especially using RLHF.
Scale's Generative AI Data Engine powers the most advanced LLMs and generative models in the world through world-class RLHF/RLAIF, data generation, model evaluation, safety, and alignment.
As the Manager of the Generative AI team, you will be responsible for managing and leading a group of talented researchers and engineers. Your primary focus will be to leverage your expertise in LLMs, generative models, and other foundational models to create and execute an AI roadmap that helps Scale accelerate our customers' Generative AI initiatives. This is an exciting opportunity to work on cutting-edge technologies and collaborate with industry-leading professionals.
We are building a large hybrid human-machine system in service of ML pipelines for dozens of industry-leading customers. We currently complete millions of tasks a month and will grow to complete billions monthly.
You will:
- Manage a team of highly effective researchers and engineers, providing guidance, mentorship, and technical leadership on Generative AI projects.
- Develop and evaluate methods for integrating machine learning into human-in-the-loop labeling systems to ensure high label quality and throughput for our customers.
- Implement and improve on state-of-the-art models developed internally and by the community, and put them into production to solve problems for our customers and taskers.
- Work with product and research teams to identify opportunities for improvement in our current product line and for enabling upcoming product lines.
- Work with massive datasets to develop both generic models as well as fine-tune models for specific products.
- Work with customers and 3rd party research groups to understand their goals and define how we can enable them.
- Build a scalable ML platform to automate our ML services, including automated model retraining and evaluation.
- Be able and willing to multi-task and learn new technologies quickly.
- Must be able to commute to the San Francisco office 1-2x weekly.
Ideally you'd have:
- 7+ years of full-time work experience using LLMs, deep learning, deep reinforcement learning, or natural language processing in a production environment, especially training foundational AI models through pre-training, fine-tuning, and RLHF.
- A vision for where the field should go and what Scale should do to enable it.
- Strong programming skills in Python and experience with PyTorch or TensorFlow
- Experience with MLOps and the automation of model training & evaluation
- Experience working with a cloud technology stack (e.g., AWS or GCP) and developing machine learning models in a cloud environment
- Solid background in algorithms, data structures, and object-oriented programming
- Deep appreciation for building high-quality, robust, reusable machine-learning software
- Degree in computer science or related field
Nice to haves:
- Graduate degree in Computer Science with a Machine Learning or Artificial Intelligence specialization
- Publications in the field or on related topics.
- Experience with model optimization techniques for both training and inference