About Scale:
Scale’s Generative AI ML team develops models and services to power high-quality data generation and evaluation for the most advanced large language models on earth. We also conduct applied research on model supervision and algorithmic approaches that support frontier models for Scale’s applied-ML teams and the broader AI community. Scale is uniquely positioned at the center of the AI ecosystem as a leading provider of training and evaluation data, end-to-end ML lifecycle solutions, and frontier evaluations for public and private institutions.
About The Role:
This role focuses on developing ML systems that automate data quality evaluation and generation using large language models. You’ll build scalable systems to assess quality across accuracy, instruction adherence, factuality, and reasoning, and design robust evaluation frameworks to ensure alignment with human standards. This is one of the highest-impact areas in the company and directly accelerates the development of aligned, performant foundation models.
You’ll be deeply involved in the full lifecycle: from model design and fine-tuning, to prototyping, deployment, and monitoring. You’ll partner closely with engineering, research, and product teams to deliver cutting-edge solutions for both customers and internal GenAI data engines — Scale’s fastest-growing business.
If you’re excited about combining human and machine evaluation, scaling high-quality training data, and shaping the next generation of foundation models, we’d love to hear from you.
You will:
Ideally you’d have:
Nice to haves: