AI Research Engineer, Enterprise Evaluations
Scale AI
Job Summary
Scale AI is seeking an AI Research Engineer for its Enterprise Evaluations team to develop the industry's leading GenAI Evaluation Suite. This role involves contributing to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows. The ideal candidate has strong LLM knowledge and a passion for evaluation challenges, and can integrate novel research into evaluation systems, with a focus on human-rated datasets, autorater frameworks, and advanced analysis methodologies for enterprise agents.
Must Have
- Partner with Scale’s Operations team and enterprise customers to translate ambiguous requirements into structured evaluation data.
- Guide the creation and maintenance of gold-standard human-rated datasets and expert rubrics.
- Design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems.
- Pursue research initiatives that explore new methodologies for automatically analyzing, evaluating, and improving the behavior of enterprise agents.
- Bachelor’s degree in Computer Science, Electrical Engineering, or a related field.
- 2+ years of experience in Machine Learning or Applied Research.
- Hands-on experience with Large Language Models (LLMs) and Generative AI.
- Strong understanding of frontier model evaluation methodologies.
- Proficiency in Python and major ML frameworks (e.g., PyTorch, TensorFlow).
- Solid engineering and statistical analysis foundation.
Good to Have
- Advanced degree (Master’s or Ph.D.) in Computer Science, Machine Learning, or a related quantitative field.
- Published research in leading ML or AI conferences such as NeurIPS, ICML, ICLR, or KDD.
- Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems for complex models.
- Experience collaborating with operations or external teams to define high-quality human annotator guidelines.
- Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis.
- Experience contributing to scalable pipelines that automate the evaluation and monitoring of large-scale models and agents.
- Familiarity with distributed computing frameworks and modern cloud infrastructure.
Perks & Benefits
- Comprehensive health, dental and vision coverage
- Retirement benefits
- Learning and development stipend
- Generous PTO
- Equity-based compensation
- Commuter stipend (eligibility may vary)
Job Description
Scale AI is seeking a technically rigorous and driven AI Research Engineer to join our Enterprise Evaluations team. This high-impact role is critical to our mission of delivering the industry's leading GenAI Evaluation Suite. You will be a hands-on contributor to the core systems that ensure the safety, reliability, and continuous improvement of LLM-powered workflows and agents for the enterprise.
The ideal candidate has strong foundational knowledge of large language models and a passion for tackling complex evaluation challenges, and thrives in a dynamic, fast-paced research environment. We are looking for an engineer who thinks outside the box, stays current with the latest literature in AI evaluation, and is passionate about integrating novel research ideas into our workflows to build best-in-class evaluation systems.
Responsibilities
- Partner with Scale’s Operations team and enterprise customers to translate ambiguous requirements into structured evaluation data, guiding the creation and maintenance of gold-standard human-rated datasets and expert rubrics that anchor AI evaluation systems.
- Analyze feedback and collected data to identify patterns, refine evaluation frameworks, and establish iterative improvement loops that enhance the quality and relevance of human-curated assessments.
- Design, research, and develop LLM-as-a-Judge autorater frameworks and AI-assisted evaluation systems. This includes creating models that critique, grade, and explain agent outputs (e.g., RLAIF, model-judging-model setups), along with scalable evaluation pipelines and diagnostic tools; a minimal autorater sketch follows this list.
- Pursue research initiatives that explore new methodologies for automatically analyzing, evaluating, and improving the behavior of enterprise agents, pushing the boundaries of how AI systems are assessed and optimized in real-world contexts.
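To make the LLM-as-a-Judge responsibility above concrete, here is a minimal, hedged sketch of an autorater that asks a judge model to grade one agent response against one rubric dimension. The `call_judge_model` callable, the 1-5 score scale, the prompt wording, and the JSON response format are all illustrative assumptions, not Scale's actual evaluation schema or API.

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class JudgeResult:
    score: int       # 1 (poor) to 5 (excellent) on the rubric dimension
    rationale: str   # the judge model's written explanation

# Illustrative prompt template; a real rubric would be far more detailed.
JUDGE_PROMPT = """You are grading an enterprise agent's response against a rubric.

Rubric dimension: {dimension}
Rubric guidance: {guidance}

Task given to the agent:
{task}

Agent response:
{response}

Return a JSON object with keys "score" (an integer from 1 to 5) and "rationale" (a string)."""

def grade_response(
    call_judge_model: Callable[[str], str],  # assumed judge-LLM client, not a real vendor API
    dimension: str,
    guidance: str,
    task: str,
    response: str,
) -> JudgeResult:
    """Ask the judge model to score one agent response on one rubric dimension."""
    prompt = JUDGE_PROMPT.format(
        dimension=dimension, guidance=guidance, task=task, response=response
    )
    raw = call_judge_model(prompt)
    parsed = json.loads(raw)  # a production autorater would validate and retry here
    return JudgeResult(score=int(parsed["score"]), rationale=str(parsed["rationale"]))
```

In practice, a system like this would validate and retry malformed judge outputs, sample the judge multiple times, and aggregate scores across rubric dimensions and evaluation items before reporting results.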
Basic Qualifications
- Bachelor’s degree in Computer Science, Electrical Engineering, a related field, or equivalent practical experience.
- 2+ years of experience in Machine Learning or Applied Research, focused on applied ML systems or evaluation infrastructure.
- Hands-on experience with Large Language Models (LLMs) and Generative AI in professional or research environments.
- Strong understanding of frontier model evaluation methodologies and the current research landscape.
- Proficiency in Python and major ML frameworks (e.g., PyTorch, TensorFlow).
- Solid engineering and statistical analysis foundation, with experience developing data-driven methods for assessing model quality (an example agreement check follows this list).
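As a small illustration of the kind of data-driven model-quality analysis referenced above, the following sketch computes chance-corrected agreement (Cohen's kappa) between gold human labels and autorater labels. The pass/fail label set and the example data are fabricated for illustration only.

```python
from collections import Counter

def cohen_kappa(human: list[str], autorater: list[str]) -> float:
    """Chance-corrected agreement between human gold labels and autorater labels."""
    assert human and len(human) == len(autorater)
    n = len(human)
    # Observed agreement: fraction of items where the two label sources match.
    observed = sum(h == a for h, a in zip(human, autorater)) / n
    # Expected agreement by chance, from the marginal label frequencies.
    human_freq = Counter(human)
    auto_freq = Counter(autorater)
    labels = set(human_freq) | set(auto_freq)
    expected = sum((human_freq[label] / n) * (auto_freq[label] / n) for label in labels)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Fabricated example labels, for illustration only:
gold = ["pass", "fail", "pass", "pass", "fail"]
auto = ["pass", "fail", "fail", "pass", "fail"]
print(f"Cohen's kappa = {cohen_kappa(gold, auto):.2f}")  # ~0.62 on this toy data
```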
Preferred Qualifications
- Advanced degree (Master’s or Ph.D.) in Computer Science, Machine Learning, or a related quantitative field.
- Published research in leading ML or AI conferences such as NeurIPS, ICML, ICLR, or KDD.
- Experience designing, building, or deploying LLM-as-a-Judge frameworks or other automated evaluation systems for complex models.
- Experience collaborating with operations or external teams to define high-quality human annotator guidelines.
- Expertise in ML research engineering, stochastic systems, observability, or LLM-powered applications for model evaluation and analysis.
- Experience contributing to scalable pipelines that automate the evaluation and monitoring of large-scale models and agents.
- Familiarity with distributed computing frameworks and modern cloud infrastructure.