About Scale
At Scale AI, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, fueling the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we’re amplifying access to high-quality data to drive progress toward Artificial General Intelligence (AGI). Building on our history of model evaluation with enterprise and government customers, we are expanding our capabilities to set new standards for both public and private evaluations.
About This Role
This role operates at the forefront of AI research and real-world implementation, with a strong focus on reasoning within large language models (LLMs). The ideal candidate will study the data types critical for advancing LLM-based agents, including browser and software engineering (SWE) agents. You will play a key role in shaping Scale’s data strategy by identifying the most effective data sources and methodologies for improving LLM reasoning. Success in this role requires a deep understanding of LLMs, planning algorithms, and novel approaches to agentic reasoning, as well as creativity in tackling challenges related to data generation, model interaction, and evaluation. You will contribute to impactful research on language model reasoning, collaborate with external researchers, and work closely with engineering teams to bring state-of-the-art advancements into scalable, real-world solutions.
Ideally, you’d have:
Practical experience working with LLMs, with proficiency in frameworks like PyTorch, JAX, or TensorFlow. You should also be skilled at rapidly interpreting research literature and turning new ideas into working prototypes.
A track record of published research in top ML and NLP venues (e.g., ACL, EMNLP, NAACL, NeurIPS, ICML, ICLR, COLM, etc.).
At least three years of experience solving complex ML challenges, either in a research setting or product development, particularly in areas related to LLM capabilities and reasoning.
Strong written and verbal communication skills, along with the ability to work effectively across teams.
Nice to have:
Hands-on experience fine-tuning open-source LLMs or leading bespoke LLM fine-tuning projects using PyTorch/JAX.
Research and practical experience in building applications and evaluations related to LLM-based agents, including tool-use, text-to-SQL, browser agents, coding agents, and GUI agents.
Experience with agent frameworks such as OpenHands, Swarm, LangGraph, or similar.
Familiarity with advanced agentic reasoning techniques such as STaR and PLANSEARCH.
Proficiency in cloud-based ML development, with experience in AWS or GCP environments.
Our research interviews are designed to assess candidates' ability to prototype and debug ML models, their depth of understanding in research concepts, and their alignment with our organizational culture. We do not conduct LeetCode-style problem-solving assessments.