About the Job
Software is eating the world, but AI is eating software. We live in unprecedented times – AI has the potential to exponentially augment human intelligence. As the world adjusts to this new reality, leading tech companies are racing to build LLMs at billion-dollar scale, while large enterprises figure out how to add AI to their products. Ensuring these models are safe, aligned, and highly useful requires extremely high-quality human-generated data and evaluation. Since before the launch of ChatGPT, through to the latest generation of frontier models coming out today, Scale has been at the forefront of providing the post-training, fine-tuning, and human preference alignment (RLHF) data needed to make these models capable, aligned, and useful, via our Generative AI Data Engine. The data we produce underpins some of the most important work shaping how humanity will interact with AI.
As customers train their models on this data and work to improve them, they critically need trustworthy evaluations of model performance and the ability to identify weaknesses and potential vulnerabilities. Conducting these evaluations with our human experts is a significant and growing portion of Scale's work, helping model developers iteratively understand where to focus their technical investments.
The GenAI Safety & Evaluation product team at Scale is at the heart of this work, building a world-class customer-facing model evaluation platform. This platform enables customers to easily launch new evaluation workflows, deep dive into evaluation results down to the test case level to understand weaknesses and benchmark performance, and use these insights to drive model development roadmaps. In building this product, you will have a chance to shape the way that models across the industry are evaluated, impacting billions of people around the world. And as a newer product at Scale, you will have the opportunity to build something impactful from the ground up.
As part of the Safety & Evaluation product team, you will partner closely with researchers from Scale's Safety, Evaluations, and Alignment Lab (SEAL) to productize novel research, as well as with Scale's expert red team, which supports AI safety through rigorous model testing trusted by the White House, major enterprises, and leading model developers.
We're looking for entrepreneurial Software Engineers to join our team. In this role, you'll build these products and drive millions of dollars in revenue. You'll also gain broad exposure to the forefront of the AI race as Scale sees it across enterprises, startups, governments, and large tech companies.
The ideal candidate is a naturally entrepreneurial engineer who can take ambiguous scope and lead execution to outcomes, doing whatever it takes to hit them: backend and frontend coding, defining requirements, coordinating with other engineering and operations teams at Scale, and more. We strongly believe the best engineers own outcomes and deeply understand customer problems.
You will:
Ideally you'd have:
Nice to haves: