As the leading data and evaluation partner for frontier AI companies, Scale plays an integral role in understanding the capabilities of large language models (LLMs) and in safeguarding them. The Safety, Evaluations and Alignment Lab (SEAL) is Scale's new frontier research effort dedicated to building robust evaluation products and tackling the most challenging research problems in evaluation and red teaming.
We are actively seeking talented researchers to join us in shaping the landscape of safety and transparency for the entire AI industry. We support collaboration across the industry and the publication of our research findings. Below is a list of SEAL's representative projects:
Ideally you’d have:
Nice to have:
Our research interviews are designed to assess candidates' skills in practical ML prototyping and debugging, their grasp of research concepts, and their fit with our organizational culture. We will not ask any LeetCode-style questions.