As the leading data and evaluation partner for frontier AI companies, Scale is dedicated to advancing the evaluation and benchmarking of large language models (LLMs). We are building industry-leading LLM leaderboards, setting new standards for model performance assessment. Our mission is to develop rigorous, scalable, and fair evaluation methodologies to drive the next generation of AI capabilities.
We are seeking Research Scientists and Research Engineers with expertise in LLM evaluation. You will play a key role in developing and implementing novel evaluation methodologies, metrics, and benchmarks to assess the capabilities and limitations of our cutting-edge LLMs. We encourage collaboration across industry and academia, and support the publication of research findings. Successful candidates will partner with top foundation model labs, providing both technical and strategic input on the development of the next generation of generative AI models.
You will:
Ideally you’d have: