About Scale
At Scale AI, our mission is to accelerate the development of AI applications. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including generative AI, defense applications, and autonomous vehicles. With our recent Series F round, we're accelerating the abundance of frontier data to pave the road to Artificial General Intelligence (AGI), and building on our prior model evaluation work with enterprise customers and governments to deepen our capabilities and offerings for both public and private evaluations.
About This Role
In this role, you will lead the development of machine learning systems that detect fraud, abuse, and trust violations across Scale's contributor platform. As a core part of our Generative AI data engine, these systems are critical to ensuring the quality, safety, and reliability of the data used to train and evaluate frontier models.
You will build scalable ML services that analyze behavioral and content signals, incorporating both classical models and advanced LLM-based techniques. This is a high-impact, product-focused role where you’ll collaborate across engineering, product, and operations teams to proactively surface misuse, defend against adversarial behavior, and ensure the long-term health of our human-in-the-loop data workflows.
If you’re excited about solving complex detection problems at scale, combining LLMs with structured ML approaches, and protecting the integrity of AI training data, we’d love to hear from you.
You will:
Ideally you’d have:
Nice to have: