Model Red-teaming Analyst


Job Description

As a core member of the LLM Global Data Team, you will design and lead red-teaming projects to uncover vulnerabilities in multimodal AI systems. This involves coordinating with internal and external teams, analyzing outputs to identify failure modes and safety risks, and translating findings into actionable insights for harm mitigation and model alignment. You will also research the latest in AI safety and adversarial testing to propose novel stress-testing approaches.

Job Details

About the team

As a core member of our LLM Global Data Team, you'll be at the heart of our operations, gaining first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse datasets.

Job Responsibilities

  • Design and drive comprehensive red-teaming projects aimed at uncovering vulnerabilities in multimodal systems. Coordinate efforts across internal teams and external collaborators, including academic researchers, third-party red teamers, and industry partners.
  • Systematically analyze red-teaming outputs to identify failure modes, behavioral inconsistencies, and safety risks. Translate these findings into actionable insights to inform harm mitigation strategies, model alignment techniques, and product safety improvements.
  • Work closely with model development, safety, and policy teams to ensure red-teaming insights are integrated into training data curation, model safety evaluation frameworks, and deployment practices.
  • Conduct research on the latest developments in AI safety, adversarial testing, red-teaming methodologies, and responsible AI practices across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to stress-test models under real-world and edge-case scenarios.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:

  • Hate speech or harassment
  • Self-harm or suicide-related content
  • Violence or cruelty
  • Child safety

Support resources and resilience training will be provided to safeguard employee well-being.

Qualifications

Minimum Qualifications

  • Bachelor's degree or higher in a relevant field (e.g., Computer Science, Engineering, Public Policy, or related disciplines).
  • Exceptional proficiency in both English and Mandarin, with strong written and verbal communication skills, to collaborate with internal teams and stakeholders across English- and Mandarin-speaking regions.
  • Demonstrated analytical thinking, with the ability to synthesize both quantitative and qualitative data to draw meaningful insights.
  • Solid project management capabilities and effective cross-functional communication skills.
  • Foundational understanding of large AI models and familiarity with key industry practices in AI safety and responsible AI development.

Preferred Qualifications

  • Interest in or experience with reviewing technical literature, such as model or system cards, red-teaming reports, or AI alignment research.
  • Self-motivated, intellectually curious, detail-oriented, and collaborative, with a strong sense of ownership.
  • Awareness of emerging safety and alignment challenges related to frontier AI systems and high-capability models.

Job Information

About Doubao (Seed)

Founded in 2023, the Doubao (Seed) Team is dedicated to pioneering advanced AI foundation models. Our goal is to lead in cutting-edge research and drive technological and societal advancements.

With a strong commitment to AI, our research areas span deep learning, reinforcement learning, language, vision, audio, AI infrastructure, and AI safety. Our team has labs and research positions across China, Singapore, and the US.

Why Join

Inspiring creativity is at the core of our mission. Our innovative products are built to help people authentically express themselves, discover, and connect, and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity, and enrich life: a mission we work towards every day.

As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

We are committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe, and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.


About The Company

Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.