Principal Researcher: AI Trust & Safety

2 Hours ago • 4-6 Years • Artificial Intelligence

About the job

Job Description

Microsoft's AI Red Team seeks a Principal Researcher: AI Trust & Safety to proactively identify vulnerabilities in large AI systems like Bing Copilot, Github Copilot, etc. Responsibilities include discovering and exploiting GenAI vulnerabilities, developing novel red teaming methodologies, collaborating with research and tooling teams, researching emerging threats, and working with offensive security engineers and adversarial ML experts. The role demands experience in trust and safety, responsible AI, and ideally, national security or experience with AI-generated harmful content. This is a sprint-based role requiring communication of findings to various stakeholders and influencing mitigation strategies. The ideal candidate will help improve the operational effectiveness of the AI Red Team.
Must have:
  • Discover and exploit GenAI vulnerabilities
  • Develop novel red teaming methodologies
  • Collaborate with research and tooling teams
  • Experience in Trust and Safety or Responsible AI
  • Excellent communication and collaboration skills
Good to have:
  • Experience in national security
  • Multilingual proficiency
  • Experience in red teaming events (GRT, LLM CTFs)
  • Basic/intermediate Python programming
Perks:
  • Industry leading healthcare
  • Educational resources
  • Discounts on products and services
  • Savings and investments
  • Maternity and paternity leave
  • Generous time away
  • Giving programs
  • Networking opportunities

Overview

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you want to find responsible AI failures in Microsoft’s largest AI systems impacting millions of users? Join Microsoft’s AI Red Team where you'll emulate work alongside security experts to cause trust and safety failures in Microsoft’s big AI systems. We are looking for a Principal Researcher: AI Trust & Safety focused with tactical experience in trust, safety, or responsible AI for our team to help make AI security better and help our customers expand with our AI systems. Our team is an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Safety & Responsible AI experts and software developers with the mission of proactively finding failures in Microsoft’s big bet AI systems. In this role, you will be an individual contributor, red teaming AI models and applications across Microsoft’s AI portfolio including Bing Copilot, Security Copilot, Github Copilot, Office Copilot and Windows Copilot. This work is sprint based, working with AI Safety and Product Development teams, to run operations that aim to find safety and security risks that inform internal key business decisions. As a Principal Researcher: AI Trust & Safety, you will have the latitude to define emerging threat areas for the company, responsible for operations delivery, communicating impact of findings to diverse internal stakeholders, and pushing cross-team efforts to improve operational effectiveness. This Principal Researcher: AI Trust & Safety role is also expected to bring insight into the AI Safety space and collaborate closely with our tooling and research teams to orchestrate innovation with the operations testing team. We are open to remote work. More about our approach to AI Red Teaming:

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

 

 

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications

Required/Minimum Qualifications

  • Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
    • OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)
    • OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)
    • OR equivalent experience.


Other Requirements:
Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings:

  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred Qualifications:

  • Experience in Trust and Safety area or policy area, especially working with human-generated or AI-generated harmful materials across multiple media types and creating objective recommendations, grounded and supported by research and data.
  • Experience in national security, specifically CBRN
  • Preferred multilingual proficiency, especially in languages used in Microsoft userbase in Europe, EMEA and APAC region
  • Prior experience in red teaming events such as GRT at Defcon AI Village or other LLM CTFs
  • While extensive coding experience is not necessary, candidate should be comfortable with basic/intermediate Python programming
  • 1+ years' experience in a field related to Responsible AI including but not limited to ethics, chemistry, biology, linguistics, sociology, psychology, medicine, socio-technical safety space, online safety, privacy

Applied Sciences IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $180,400 - $294,000 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

Microsoft will accept applications for the role until January 10, 2025.

 

 

#MSFTSecurity #airedteam #MSECAIR #AEGIS

 

Responsibilities

  • Discover and exploit GenAI vulnerabilities with respect to end-to-end capabilities in order to assess the safety of systems and lead communication of impact of vulnerabilities to partner stakeholders
  • Develop novel methodologies and techniques to scale and accelerate AI Red Teaming in collaboration with our research team, our tooling team, and leaders in the Microsoft AI Safety & Security ecosystem
  • Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems
  • Research new and emerging threats to inform the organization and craft solutions to operationalize testing within the AI Red Team function
  • Work alongside traditional offensive security engineers, adversarial ML experts, developers to land responsible AI operations -Model and coach other red teamers on functional strategies for effective delivery, communication, and prioritization in fast moving environments.
  • Embody our and  
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect
View Full Job Description
$137.6K - $294.0K/yr (Outscal est.)
$215.8K/yr avg.
Redmond, Washington, United States

Add your resume

80%

Upload your resume, increase your shortlisting chances by 80%

About The Company

Microsoft is a tech giant that develops, licenses, and supports a range of software products, services, and devices.

London, England, United Kingdom (On-Site)

Dublin, County Dublin, Ireland (On-Site)

Ho Chi Minh City, Ho Chi Minh City, Vietnam (On-Site)

San José, San José Province, Costa Rica (On-Site)

Prague, Prague, Czechia (On-Site)

View All Jobs

Get notified when new jobs are added by Microsoft

Similar Jobs

Wargaming - DevOps Engineer

Wargaming, Serbia (On-Site)

Sumo Logic - Senior Software Engineer II, QE - ML/AI

Sumo Logic, India (On-Site)

CloudHire - Senior Python Developer

CloudHire, India (Remote)

PwC - Backend Solution Architect

PwC, Czechia (Hybrid)

Enphase Energy - Sr Software Engineer

Enphase Energy, India (On-Site)

Unity - Senior Data Scientist, iAds

Unity, Finland (On-Site)

GoMotive - Senior Computer Vision Engineer

GoMotive, India (Remote)

Pika - Senior Research Engineer (Data)

Pika, United States (On-Site)

Get notifed when new similar jobs are uploaded

Similar Skill Jobs

Get notifed when new similar jobs are uploaded

Jobs in Redmond, Washington, United States

Google - Software Engineer III, Network Infrastructure

Google, United States (On-Site)

Studio Wildcard - Engine Programmer

Studio Wildcard, United States (On-Site)

Highspot - VP, Product Management

Highspot, United States (Hybrid)

Beghou Consulting - Sr. Consultant

Beghou Consulting, United States (Hybrid)

Flow - Senior Graphic Designer / Illustrator

Flow, United States (On-Site)

Intel Corporation - Design Quality and Reliability Engineer

Intel Corporation, United States (Hybrid)

Company3 Method Studios - Facility Technician (7:00am - 3:30pm PT)

Company3 Method Studios, United States (On-Site)

Meta - Data Engineer Intern

Meta, United States (On-Site)

Blizzard Entertainment - Project Manager, Operations

Blizzard Entertainment, United States (Hybrid)

IGT - Computer Operator I - DCA

IGT, United States (On-Site)

Get notifed when new similar jobs are uploaded

Artificial Intelligence Jobs

Get notifed when new similar jobs are uploaded