Security is the most critical priority for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Do you have research experience in Adversarial Machine Learning or AI Safety Research? Do you want to find failures in, and design mitigations for, Microsoft’s big-bet AI systems impacting millions of users? Join the Long-Term Ops and Research wing of Microsoft’s AI Red Team, where you will work alongside security experts to push the boundaries of AI red teaming. We are looking for an early-career researcher with experience in mitigations, adversarial machine learning, and/or AI safety to help make Microsoft's AI products safer and help our customers adopt our AI systems with confidence. We are an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft’s big-bet AI systems. Your work will impact Microsoft’s AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, Windows Copilot, and Azure OpenAI. We are open to remote work.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Required Qualifications:
Other Requirements
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
Applied Sciences IC3 - The typical base pay range for this role across the U.S. is USD $98,300 - $193,200 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $127,200 - $208,800 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
Microsoft will accept applications for the role until January 27, 2025.
#MSFTSecurity #AI #RAI #Safety #Security #MSECAIR #AEGIS #airedteam #airt