Principal AI Safety Researcher - AI Red Team

Microsoft, Can be based anywhere

Salary not available.

  • Full time
  • Permanent
  • Remote working

Posted: 18 Oct

Closing date: Not specified

Job ref: 7501cc6c83014784b4e12776081928aa

Full Job Description

  • Research new and emerging threats to inform the organization
  • Discover and exploit Responsible AI vulnerabilities end-to-end to assess the safety of systems
  • Develop methodologies and techniques to scale and accelerate responsible AI Red Teaming
  • Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems
  • Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations

Qualifications

  • Research experience, especially in adversarial machine learning or the intersection of machine learning and security
Other Requirements

  • Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening:
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft background check and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
  • #MSFTSecurity #MSECAIR #airedteam

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you have research experience in adversarial machine learning or AI safety research? Do you want to find failures in Microsoft's big-bet AI systems impacting millions of users? Join Microsoft's AI Red Team, where you will work alongside security experts to push the boundaries of AI red teaming. We are an interdisciplinary group of red teamers, adversarial machine learning (ML) researchers, Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft's big-bet AI systems. Your work will impact Microsoft's AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot.

We are looking for a Principal Researcher with experience in adversarial machine learning or AI safety to help make AI security better and help our customers expand their use of our AI systems. We have multiple openings and are open to remote work. We have a strong focus on open source and on helping the community with our research, releasing tools such as Counterfit and PyRIT (a sketch of the kind of probing loop these tools automate appears after this description). Publishing papers is not required in this group, but it is encouraged.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
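
To make the day-to-day work concrete: the probing loop that tools like PyRIT automate reduces to sending a batch of adversarial prompts to a target model, collecting the responses, and scoring each one for unsafe behavior. The following is a minimal, library-agnostic sketch of that loop; every identifier in it is hypothetical and it does not reproduce PyRIT's actual API.

    # Library-agnostic sketch of an automated red-team probing loop, in the
    # spirit of tools like PyRIT. All names are illustrative, not PyRIT's API.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ProbeResult:
        prompt: str      # the adversarial probe sent to the model
        response: str    # what the model replied
        flagged: bool    # True if the scorer judged the reply unsafe

    def run_probes(
        target: Callable[[str], str],
        prompts: List[str],
        is_unsafe: Callable[[str], bool],
    ) -> List[ProbeResult]:
        """Send each probe to the target model and score the reply."""
        results = []
        for prompt in prompts:
            response = target(prompt)
            results.append(ProbeResult(prompt, response, is_unsafe(response)))
        return results

    if __name__ == "__main__":
        # Toy stand-ins: a canned "model" and a keyword-based safety scorer.
        def echo_model(prompt: str) -> str:
            return f"I can't help with that: {prompt}"

        def keyword_scorer(response: str) -> bool:
            return "can't help" not in response  # a refusal counts as safe here

        for result in run_probes(echo_model, ["probe A", "probe B"], keyword_scorer):
            print(result)

In a real operation the canned model would be a deployed chat endpoint, and the keyword scorer would be replaced by classifiers or human review; the structure of the loop stays the same.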