Senior AI Security Researcher
United States, Washington, Redmond
Overview

Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate.

The Microsoft Security Response Center (MSRC) is looking for researchers to join us in protecting AI systems, and the users of these systems, from threats to security and privacy. This role offers a unique opportunity to solve real-world security and privacy challenges through cutting-edge scientific research, informed by vulnerability data from production AI systems, leading to mitigations or new techniques that can be deployed at Microsoft and beyond. MSRC is part of the defender community and on the front line of security response evolution. Our mission is to protect employees, customers, communities, and Microsoft from threats to privacy and security.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

Research: The successful candidate will undertake research in the field of security and privacy for AI. Specific topics of interest include, but are not limited to:
- Investigating new security and privacy vulnerabilities in AI systems.
- Designing scalable mitigations for threats to AI systems, and working with product groups to drive these into product features.

Security Response: The successful candidate will assess reported security vulnerabilities in deployed AI systems, including:
- Analyzing and assessing the severity of reported vulnerabilities.
- Identifying new research opportunities based on vulnerability trends.

The successful candidate will collaborate cross-organizationally, representing the One Microsoft model. They will work with security researchers and product groups to identify emerging threats to AI systems, and to design, implement, evaluate, and deploy mitigations for these threats. They will share their findings with security responders and product groups, as well as through publications in recognized academic venues.

Embody our culture and values.