Rebekka Weiss & Tobi Müller

Responsible AI @ Microsoft - Governance, Standards, Learnings

How does Microsoft govern AI as both a powerful tool and a potential weapon? See the framework they use to move from abstract principles to concrete practice.

Responsible AI @ Microsoft - Governance, Standards, Learnings
#1about 2 minutes

Responsible AI is more than just legal compliance

Regulation shapes AI governance, but true responsibility also involves code, shared learnings, and operational standards.

#2about 3 minutes

Establishing foundational principles for AI development

Microsoft's AI strategy began by proactively addressing risks and establishing core principles like fairness, privacy, and security.

#3about 3 minutes

Addressing new risk vectors in generative AI

Generative AI introduces unique risks like jailbreaks, prompt injection, and harmful content that require continuous mitigation efforts.

#4about 2 minutes

Using the NIST framework to structure AI risk management

The NIST AI framework provides a standardized approach to map, measure, manage, and govern AI risks effectively.

#5about 4 minutes

Building a diverse and collaborative governance model

Microsoft's hub-and-spokes model for responsible AI relies on diverse teams and thorough documentation to share learnings and prevent duplicated efforts.

#6about 3 minutes

Structuring the office of responsible AI for impact

The Office of Responsible AI is organized into three key pillars—engineering, policy, and research—to address the multifaceted nature of AI governance.

#7about 3 minutes

Securing systems with red teaming and data governance

Proactive security measures like large-scale red teaming and robust data governance are essential for protecting AI systems and user privacy.

#8about 2 minutes

Scaling risk identification from manual to automated testing

The risk assessment process begins with manual human testing to understand user behavior, which is then scaled using automated tools like PyRIT.

#9about 4 minutes

Prioritizing human oversight with layered safety mitigations

The "Copilot, not Autopilot" philosophy emphasizes irreplaceable human review and a multi-layered mitigation strategy for safe human-AI interaction.

#10about 2 minutes

Ensuring product readiness and shaping global AI standards

A rigorous pre-launch assessment ensures product safety, while advocacy for global standards aims to make AI beneficial for everyone.

Related jobs
Jobs that call for the skills explored in this talk.

Featured Partners

From learning to earning

Jobs that call for the skills explored in this talk.