Model Governance and Explainable AI as tools for legal compliance and risk management
Kilian Kluge & Isabel Bär
On their journey towards machine learning (ML) in production, organizations often focus solely on MLOps, building the technical infrastructure and processes for model training, serving, and monitoring. However, as ML-based systems are increasingly employed in business-critical applications, ensuring their trustworthiness and legal compliance becomes paramount. Here, highly complex “black box” AI systems pose a particular challenge.
Using the example of ML-based recruiting tools, we show how even seemingly innocuous applications can carry significant risks. We then demonstrate how organizations can use Model Governance and Explainable AI to manage these risks by enabling stakeholders such as management, non-technical employees, and auditors to assess the performance of AI systems and their compliance with both business and regulatory requirements.