Karol Przystalski
Explainable machine learning explained
#1 · about 2 minutes
The growing importance of explainable AI in modern systems
Machine learning has become widespread, creating a critical need to understand how models make decisions beyond simple accuracy metrics.
#2 · about 4 minutes
Why regulated industries like medtech and fintech require explainability
In fields like medicine and finance, regulatory compliance and user trust make it mandatory to explain how AI models arrive at their conclusions.
#3 · about 3 minutes
Identifying the key stakeholders who need model explanations
Explainability is crucial for various roles, including domain experts like doctors, regulatory agencies, business leaders, data scientists, and end-users.
#4 · about 4 minutes
Fundamental approaches for explaining AI model behavior
Models can be explained through several complementary methods: mathematical formulas, visual charts, local examples, model simplification, and feature-relevance analysis, sketched below.
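To make the last of these approaches concrete, here is a minimal sketch of feature-relevance analysis via permutation importance; the dataset, model, and scikit-learn tooling are our illustrative choices, not taken from the talk.

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score drops:
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")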
#5 · about 5 minutes
Learning from classic machine learning model failures
Examining famous failures, like the husky vs. wolf classification and the Tay chatbot, reveals how models can learn incorrect patterns from biased data.
#6 · about 5 minutes
Differentiating between white-box and black-box models
White-box models like decision trees are inherently transparent, whereas black-box models like neural networks require special techniques to interpret their internal workings.
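A minimal sketch of what that transparency looks like in practice: a fitted decision tree can print its entire decision logic as readable rules. The iris dataset and scikit-learn are our stand-in choices, not necessarily what the talk uses.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    data.data, data.target)

# The whole model prints as nested if/else rules; every prediction can be
# traced by hand. No equivalent readout exists for a deep network's weights.
print(export_text(tree, feature_names=data.feature_names))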
#7 · about 7 minutes
Improving model performance with data-centric feature engineering
A data-centric approach, demonstrated with the Titanic dataset, shows how creating new features from existing data can significantly boost model accuracy.
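A hedged sketch of the kind of feature engineering this chapter describes, assuming a pandas DataFrame with the standard Kaggle Titanic columns (Name, SibSp, Parch, Cabin); the specific derived features and the add_features helper are our illustration, not necessarily the ones shown in the talk.

import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Family size: siblings/spouses + parents/children + the passenger.
    out["FamilySize"] = out["SibSp"] + out["Parch"] + 1
    out["IsAlone"] = (out["FamilySize"] == 1).astype(int)
    # Social title extracted from the name, e.g. "Braund, Mr. Owen" -> "Mr".
    out["Title"] = out["Name"].str.extract(
        r",\s*([^.]+)\.", expand=False).str.strip()
    # Whether a cabin was recorded at all is itself an informative signal.
    out["HasCabin"] = out["Cabin"].notna().astype(int)
    return out

Features like FamilySize, Title, and HasCabin are derived purely from columns already present, which is the data-centric step the chapter demonstrates.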
#8 · about 4 minutes
Exploring inherently interpretable white-box models
Models such as logistic regression, k-means, decision trees, and SVMs are considered explainable by design due to their transparent decision-making processes.
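As one illustration of "explainable by design", a logistic regression's learned coefficients directly state each feature's direction and weight. The breast-cancer dataset and pipeline setup below are our illustrative choices.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(),
                      LogisticRegression(max_iter=1000)).fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
# Positive coefficients push toward the positive class, negative away from it.
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1]))[:5]:
    print(f"{name}: {w:+.2f}")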
#9 · about 5 minutes
Using methods like LIME and SHAP to explain black-box models
Techniques like Partial Dependence Plots (PDP), LIME, and SHAP are used to understand the influence of features on the predictions of complex black-box models.
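A minimal SHAP sketch along the lines this chapter covers; it assumes the third-party shap package is installed (plus matplotlib for the plot), and the gradient-boosting model and dataset are our illustrative choices.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes, per prediction, how much each feature pushed the
# output above or below the average prediction over the dataset.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global view: which features matter most across these 100 predictions.
shap.summary_plot(shap_values, X.iloc[:100])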
#10 · about 3 minutes
Visualizing deep learning decisions in images with Grad-CAM
Grad-CAM (Gradient-weighted Class Activation Mapping) creates heatmaps to highlight which parts of an image were most influential for a deep neural network's classification.
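A compact Grad-CAM sketch in PyTorch, assuming torchvision is installed; the model (resnet18), the hooked layer (layer4), and the random tensor standing in for a preprocessed image are all our illustrative choices.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
store = {}

def hook(module, inputs, output):
    output.retain_grad()   # keep the gradient on this intermediate tensor
    store["feat"] = output

# Hook the last convolutional stage, where spatial layout is still present.
model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
scores = model(x)
scores[0, scores.argmax()].backward()    # gradient of the top class score

feat, grad = store["feat"], store["feat"].grad
weights = grad.mean(dim=(2, 3), keepdim=True)    # per-channel importance
cam = F.relu((weights * feat).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]

Upsampled back to the input resolution, the normalized map highlights the image regions that contributed most to the predicted class.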
#11 · about 3 minutes
Understanding security risks from adversarial attacks on models
Adversarial attacks demonstrate how small, often imperceptible, changes to input data can cause machine learning models to make completely wrong predictions.
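A minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM); the chapter summary does not name a specific attack, so this choice, the generic fgsm_attack helper, and the epsilon value are ours.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Nudge each pixel by +/- epsilon in the direction that raises the loss.

    x: input batch in [0, 1]; label: tensor of true class indices.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Each pixel moves by at most epsilon, so the change is near-invisible,
    # yet it can be enough to flip the model's prediction entirely.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

Evaluating model(fgsm_attack(model, x, label)) often shows the prediction changing even though the perturbed input looks identical to the original.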