Karol Przystalski
Explainable machine learning explained
#1 · about 2 minutes
The growing importance of explainable AI in modern systems
Machine learning has become widespread, creating a critical need to understand how models make decisions beyond simple accuracy metrics.
#2 · about 4 minutes
Why regulated industries like medtech and fintech require explainability
In fields like medicine and finance, regulatory compliance and user trust make it mandatory to explain how AI models arrive at their conclusions.
#3 · about 3 minutes
Identifying the key stakeholders who need model explanations
Explainability is crucial for various roles, including domain experts like doctors, regulatory agencies, business leaders, data scientists, and end-users.
#4 · about 4 minutes
Fundamental approaches for explaining AI model behavior
Models can be explained through methods such as mathematical formulas, visual charts, local examples, model simplification, and feature-relevance analysis.
#5 · about 5 minutes
Learning from classic machine learning model failures
Examining famous failures, like the husky vs. wolf classification and the Tay chatbot, reveals how models can learn incorrect patterns from biased data.
#6 · about 5 minutes
Differentiating between white-box and black-box models
White-box models like decision trees are inherently transparent, whereas black-box models like neural networks require special techniques to interpret their internal workings.
#7 · about 7 minutes
Improving model performance with data-centric feature engineering
A data-centric approach, demonstrated with the Titanic dataset, shows how creating new features from existing data can significantly boost model accuracy.
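To make the idea concrete, here is a minimal pandas sketch of this kind of feature engineering (illustrative only, not the talk's own code; it assumes a local titanic.csv with the standard Kaggle column names):

```python
import pandas as pd

# Assumed file and schema: the public Kaggle Titanic dataset,
# which includes the columns SibSp, Parch, and Name.
df = pd.read_csv("titanic.csv")

# Family size merges two sparse columns into one denser signal.
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1

# A passenger's title (Mr, Mrs, Miss, ...) encodes age, sex, and
# social status in a single categorical feature.
df["Title"] = df["Name"].str.extract(r",\s*([^.]+)\.")

# Traveling alone is a simple binary flag that often helps tree models.
df["IsAlone"] = (df["FamilySize"] == 1).astype(int)
```

Derived features like these frequently lift accuracy more than swapping in a bigger model, which is the data-centric point of the chapter.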
#8 · about 4 minutes
Exploring inherently interpretable white-box models
Models such as logistic regression, k-means, decision trees, and SVMs are considered explainable by design due to their transparent decision-making processes.
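A minimal scikit-learn sketch of that transparency (an assumed example on the Iris dataset, not code from the talk): a fitted decision tree can print its decision rules verbatim.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small tree; depth is capped so the rules stay readable.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned if/else splits as plain text,
# which is exactly the white-box property described above.
print(export_text(tree, feature_names=iris.feature_names))
```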
#9 · about 5 minutes
Using methods like LIME and SHAP to explain black-box models
Techniques like Partial Dependence Plots (PDP), LIME, and SHAP are used to understand the influence of features on the predictions of complex black-box models.
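As one example, a partial dependence plot takes only a few lines in scikit-learn; the dataset, model, and chosen features below are illustrative assumptions:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Fit a black-box ensemble on a tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Plot how two features drive the prediction on average,
# marginalizing over the remaining features.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "s5"])
plt.show()
```

LIME and SHAP go further by attributing individual predictions to specific features, but the workflow is similar: fit the black box first, then interrogate it.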
#10 · about 3 minutes
Visualizing deep learning decisions in images with Grad-CAM
Grad-CAM (Gradient-weighted Class Activation Mapping) creates heatmaps to highlight which parts of an image were most influential for a deep neural network's classification.
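A rough PyTorch sketch of the Grad-CAM recipe (hooks on the last convolutional block of a pretrained ResNet; the random tensor is a stand-in for a preprocessed image):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

# Capture the feature maps of the last conv block and their gradients.
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(value=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(value=go[0]))

image = torch.randn(1, 3, 224, 224)  # placeholder for a normalized image
logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Weight each feature map by its average gradient, ReLU, upsample, normalize.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear")
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying `heatmap` on the input image shows which regions pushed the network toward its predicted class.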
#11 · about 3 minutes
Understanding security risks from adversarial attacks on models
Adversarial attacks demonstrate how small, often imperceptible, changes to input data can cause machine learning models to make completely wrong predictions.
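The textbook instance is the Fast Gradient Sign Method (FGSM); a minimal PyTorch sketch, assuming a differentiable classifier and inputs scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # One-step attack: perturb the input in the direction that most
    # increases the loss, bounded by epsilon per pixel.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Clamp back to the valid pixel range so the change stays plausible.
    return adversarial.clamp(0, 1).detach()
```

Even with an epsilon too small for humans to notice, such perturbations routinely flip the model's prediction, which is the security risk this chapter examines.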