Course Abstract

Training duration: 90 minutes

Explainable ML and AI (also known as XAI) is not only a booming field of research but is also widely needed across industries such as healthcare, finance, and insurance. Many approaches offer some level of explainability, implemented in a variety of packages and libraries. In this course, we will introduce some of the more common and promising approaches to ML explainability. We will also gain hands-on experience applying different XAI libraries and develop a feel for their strengths and shortcomings.


Learning Objectives

  • Understand what we mean by ML explainability

  • Know what types of explainability exist

  • Know when to apply explainability (and when not to)

  • Build an understanding of how to apply explainability approaches, from RuleFit, partial dependence plots, and individual conditional expectations (ICE) to global surrogate models, LIME, and Shapley values


Instructor Bio

Violeta is a data scientist passionate about machine learning, with a focus on fairness and explainability of ML algorithms. She supplements her machine learning knowledge with a doctorate in applied econometrics and likes working on complex problems that require multi-disciplinary expertise. She regularly presents projects and initiatives she has worked on at conferences and is an advocate for diversity in the tech industry. Besides her data science job, she regularly teaches and facilitates data science and machine learning trainings.

Violeta Misheva, PhD

Data Scientist | ABN AMRO Bank N.V.

Course Outline

Module 1. Explainability: nuts and bolts

Lesson 1. Who am I, why this course, and requirements

Lesson 2. What is explainability?

Lesson 3. Why explainability, and when not to use it?

Lesson 4. Types of explanations

Lesson 5. Explainability in the ML development process

Module 2. Explainability with Python: visual approaches

Lesson 1. Introduction to the use case

Lesson 2. Transparent approaches and RuleFit

Lesson 3. Visual explanations: PDP

Lesson 4. Visual explanations: ICE plots

Exercise 1. Apply PDP and ICE plots
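As a preview of what Exercise 1 might cover, here is a minimal sketch of computing PDP and ICE curves, assuming scikit-learn and a synthetic dataset (the course's actual use case and notebook may differ):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

# Synthetic stand-in for the course's use case
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# PDP: the model's response to feature 0, averaged over all samples
pdp = partial_dependence(model, X, features=[0], kind="average")

# ICE: one curve per individual sample instead of the average
ice = partial_dependence(model, X, features=[0], kind="individual")

print(pdp["average"].shape)      # (n_output_features, n_grid_points)
print(ice["individual"].shape)   # (n_output_features, n_samples, n_grid_points)
```

The PDP is simply the pointwise average of the ICE curves; when ICE curves diverge in shape, the feature interacts with others, and the PDP alone would hide that heterogeneity.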

Module 3. Global explanations

Lesson 1. Global surrogate models

Lesson 2. Feature importances

Exercise 2. Develop a global surrogate model and compute feature importances
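A global surrogate, as in Exercise 2, is an interpretable model trained to mimic a black box's predictions. A minimal sketch with scikit-learn (an assumption; the course exercise may use a different model and dataset):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=8, n_informative=5, random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Global surrogate: fit an interpretable model on the black box's
# *predictions*, not on the true labels
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how faithfully the surrogate reproduces the black box
fidelity = r2_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity R^2: {fidelity:.2f}")
print("feature importances:", surrogate.feature_importances_)
```

Always report fidelity alongside the surrogate's explanation: a low-fidelity surrogate explains itself, not the black box.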

Module 4. Local explanations

Lesson 1. LIME

Lesson 2. Shapley values

Exercise 3. Apply LIME and SHAP
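Exercise 3 uses the lime and shap packages; to illustrate the underlying idea, here is a self-contained, exact Shapley-value computation from scratch (a sketch, not the course's notebook), where "absent" features are replaced by their background mean:

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
model = LinearRegression().fit(X, y)

background = X.mean(axis=0)  # reference values for "absent" features

def value(instance, coalition):
    """Model output with features outside the coalition set to the background."""
    z = background.copy()
    z[list(coalition)] = instance[list(coalition)]
    return model.predict(z.reshape(1, -1))[0]

def shapley_values(instance):
    """Exact Shapley values: weighted marginal contributions over all coalitions."""
    n = len(instance)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(instance, S + (i,)) - value(instance, S))
    return phi

x = X[0]
phi = shapley_values(x)
# Efficiency property: the contributions sum to prediction minus baseline
print(phi, phi.sum(), model.predict(x.reshape(1, -1))[0] - value(x, ()))
```

The exact computation is exponential in the number of features, which is why practical libraries such as shap rely on sampling or model-specific shortcuts.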

Background knowledge

  • This course is for current or aspiring Data Scientists, Machine Learning Engineers, Software Engineers, and AI Product Managers

  • Knowledge of the following tools and concepts:

      • Intermediate machine learning

      • Beginner-to-intermediate Python

Real-world applications

  • In the mortgage industry, ML explainability is used to explain models and their decisions to external regulators as well as to internal audit, model validation, legal, compliance, and privacy teams.

  • For a model that predicts whether a patient has a certain disease, what explanation should you provide to help the doctor understand the model's decision and trust it?

  • If you're trying to predict the sale price of a house, how can you help the business stakeholders understand the model, its behavior, and its decision-making, and thus ensure their trust and buy-in?