Course Abstract

Training duration: 90 minutes

Explainable ML and AI (also known as XAI) is not only a booming field of research but is also widely needed across industries such as healthcare, finance, and insurance. Many approaches offer some level of explainability, supported by a variety of packages and libraries. In this course, we will introduce some of the more common and promising approaches to ML explainability. We will also get hands-on experience applying different XAI libraries and develop a feel for their strengths and shortcomings.

DIFFICULTY LEVEL: INTERMEDIATE

Learning Objectives

  • Understand what we mean by ML explainability

  • Learn what types of explainability exist

  • Know when to apply explainability (and when not to)

  • Build an understanding of how to apply explainability approaches, from RuleFit, partial dependence plots, and individual conditional expectations to global surrogate models, LIME, and Shapley values

Instructor

Instructor Bio:

Violeta Misheva, PhD

Data Scientist | ABN AMRO Bank N.V.

Violeta is a data scientist passionate about machine learning, with a focus on fairness and explainability of ML algorithms. She supplements her machine learning knowledge with her doctorate in applied econometrics and likes working on complex problems that require multi-disciplinary expertise. She regularly presents projects and initiatives she has worked on at conferences and is an advocate for diversity in the tech industry. Besides her data science job, she regularly teaches and facilitates data science and machine learning trainings.

Course Outline

In the course, we will use a classic machine learning dataset and explain the decisions and predictions of a black-box model. We will start with a brief theoretical introduction to the different approaches to explainability and what each of them is best suited for. The majority of the session will be a hands-on demonstration of many of these approaches.

In the first part, we will talk about visual methods to explain a model. We will construct partial dependence plots and individual conditional expectations. They are a valuable and quite intuitive way to gain an initial understanding of the model's behavior.
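As a taste of what this looks like in practice, here is a minimal sketch using scikit-learn; the diabetes dataset, the gradient boosting model, and the features "bmi" and "s5" are illustrative assumptions, not necessarily the course's exact example:

```python
# Minimal sketch of partial dependence and ICE plots with scikit-learn.
# Dataset, model, and feature names are illustrative placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the average partial dependence curve on the
# individual conditional expectation (ICE) curves, one per instance.
PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi", "s5"], kind="both"
)
plt.show()
```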

In the second part, we will look at a couple of approaches to explain the global model behavior, such as feature permutations and global surrogate models. We will discuss in what settings they are best applicable, as well as highlight some limitations.
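As a rough illustration of these two ideas, the sketch below computes permutation feature importance and fits a shallow decision tree as a global surrogate; the dataset and models are assumptions for illustration, not the course's exact setup:

```python
# Minimal sketch of two global explainability approaches with scikit-learn:
# permutation feature importance and a global surrogate model.
# Dataset and model choices are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does the test score drop when one
# feature is randomly shuffled, breaking its link with the target?
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")

# Global surrogate: fit an interpretable model on the black box's own
# predictions and check how faithfully it mimics them (fidelity).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
print("Surrogate fidelity:",
      surrogate.score(X_test, black_box.predict(X_test)))
```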

Last but not least, in the third part, we will walk through some local approaches to explainability. These methods attempt to explain the decision of the model for each instance in a dataset and are therefore in high demand in many business settings and domains. We will compare and discuss methods such as LIME and Shapley values.
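To give a flavour of the code, here is a minimal sketch of Shapley-value explanations using the shap package (the lime package exposes a similar per-instance API); the regression dataset and model are illustrative assumptions:

```python
# Minimal sketch of local explanations with Shapley values via shap.
# The diabetes dataset and gradient boosting model are illustrative
# assumptions, not the course's exact example.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# The unified Explainer API dispatches to an efficient tree explainer here.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Local explanation: how each feature pushes one prediction away from
# the baseline (the model's expected output).
shap.plots.waterfall(shap_values[0])

# Aggregating many local explanations also gives a global view.
shap.plots.beeswarm(shap_values)
```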

The landscape of XAI is developing rapidly, and this is just a selection of some of the more promising and popular current approaches and packages. By the end of the course, you will have a good grasp of what we mean by explainability, what ways there are to provide explanations of models, and, most of all, which mature and stable Python packages you can use to explain your machine learning models.

Background knowledge

  • Intermediate machine learning; beginner to intermediate Python

Real-world applications

  • Imagine you have built a model to decide whether to grant a client a mortgage. You need to provide an explanation to the client if they want to dispute the model's decision, and you need to explain the model and its decisions to external regulators, internal audit and model validation teams, and legal, compliance, and privacy teams.

  • You have built a model to predict whether a client has a certain disease. What explanation should you provide to the doctor to help them understand the decision of the model and/or trust it?

  • You have trained a model to predict the sale price of a house, a setting that carries much less risk than the two examples above. The business stakeholders, however, may be skeptical of the model and its predictions. How can you help them understand the model, its behavior, and its decision-making, and thus earn their trust and buy-in?