Understand what we mean by ML explainability
What types of explainability exist
When to apply explainability (and when not)
Build an understanding of how to apply explainability approaches, from RuleFit, partial dependence plots, and individual conditional expectations to global surrogate models, LIME, and Shapley values.
Violeta Misheva, PhD
Data Scientist | ABN AMRO Bank N.V.
In the course, we will use a classic machine learning dataset and explain the decisions and predictions of a black-box model. We will start with a brief theoretical introduction to the different approaches to explainability and what each of them is best suited for. The majority of the course will be a hands-on demonstration of many of these approaches.
In the first part, we will talk about visual methods for explaining a model. We will construct partial dependence plots and individual conditional expectations, both valuable and quite intuitive ways to gain an initial understanding of a model's behavior.
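As a taste of what this looks like in code, here is a minimal sketch of partial dependence and ICE curves using scikit-learn's PartialDependenceDisplay. The breast cancer dataset, the random forest model, and the "mean radius" feature are illustrative stand-ins, not necessarily what the course uses.

```python
# Minimal sketch: PDP and ICE curves with scikit-learn.
# Dataset, model, and feature choice are illustrative stand-ins.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# kind="both" overlays the averaged partial dependence curve (PDP)
# on top of the per-instance ICE curves for the chosen feature.
PartialDependenceDisplay.from_estimator(
    model, X, features=["mean radius"], kind="both"
)
plt.show()
```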
In the second part, we will look at a couple of approaches for explaining global model behavior, such as permutation feature importance and global surrogate models. We will discuss the settings in which each is most applicable and highlight some of their limitations.
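Here is a minimal sketch of these two global approaches, again using an illustrative dataset and model rather than the course's own:

```python
# Minimal sketch: permutation importance and a global surrogate model.
# Dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops; larger drops mean more important features.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, drop in top:
    print(f"{name}: {drop:.3f}")

# Global surrogate: train an interpretable model (here a shallow decision
# tree) to mimic the black box's predictions, then inspect the tree instead.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))
print("surrogate fidelity:", surrogate.score(X, model.predict(X)))
```

Note that the surrogate is fit on the black box's predictions, not the true labels; its fidelity score tells you how faithfully the simple model mimics the complex one.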
Last but not least, in the third part, we will walk through some local approaches to explainability. These methods attempt to explain the model's decision for each individual instance in a dataset and are therefore in high demand in many business settings and domains. We will compare and discuss methods such as LIME and Shapley values.
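As a preview, here is a minimal sketch of local explanations for a single prediction with the lime and shap packages. Both must be installed separately, their APIs vary somewhat across versions, and the dataset and model are again stand-ins.

```python
# Minimal sketch: local explanations for one prediction with lime and shap.
# Dataset and model are illustrative stand-ins; APIs may vary by version.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME: fit a simple interpretable model in the neighborhood of one
# instance and report the features driving this particular prediction.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns),
    class_names=["malignant", "benign"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())

# SHAP: attribute the same prediction to features via Shapley values.
# (The return shape differs across shap versions; here we explain one row.)
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X.iloc[[0]])
print(shap_values)
```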
The landscape of XAI is developing rapidly, and this is just a selection of some of the more promising and popular current approaches and packages. By the end of the course, you will have a good grasp of what we mean by explainability, what ways there are to explain models, and, most of all, which mature and stable packages you can use to explain your machine learning models in Python.
Intermediate machine learning, beginner-to-intermediate Python