Description

In this talk, we will discuss new developments made to extend SHAP, a black-box explainer based on game-theoretic methods, to better support non-tabular data scenarios, including Image-to-Text, Image-to-Multiclass, Text-to-Text, and Text-to-Multiclass. This includes new tight integration with popular libraries like Transformers and MLflow.
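SHAP's attributions are grounded in Shapley values from cooperative game theory: a feature's attribution is its marginal contribution to the model's output, averaged over all subsets of the other features. As an illustrative sketch only (not the SHAP library's implementation, which uses efficient approximations), the exact Shapley values for a hypothetical two-feature model can be computed directly; the model and its numbers below are invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(value, features):
    """Exact Shapley values: weight each feature's marginal contribution
    to a coalition by how often that coalition precedes it in a random
    ordering of all features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {f}) - value(set(subset)))
        phi[f] = total
    return phi

# A toy "model": prediction given a coalition of present features.
# (Hypothetical feature names and numbers, purely for illustration.)
def toy_model(coalition):
    score = 1.0  # baseline prediction with no features
    if "age" in coalition:
        score += 2.0
    if "income" in coalition:
        score += 3.0
    if {"age", "income"} <= coalition:
        score += 1.0  # interaction effect, split between the two features
    return score

phi = shapley_values(toy_model, ["age", "income"])
# → {'age': 2.5, 'income': 3.5}
```

Note that the attributions sum to the difference between the full prediction and the baseline (7.0 − 1.0 = 6.0), the efficiency property that makes Shapley values attractive for explanation.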

Main learning points:

1. How can I use SHAP for non-tabular data scenarios?

2. How can I benchmark explainability methods?

3. How can I use SHAP in my AI development workflows?

Instructor's Bio

Michael Amoako

Program Manager (MAIDAP) at Microsoft

Michael is a Program Manager in Microsoft’s AI Development Acceleration Program. He is involved in Microsoft’s Responsible AI Strategy efforts, is a member of the AI Ethics, Fairness and Inclusion group, and has worked on projects including explainability techniques in AI, Natural Language Processing, and Computer Vision.

Scott Lundberg

Senior Researcher at Microsoft Research and Affiliate Assistant Professor at the University of Washington

Scott is a Senior Researcher at Microsoft Research and an Affiliate Assistant Professor at the University of Washington. His work focuses on explainable artificial intelligence and its application to problems in medicine, healthcare, and finance.

Vivek Chettiar

Software Engineer (MAIDAP) at Microsoft

Vivek is a Software Engineer in Microsoft’s AI Development Acceleration Program. He received his Master of Engineering in Computer Science from Cornell University, and has worked on a wide range of AI and large-scale data projects.


Use discount code WEBINAR2021 to get your Virtual ODSC East 2021 pass with an additional 20% OFF

Webinar

Model Explainability with SHAP for Non-Tabular Data