AI can embed human and societal biases and deploy them at scale. Many algorithms are now being reexamined for illegal bias. How do you remove bias and discrimination from the machine learning pipeline? And how can you trust model predictions?
In many applications, trust in an AI system will come from its ability to ‘explain itself.’ But when it comes to understanding and explaining the inner workings of an algorithm, one size does not fit all. Different stakeholders require explanations for different purposes and objectives, and explanations must be tailored to their needs. While a regulator will aim to understand the system as a whole and probe into its logic, consumers affected by a specific decision will be interested only in factors impacting their case – for example, in a loan processing application, they will expect an explanation for why the request was denied and want to understand what changes could lead to approval.
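The loan example can be made concrete as a toy contrastive explanation: given a scoring rule, search for the smallest change that flips a denial into an approval. Everything below, including the approval rule itself, is illustrative and not from the talk:

```python
# Hypothetical approval rule, purely for illustration: approve when
# income minus twice the outstanding debt clears a fixed threshold.
def approve(income, debt):
    return income - 2 * debt >= 50

def minimal_income_increase(income, debt, step=1, limit=1000):
    """Smallest income bump (same units as income) that flips a denial to an approval."""
    bump = 0
    while not approve(income + bump, debt) and bump < limit:
        bump += step
    return bump

# An applicant with income 60 and debt 10 is denied;
# the search finds that a raise of 10 would flip the decision.
print(minimal_income_increase(income=60, debt=10))  # 10
```

This is the kind of answer a consumer-facing explanation needs: not the model's full logic, but the nearest actionable change.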
In this talk you will learn about debiasing techniques and ways to explain models to different users. Two open source projects are introduced:
- AI Fairness 360 (AIF360) brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from researchers across industry & academia.
- AI Explainability 360 (AIX360) includes algorithms spanning the different dimensions of explanation, along with proxy metrics for explainability.
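To give a flavor of what a bias metric looks like, disparate impact (one of the metrics AIF360 brings together) is simply the ratio of favorable-outcome rates between groups. A minimal plain-Python sketch with made-up data, not AIF360's own API:

```python
# Disparate impact: P(favorable | unprivileged) / P(favorable | privileged).
# Values far below 1.0 suggest the unprivileged group is disadvantaged.
def disparate_impact(outcomes, groups, privileged):
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Illustrative sample: 1 = loan approved, 0 = denied; groups 'A' and 'B'.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(disparate_impact(outcomes, groups, privileged='A'))  # ~0.33
```

A common rule of thumb flags ratios below 0.8; here group B is approved at one third the rate of group A.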
Learn about explainable workflows using open source software with Python. From data ingestion and cleaning to modelling and deployment, explainability and trust are at the forefront of enterprise data science initiatives. Learn the best practices for codifying and relaying explainable data science to stakeholders, management, and the end user in a resilient and portable fashion.
Building Fair and Explainable AI Pipelines