Abstract

Enabling responsible development of artificial intelligence technologies is one of the major challenges we face as the field moves from research to practice. Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learning in many current and future real-world applications. There are now calls from across the field (academia, government, and industry leaders) for technology creators to ensure that AI is used only in ways that benefit people and “to engineer responsibility into the very fabric of the technology.” Overcoming these challenges and enabling responsible development is essential to ensuring a future where AI and machine learning can be widely used.

Responsible AI is an umbrella term for the many themes at the intersection of ethics and AI. One reasonable enumeration is Microsoft’s six principles for AI development: four core principles of fairness, reliability/safety, privacy/security, and inclusiveness, underpinned by two foundational principles of transparency and accountability. In this talk we will cover these six principles for the development and deployment of trustworthy AI systems, presenting how each plays a key role in responsible AI and what it means to take these principles from theory to practice. We will cover open source tools across different areas of the responsible AI umbrella, in particular transparency and interpretability for tabular and text data as well as AI fairness, that aim to empower researchers, data scientists, and machine learning developers to take a significant step forward in this space, building trust between users and AI systems.

For this presentation, we focus on Transparency (Interpretability), Fairness and Inclusiveness, and Privacy as major principles of responsible AI, and cover best practices and state-of-the-art open source toolkits and offerings that help researchers, data scientists, machine learning developers, and business stakeholders build more transparent, trustworthy AI systems. Attendees will leave the session with a basic understanding of responsible AI principles, best practices, and open source tools for the responsible development and deployment of AI systems. They will be able to incorporate the introduced tools and products into their machine learning life cycle, running them on their previously trained models to understand the factors behind their model predictions, verify model fairness across protected attributes, and mitigate any bias they uncover.
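As a taste of the fairness checks described above, the sketch below computes a per-group selection rate and a demographic parity difference with plain NumPy. This is only a minimal illustration of the kind of group metric that the fairness toolkits covered in the session automate; the predictions and the protected attribute here are hypothetical stand-ins for a real model's output.

```python
import numpy as np

# Hypothetical binary predictions from a previously trained model,
# alongside a protected attribute (two illustrative groups, "A" and "B").
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate per group: the fraction of positive predictions
# the model assigns to members of each group.
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}

# Demographic parity difference: the gap between the highest and
# lowest per-group selection rates (0 would indicate parity on
# this particular metric).
dp_diff = max(rates.values()) - min(rates.values())

print(rates)    # per-group selection rates
print(dp_diff)  # gap between the best- and worst-treated groups
```

In practice, the open source toolkits presented in the session compute many such disaggregated metrics at once and pair them with mitigation algorithms, but the underlying idea is this simple comparison of model behavior across groups.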

Course curriculum

  • Responsible AI – State of the Art and Future Directions (Tutorial)

Instructors

Mehrnoosh Sameki

Technical Program Manager | Microsoft

Mehrnoosh Sameki is a technical program manager at Microsoft responsible for leading the product efforts on machine learning transparency within the Azure Machine Learning platform. Prior to Microsoft, she was a data scientist at Rue Gilt Groupe, an eCommerce company, applying data science and machine learning in the retail space to drive revenue and enhance customers' personalized shopping experiences. Before that, she completed a PhD in computer science at Boston University. In her spare time, she enjoys trying new food recipes, watching classic movies and documentaries, and reading about interior design and house decoration.

Minsoo Thigpen

Program Manager | Microsoft

Minsoo is a Program Manager on the Responsible AI team at Microsoft, focusing on building out offerings for the OSS Interpretability Toolkit and its integration into the Azure Machine Learning platform. She recently graduated from Microsoft's pilot AI rotation program as one of the first three PMs in its first cohort, working on a variety of ML/AI application projects within Microsoft to accelerate its adoption of the AI-first initiative. She has Bachelor's degrees in Applied Math and Painting from Brown University and the Rhode Island School of Design (RISD). Coming from an interdisciplinary background with experience in building models and applications, analyzing data, and designing UX, she is looking to work at the intersection of AI/ML, design, and the social sciences to empower data practitioners to work ethically and responsibly end-to-end.

Ehi Nosakhare, PhD

Data and Applied Scientist | Microsoft

Ehi Nosakhare is a Data and Applied Scientist in the AI development and acceleration program at Microsoft. She designs, develops, and leads the implementation of machine learning (ML) solutions in application projects across Microsoft's products and services. She is currently focused on developing a toolkit that enables text interpretability and, more broadly, machine learning transparency. Prior to Microsoft, she completed a Ph.D. in Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology (MIT). She is very passionate about using ML to solve real-world problems and studying the ethical implications of ML/AI. In her spare time, she enjoys reading and re-learning to play the cello.