Explainable AI for Deep Networks
Basics and Extensions

Tutorial at ECML/PKDD 2020

14-18 September 2020 in Ghent, Belgium

Description

Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. The rise of deep nonlinear ML models has led to massive gains in predictive accuracy. Yet, we do not want such high accuracy to come at the expense of explainability. As a result, the field of Explainable AI (XAI) has emerged and has produced a collection of methods capable of explaining complex and diverse ML models.

In this tutorial, we give a structured overview of the basic approaches that have been proposed for XAI. In particular, we present the motivations for such methods, their advantages and disadvantages, and their theoretical underpinnings. We also show how they can be extended and applied so that they deliver maximum usefulness in real-world scenarios. The tutorial interleaves the presented topics with (1) hands-on sessions, where participants can experiment with XAI code, and (2) showcases, where a selection of successful XAI applications is highlighted.
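To give a flavor of the hands-on sessions, below is a minimal sketch of one simple attribution technique, Gradient x Input, written in PyTorch. The model and input here are placeholders, and the tutorial's own notebooks may use different methods, data, and frameworks.

    import torch
    import torch.nn as nn

    # Placeholder model: a small fully connected network (illustrative only;
    # the tutorial notebooks may use different architectures and data)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    model.eval()

    # A single input example (random here, real data in practice)
    x = torch.randn(1, 10, requires_grad=True)

    # Forward pass, then backpropagate from the top predicted class score
    output = model(x)
    score = output[0, output.argmax()]
    score.backward()

    # Gradient x Input: attributes the prediction to individual input features
    relevance = (x.grad * x).detach()
    print(relevance)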

This tutorial targets both core and applied ML researchers. Core machine learning researchers may be interested in learning about the connections between the different explanation methods and the broad set of open questions, in particular how to extend XAI to new ML algorithms. Applied ML researchers may find it interesting to understand the strong assumptions behind standard validation procedures, and why interpretability can help to further validate their models. They may also discover new tools for analyzing their data and extracting insights from it. Participants will benefit from having a technical background (computer science or engineering) and basic ML training.

Instructors

Wojciech Samek heads the Machine Learning Group at the Fraunhofer Heinrich Hertz Institute. He studied computer science at Humboldt University of Berlin, Heriot-Watt University, and the University of Edinburgh from 2004 to 2010, and received a Ph.D. degree from Technische Universität Berlin in 2014. His research interests include explainable AI, neural network compression, and federated learning.

Grégoire Montavon is a Research Associate in the Machine Learning Group at TU Berlin. He received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. degree from Technische Universität Berlin in 2013. His research focuses on techniques for interpreting ML models such as deep neural networks and kernel machines.

Schedule

Part 1: XAI Basics
0h00-0h30   Motivations: Black-box models and the "Clever Hans" effect (WS)
0h30-1h15   Explainable AI: methods for explaining deep neural networks (GM)
1h15-1h25   Unifying views on explanation methods (GM)
1h25-1h45   [Hands-On] Implementing XAI methods (GM)
1h45-2h15   Coffee break
2h15-2h35   [Hands-On] Using XAI to build a robust face classifier (WS)

Part 2: Extending XAI
2h35-2h55   Explaining beyond deep networks (GM)
2h55-3h10   Explaining beyond single-feature attributions (GM)
3h10-3h30   Explaining beyond individual predictions (WS)
3h30-3h45   [Showcase] Debugging large datasets (WS)
3h45-4h00   [Showcase] XAI-based mining of scientific data (WS)

Course Material

The course material will be uploaded on September 1, 2020.