Explainable AI for Deep Networks
Basics and Extensions

Tutorial at ECML-PKDD 2020

18 September 2020, 09:00-13:00 CEST, Virtual Event



Access to the tutorial is via the ECML-PKDD website.

Description

Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. The rise of deep nonlinear ML models has led to massive gains in predictive accuracy. Yet, we do not want such high accuracy to come at the expense of explainability. As a result, the field of Explainable AI (XAI) has emerged and has produced a collection of methods capable of explaining complex and diverse ML models.

In this tutorial, we give a structured overview of the basic approaches that have been proposed for XAI in the context of Deep Neural Networks (DNNs). In particular, we present motivations for such methods, their advantages and disadvantages, and their theoretical underpinnings. We also show how they can be extended and applied so that they deliver maximum usefulness in real-world scenarios.
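
To give a flavor of the simplest methods in this family, the sketch below computes a Gradient x Input attribution for a toy PyTorch classifier. The model and input are hypothetical placeholders for illustration only, not part of the tutorial material.

    # Minimal sketch of a gradient-based attribution (Gradient x Input).
    # The small classifier below is a hypothetical stand-in for any trained DNN.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    model.eval()

    x = torch.randn(1, 10, requires_grad=True)   # input to be explained
    scores = model(x)                            # class scores f(x)
    target = scores.argmax(dim=1).item()         # explain the predicted class

    scores[0, target].backward()                 # gradient of the class score w.r.t. x

    relevance = (x.grad * x).detach()            # per-feature attribution scores
    print(relevance)

Gradient x Input is only the most basic member of this family; Parts 2 and 3 below cover more refined methods, their theory, and how to evaluate them.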

This tutorial targets core as well as applied ML researchers. Core machine learning researchers may be interested in learning about the connections between the different explanation methods and the broad set of open questions, in particular how to extend XAI to new ML algorithms. Applied ML researchers may find it interesting to understand the strong assumptions behind standard validation procedures and why interpretability can be useful to further validate their models. They may also discover new tools for analyzing their data and extracting insight from it. Participants will benefit from having a technical background (computer science or engineering) and basic ML training.

Instructors

Wojciech Samek heads the Machine Learning Group at the Fraunhofer Heinrich Hertz Institute. He studied computer science at Humboldt University of Berlin, Heriot-Watt University, and the University of Edinburgh from 2004 to 2010, and received his Ph.D. from Technische Universität Berlin in 2014. His research interests include explainable AI, neural network compression, and federated learning.

Grégoire Montavon is a Research Associate in the Machine Learning Group at TU Berlin. He received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. from Technische Universität Berlin in 2013. His research focuses on techniques for interpreting ML models such as deep neural networks and kernel machines.

Structure

Part 1: Introduction to XAI (WS), 09:00-09:45
Live Q&A: 09:45-09:55
Part 2: Methods for Explaining DNNs (GM), 10:00-10:45
Live Q&A: 10:45-10:55
Part 3: Implementation, Theory, Evaluation, Extensions (GM), 11:00-11:45
Live Q&A: 11:45-11:55
Part 4: Applications (WS), 12:00-12:45
Live Q&A: 12:45-12:55

Course Material

Slides