Tutorial at ECML/PKDD 2020
18 September 2020 (morning), Virtual Event
Being able to explain the predictions of machine learning models is important in critical applications such as medical diagnosis or autonomous systems. The rise of deep nonlinear ML models has led to substantial gains in predictive accuracy. Yet, we do not want such high accuracy to come at the expense of explainability. As a result, the field of Explainable AI (XAI) has emerged and has produced a collection of methods capable of explaining complex and diverse ML models.
In this tutorial, we give a structured overview of the basic approaches that have been proposed for XAI. In particular, we present the motivations for such methods, their advantages and disadvantages, and their theoretical underpinnings. We also show how they can be extended and applied so that they deliver maximum usefulness in real-world scenarios.
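To give a flavor of the kind of explanation methods covered, the following is a minimal, hypothetical sketch of Gradient × Input attribution, one of the simplest XAI techniques. It is illustrative only and not taken from the tutorial materials; for a linear model the gradient equals the weight vector, so each feature's relevance is simply its weight times its value.

```python
import numpy as np

def gradient_x_input(w, x):
    """Gradient x Input attribution for a linear model f(x) = w.x + b.

    For a linear model, the gradient of f with respect to x is w,
    so each feature i receives relevance R_i = w_i * x_i.
    (Illustrative sketch; names and setup are hypothetical.)
    """
    return w * x  # elementwise: each feature's contribution

# Toy example: a linear model with three input features.
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 3.0, 4.0])
R = gradient_x_input(w, x)

# For a linear model, the attributions sum exactly to the
# (bias-free) prediction w.x -- a conservation property that
# more general methods aim to preserve for deep networks.
assert np.isclose(R.sum(), w @ x)
```

For deep nonlinear networks, the gradient is computed by backpropagation rather than read off directly, and more refined propagation rules are needed to obtain faithful explanations; this is precisely the kind of extension the tutorial discusses.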
This tutorial targets both core and applied ML researchers. Core machine learning researchers may be interested in learning about the connections between the different explanation methods, and the broad set of open questions, in particular, how to extend XAI to new ML algorithms. Applied ML researchers may find it interesting to understand the strong assumptions behind standard validation procedures, and why interpretability can be useful to further validate their models. They may also discover new tools to analyze their data and extract insights from them. Participants will benefit from having a technical background (computer science or engineering) and basic ML training.
Wojciech Samek heads the Machine Learning Group at Fraunhofer Heinrich Hertz Institute. He studied computer science at Humboldt University of Berlin, Heriot-Watt University, and the University of Edinburgh from 2004 to 2010 and received a Ph.D. degree from the Technische Universität Berlin in 2014. His research interests are explainable AI, neural network compression, and federated learning.

Grégoire Montavon is a Research Associate in the Machine Learning Group at TU Berlin. He received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. degree from the Technische Universität Berlin in 2013. His research focuses on techniques for interpreting ML models such as deep neural networks and kernel machines.