While machine learning models have reached impressively high predictive accuracy, they are often perceived as black boxes. In sensitive applications such as medical diagnosis or self-driving cars, it must be guaranteed that the model relies on the right features. One would like to be able to interpret what the ML model has learned in order to identify biases and failure modes and to improve models accordingly. Interpretability is also needed in the sciences, where understanding how the ML model relates the various physical and biological variables is a prerequisite for building meaningful scientific hypotheses.
The present workshop aims to review recent techniques and establish new theoretical foundations for interpreting and understanding deep learning models. However, it will not stop at the methodological level, but also address the "now what?" question, taking the next step by exploring and extending the practical usefulness of these techniques. With speakers from various application domains (computer vision, NLP, neuroscience, medicine), the workshop will provide an opportunity for participants to learn from each other and initiate new interdisciplinary collaborations.
For background material on the topic, see our reading list.
An edited book based on some of the workshop contributions as well as invited contributions is now available.
|Session 1: Foundations|
|08:15 - 08:45||Opening Remarks||Klaus-Robert Müller|
|08:45 - 09:15||Invited Talk 1||Been Kim||Interpretability for data and neural network|
|09:15 - 09:45||Invited Talk 2||Dhruv Batra|
|09:45 - 10:30||Methods Talks (3x15 min)||Grégoire Montavon, Michael Y. Tsang|
|10:30 - 11:00||Coffee Break|
|11:00 - 11:15||Methods Talk (1x15 min)||Pieter-Jan Kindermans|
|11:15 - 11:45||Invited Talk 3||Sepp Hochreiter|
|11:45 - 12:15||Poster Session|
|Session 2: Applications|
|13:15 - 13:45||Poster Session|
|13:45 - 14:15||Invited Talk 4||Anh Nguyen||Understanding Neural Networks via Feature Visualization|
|14:15 - 14:45||Invited Talk 5||Honglak Lee||Hierarchical approaches for RL and generative models|
|14:45 - 15:00||Application Talk (1x15 min)||Wojciech Samek|
|15:00 - 15:30||Coffee Break|
|15:30 - 15:45||Application Talk (1x15 min)||Samuel Greydanus|
|15:45 - 16:15||Invited Talk 6||Rich Caruana|
|16:15 - 16:45||Invited Talk 7||Trevor Darrell||Interpreting and Justifying Visual Decisions and Actions|
|16:45 - 17:00||Closing Remarks||Lars Kai Hansen|
Call for Papers
We call for papers on the following topics: (1) interpretability of deep neural networks, (2) analysis and comparison of state-of-the-art models, (3) formalization of the interpretability problem, (4) interpretability for making ML socially acceptable, and (5) applications of interpretability.
Submissions must follow the NIPS format.
Papers are limited to eight pages (excluding references) and will go through a review process. A selection of accepted papers, together with the invited contributions, will be published in an edited book by Springer LNCS.
|Submission deadline||01 November 2017|
|Author notification||10 November 2017|
|Camera-ready version||24 November 2017|
|Workshop||09 December 2017|
- Klaus-Robert Müller (TU Berlin)
- Andrea Vedaldi (University of Oxford)
- Lars Kai Hansen (Technical University of Denmark)
- Wojciech Samek (Fraunhofer HHI)
- Grégoire Montavon (TU Berlin)