Grégoire Montavon

Presentation:

Understanding the Decisions of Deep Neural Networks

The rapid growth of dataset sizes and computational power offers new opportunities for extracting complex nonlinear relations from real-world data. Deep neural networks (DNNs) have proven very effective at converting large amounts of data into complex, highly structured predictive models. In practice, it is not only important to train these models to high accuracy; one must also ensure that the learned statistical relations are meaningful and not based on data collection artefacts. In this talk, practical examples will be given to illustrate the need to gain interpretable insight into the model, specifically, into which input features a deep neural network or a kernel machine uses to reach a certain decision. A number of techniques will then be presented, in particular layer-wise relevance propagation (LRP), which produces interpretable explanations of DNN decisions. Finally, some recent work in explainable ML will be discussed, such as techniques to extract summaries of overall ML behavior, approaches to systematically extend explainability to ML models beyond DNN classifiers, as well as the need to objectively assess the quality of the produced explanations.
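For readers unfamiliar with LRP, the sketch below illustrates the basic idea on a tiny network: the prediction score is redistributed backwards, layer by layer, onto the input features, here using the LRP-epsilon rule. The two-layer ReLU network, random weights, and stabilizer value are hypothetical placeholders for illustration, not the models or settings discussed in the talk.

    import numpy as np

    def lrp_linear(a, W, b, R_out, eps=1e-6):
        # LRP-epsilon rule for one linear layer: redistribute the output
        # relevance R_out onto the inputs in proportion to each input's
        # contribution a_j * W_jk to the pre-activation z_k.
        z = a @ W + b                              # forward pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by zero
        s = R_out / z
        return a * (W @ s)

    # Hypothetical two-layer ReLU network with random weights (illustration only)
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

    x = rng.normal(size=4)
    h = np.maximum(0, x @ W1 + b1)     # hidden activations
    y = h @ W2 + b2                    # output scores

    R = np.zeros_like(y)
    R[y.argmax()] = y[y.argmax()]      # start from the winning class score
    R = lrp_linear(h, W2, b2, R)       # propagate through the top layer
    R = lrp_linear(x, W1, b1, R)       # per-input-feature relevance
    print(R, R.sum())                  # total relevance is approximately conserved

The conservation check at the end reflects a core property of LRP: the total relevance assigned to the input features approximately equals the output score being explained.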

