Understanding Deep Learning

Team

Niklas Penzel, Christian Reimers, Paul Bodesheim

Motivation

Deep neural networks are tremendously successful in many applications, but end-to-end trained networks often result in hard-to-understand black-box classifiers or predictors. In this project, we address this problem. We develop methods to determine whether a feature is relevant to the decision of a deep neural network. Furthermore, we develop methods to influence which features a neural network considers during training. This can help to learn more robust, trustworthy, and fair deep neural networks.
To utilize the power of deep learning in other fields of research, accurate predictions alone are often not enough. Together with the Climate Informatics Group, we developed methods for unsupervised dimensionality reduction in climate data.
In this project, we employ Causal Inference to model supervised learning processes. This allows us to determine which features are relevant to a deep neural network.

Understanding which Features are Relevant to a Deep Neural Network

One research direction in this project is to determine which features are relevant to a deep neural network. Knowing which features a deep neural network relies on helps us to better understand a problem and to anticipate which situations will be challenging for a trained network. This knowledge is especially important in safety- and security-critical tasks, such as autonomous driving or medical applications, and in questions of algorithmic fairness.

We presented a solution based on Causal Inference. To this end, we frame supervised learning as a structural causal model. In this framework, we can determine whether a feature is used by the deep neural network with a conditional independence test: we test whether the feature of interest and the prediction of the deep neural network are independent given the ground-truth label. The main advantage of this method is that it can handle features that are not regions in the input but abstract properties of the input, such as symmetry or relative positioning.
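As a rough illustration of this idea, the following sketch tests conditional independence between a scalar feature and the network's prediction given the ground-truth label using linear partial correlation. This is a simplification: it assumes approximately linear relationships, whereas our publications use conditional independence tests suited to more general dependencies. The function name, the synthetic data, and the use of a single scalar feature are illustrative assumptions, not our actual implementation.

import numpy as np
from scipy import stats

def partial_correlation_test(feature, prediction, label):
    """Test feature ⟂ prediction | label via linear partial correlation.

    feature, prediction, label: 1-D arrays of equal length (one entry per sample).
    Returns the partial correlation coefficient and its p-value; a small p-value
    suggests the network uses the feature beyond what the label already explains.
    """
    def residuals(y, z):
        # Regress y on z (with intercept) and return the residuals.
        Z = np.column_stack([np.ones_like(z), z])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return y - Z @ coef

    r_feat = residuals(feature, label)
    r_pred = residuals(prediction, label)
    return stats.pearsonr(r_feat, r_pred)

# Hypothetical usage: 500 samples, a symmetry score as the feature,
# the network's predicted probability, and the ground-truth label.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, 500).astype(float)
feature = label + rng.normal(scale=0.5, size=500)      # feature correlated with the label
prediction = label + rng.normal(scale=0.3, size=500)   # prediction driven by the label only
print(partial_correlation_test(feature, prediction, label))

In this synthetic example, feature and prediction are only correlated through the label, so the test should not reject conditional independence, i.e., the classifier would not be flagged as using the feature.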

In our publications, we demonstrated this method on toy examples that clearly highlight its advantages, as well as on real-world datasets. We showed that it can be used to compare classifiers without data, that it can verify classifiers in medical tasks, and that it can help us understand the behavior of trained classifiers.

Publications

2021
Conditional Dependence Tests Reveal the Usage of ABCD Rule Features and Bias Variables in Automatic Skin Lesion Classification
Christian Reimers, Niklas Penzel, Paul Bodesheim, Jakob Runge, and Joachim Denzler.
CVPR ISIC Skin Image Analysis Workshop (CVPR-WS). Pages 1810-1819. 2021.
2020
Determining the Relevance of Features for Deep Neural Networks
Christian Reimers, Jakob Runge, and Joachim Denzler.
European Conference on Computer Vision (ECCV). Pages 330-346. 2020.
2019
Using Causal Inference to Globally Understand Black Box Predictors Beyond Saliency Maps
Christian Reimers, Jakob Runge, and Joachim Denzler.
International Workshop on Climate Informatics (CI). 2019. DOI: 10.5065/y82j-f154