Understanding Deep Learning
Team
Niklas Penzel, Christian Reimers, Paul Bodesheim
Motivation
Deep neural networks are tremendously successful in many applications, but end-to-end trained networks often result in black-box classifiers or predictors that are hard to understand. In this project, we address this problem. We develop methods to determine whether a feature is relevant to the decision of a deep neural network. Further, we develop methods to influence which features a neural network considers during training. This can help to learn more robust, trustworthy, and fair deep neural networks.
To utilize the power of deep learning in other fields of research, strong predictive performance alone is often not enough. Together with the Climate Informatics Group, we developed methods for unsupervised dimensionality reduction in climate data.
In this project, we employ Causal Inference to model supervised learning processes. This allows us to determine which features are relevant to a deep neural network.
Understanding which Features are Relevant to a Deep Neural Network
One research direction in this project is to determine which features are relevant to a deep neural network. Knowing which features a trained network relies on helps us to understand problems better and to anticipate which situations will be challenging for the network. This knowledge is especially important in safety- and security-critical applications such as autonomous driving or medical tasks, as well as in questions of algorithmic fairness.
We presented a solution based on Causal Inference. To this end, we frame supervised learning as a structural causal model. In this framework, we can determine whether a feature is used by the deep neural network with a conditional independence test: we test the independence between the feature of interest and the prediction of the deep neural network, given the ground-truth label. The main advantage of this method is that it can handle features that are not regions in the input but abstract properties of the input, such as symmetry or relative positioning.
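To illustrate the idea behind such a test, the following minimal Python sketch checks conditional independence between a scalar feature and a classifier's prediction, given the discrete ground-truth label, using a simple within-class permutation test. All names and the choice of test statistic are illustrative assumptions; this is only a sketch of the idea, not the exact test used in our publications.

import numpy as np

def conditional_independence_test(feature, prediction, label, n_permutations=1000, seed=0):
    # Permutation-based sketch of a test for H0: feature independent of prediction, given label.
    # feature: one scalar property per sample (e.g., a symmetry score),
    # prediction: the network's output score per sample,
    # label: the discrete ground-truth class per sample.
    feature = np.asarray(feature, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    label = np.asarray(label)
    rng = np.random.default_rng(seed)

    def statistic(f, p, y):
        # Sum of absolute within-class correlations between feature and prediction.
        total = 0.0
        for c in np.unique(y):
            idx = y == c
            if idx.sum() > 1 and f[idx].std() > 0 and p[idx].std() > 0:
                total += abs(np.corrcoef(f[idx], p[idx])[0, 1])
        return total

    observed = statistic(feature, prediction, label)
    null_stats = []
    for _ in range(n_permutations):
        permuted = feature.copy()
        # Shuffling the feature within each class keeps its dependence on the label
        # but destroys any additional dependence on the prediction.
        for c in np.unique(label):
            idx = np.where(label == c)[0]
            permuted[idx] = feature[rng.permutation(idx)]
        null_stats.append(statistic(permuted, prediction, label))
    return (1 + np.sum(np.array(null_stats) >= observed)) / (1 + n_permutations)

A small p-value indicates that the feature still carries information about the prediction once the label is accounted for, i.e., the network uses the feature; a large p-value is consistent with the feature being irrelevant to the decision.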
In our publications, we demonstrated this method on toy examples that clearly highlight its advantages, as well as on real-world datasets. We showed that it can be used to compare classifiers without data, that it can verify classifiers in medical tasks, and that it can help us understand the behavior of trained classifiers.
Publications
Reducing Bias in Pre-trained Models by Tuning while Penalizing Change.
International Conference on Computer Vision Theory and Applications (VISAPP). Pages 90-101. 2024.
Facing Asymmetry - Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions.
Asian Conference on Computer Vision (ACCV). 2024.
The Power of Properties: Uncovering the Influential Factors in Emotion Classification.
International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI). 2024.
Analyzing the Behavior of Cauliflower Harvest-Readiness Models by Investigating Feature Relevances.
ICCV Workshop on Computer Vision in Plant Phenotyping and Agriculture (CVPPA). Pages 572-581. 2023.
Investigating Neural Network Training on a Feature Level using Conditional Independence.
ECCV Workshop on Causality in Vision (ECCV-WS). Pages 383-399. 2022.
Conditional Dependence Tests Reveal the Usage of ABCD Rule Features and Bias Variables in Automatic Skin Lesion Classification.
CVPR ISIC Skin Image Analysis Workshop (CVPR-WS). Pages 1810-1819. 2021.
Conditional Adversarial Debiasing: Towards Learning Unbiased Classifiers from Biased Data.
DAGM German Conference on Pattern Recognition (DAGM-GCPR). Pages 48-62. 2021.
Investigating the Consistency of Uncertainty Sampling in Deep Active Learning.
DAGM German Conference on Pattern Recognition (DAGM-GCPR). Pages 159-173. 2021.
Determining the Relevance of Features for Deep Neural Networks.
European Conference on Computer Vision (ECCV). Pages 330-346. 2020.
Using Causal Inference to Globally Understand Black Box Predictors Beyond Saliency Maps.
International Workshop on Climate Informatics (CI). 2019.