Understanding Deep Learning

Contact

Niklas Penzel

Problem Statement

While modern deep learning models yield remarkable results when trained on abundant data, they fundamentally remain “black boxes.” This opacity poses a significant barrier to their widespread adoption in safety-critical sectors such as medicine. To build domain experts’ trust in a model’s predictions, we must be able to explain its underlying decision-making process. Explainability also serves as a quality-assurance measure: it lets us verify that a trained model relies on robust, real-world signals rather than spurious correlations, moving evaluation beyond simple performance metrics.

Our Approach

Our project tackles this opacity from two complementary angles:

  1. Global Explanations: Decoding the overarching representations and features learned by neural networks across an entire dataset.
  2. Local Interventional Explanations: Understanding the causal drivers behind a neural network’s specific prediction for an individual input (a minimal sketch follows this list).
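
As an illustration of the second angle, consider the following minimal sketch of a gradual intervention: we apply a controlled, stepwise change to one human-interpretable property of a single input (here, image brightness) and trace how the model’s prediction moves along the intervention path. The stand-in model and the chosen property are assumptions for illustration only, not our actual pipeline.

```python
import numpy as np

# Stand-in "model": a logistic score on mean intensity (illustrative only).
def model(image: np.ndarray) -> float:
    """Return a class-1 probability for a grayscale image in [0, 1]."""
    score = 8.0 * (image.mean() - 0.5)
    return 1.0 / (1.0 + np.exp(-score))

def intervene_brightness(image: np.ndarray, alpha: float) -> np.ndarray:
    """do(brightness): shift all pixel values by alpha, clipped to [0, 1]."""
    return np.clip(image + alpha, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(0.3, 0.6, size=(32, 32))  # one individual input

# Sweep the intervention strength and record the prediction at each step.
alphas = np.linspace(-0.3, 0.3, 13)
preds = np.array([model(intervene_brightness(image, a)) for a in alphas])

# The finite-difference slope along the path indicates how strongly this
# property drives the prediction for this specific input.
slopes = np.gradient(preds, alphas)
for a, p, s in zip(alphas, preds, slopes):
    print(f"alpha={a:+.2f}  p(class 1)={p:.3f}  slope={s:+.3f}")
```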

Across both approaches, our work is grounded in causal reasoning. By leveraging Structural Causal Models (SCMs) and Reichenbach’s Common Cause Principle, which states that any statistical dependence between two variables must stem from one causing the other or from a shared common cause, we aim to disentangle true causal relationships from mere statistical correlations.
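
As a concrete toy example of this reasoning, the sketch below runs a simple partial-correlation test for conditional independence: if a feature X and a target Y remain dependent after conditioning on a candidate common cause Z, then Z alone cannot explain their correlation. The linear test and the synthetic variables are illustrative assumptions; real analyses typically require more general, nonlinear conditional independence tests.

```python
import numpy as np
from scipy import stats

def partial_corr_test(x, y, z):
    """Test X independent of Y given Z via residual (partial) correlation.

    Regress X and Y on Z, then correlate the residuals. A small p-value
    suggests X and Y remain dependent even after accounting for Z.
    """
    Z = np.column_stack([np.ones_like(z), z])           # intercept + Z
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # X residuals
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # Y residuals
    return stats.pearsonr(rx, ry)                       # (statistic, p-value)

rng = np.random.default_rng(0)
z = rng.normal(size=2000)                       # candidate common cause
x = z + 0.1 * rng.normal(size=2000)             # feature driven by z
y_spurious = z + 0.1 * rng.normal(size=2000)    # correlated with x via z only
y_causal = x + 0.1 * rng.normal(size=2000)      # directly driven by x

print(partial_corr_test(x, y_spurious, z))  # large p: dependence vanishes given z
print(partial_corr_test(x, y_causal, z))    # tiny p: dependence remains
```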

To ensure these explanations are actually useful, we map them to human-understandable concepts. By translating complex neural network behaviors into the natural language and features used by domain experts, we bridge the gap between the model and the people who rely on it. We actively validate our approaches across various real-world application domains, including medical imaging and digital agriculture.

Publications

2026
Niklas Penzel, Daniel Scheliga, Hannes Oppermann, Patrick Mäder, Jens Haueisen, Joachim Denzler, Marco Seeland:
Model utility and explainability in federated learning - A case study in healthcare using fundus oculi datasets.
Journal of Biomedical Informatics 177: 105010. 2026.
Niklas Penzel, Joachim Denzler:
Locally Explaining Prediction Behavior via Gradual Interventions and Measuring Property Gradients.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). Pages 7398-7408. 2026.
2025
Jan Blunk, Paul Bodesheim, Joachim Denzler:
Adaptive Model Selection for Expanded Post Hoc Debiasing and Mitigating Varying Degrees of Spurious Correlations.
International Conference on Computer Analysis of Images and Patterns (CAIP). Pages 101-111. 2025.
2024
Niklas Penzel, Gideon Stein, Joachim Denzler:
Reducing Bias in Pre-trained Models by Tuning while Penalizing Change.
International Conference on Computer Vision Theory and Applications (VISAPP). Pages 90-101. 2024.
Tim Büchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim Denzler:
Facing Asymmetry - Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions.
Asian Conference on Computer Vision (ACCV). 2024.
Tim Büchner, Niklas Penzel, Orlando Guntinas-Lichius, Joachim Denzler:
The Power of Properties: Uncovering the Influential Factors in Emotion Classification.
International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI). 2024.
2023
Niklas Penzel, Jana Kierdorf, Ribana Roscher, Joachim Denzler:
Analyzing the Behavior of Cauliflower Harvest-Readiness Models by Investigating Feature Relevances.
ICCV Workshop on Computer Vision in Plant Phenotyping and Agriculture (CVPPA). Pages 572-581. 2023.
2022
Niklas Penzel, Christian Reimers, Paul Bodesheim, Joachim Denzler:
Investigating Neural Network Training on a Feature Level using Conditional Independence.
ECCV Workshop on Causality in Vision (ECCV-WS). Pages 383-399. 2022.
2021
Christian Reimers, Niklas Penzel, Paul Bodesheim, Jakob Runge, Joachim Denzler:
Conditional Dependence Tests Reveal the Usage of ABCD Rule Features and Bias Variables in Automatic Skin Lesion Classification.
CVPR ISIC Skin Image Analysis Workshop (CVPR-WS). Pages 1810-1819. 2021.
Christian Reimers, Paul Bodesheim, Jakob Runge, Joachim Denzler:
Conditional Adversarial Debiasing: Towards Learning Unbiased Classifiers from Biased Data.
DAGM German Conference on Pattern Recognition (DAGM-GCPR). Pages 48-62. 2021.
Niklas Penzel, Christian Reimers, Clemens-Alexander Brust, Joachim Denzler:
Investigating the Consistency of Uncertainty Sampling in Deep Active Learning.
DAGM German Conference on Pattern Recognition (DAGM-GCPR). Pages 159-173. 2021.
2020
Christian Reimers, Jakob Runge, Joachim Denzler:
Determining the Relevance of Features for Deep Neural Networks.
European Conference on Computer Vision (ECCV). Pages 330-346. 2020.
2019
Christian Reimers, Jakob Runge, Joachim Denzler:
Using Causal Inference to Globally Understand Black Box Predictors Beyond Saliency Maps.
International Workshop on Climate Informatics (CI). 2019.