Niklas Penzel, M.Sc.

Address: Computer Vision Group, Department of Mathematics and Computer Science, Friedrich Schiller University of Jena, Ernst-Abbe-Platz 2, 07743 Jena, Germany
Phone: +49 (0) 3641 9 46335
E-mail: niklas (dot) penzel (at) uni-jena (dot) de
Room: 1224
Curriculum Vitae
Since Dec. 2020: Research Associate at the Computer Vision Group, Friedrich Schiller University Jena
2020: Master Thesis: “The Bias Uncertainty Sampling introduced into an Active Learning System”
2018-2020: M.Sc. in Computer Science at the Friedrich Schiller University Jena
2018: Bachelor Thesis: “Lebenslanges Lernen von Klassifikationssystemen ohne Vorwissen und mit intelligenter Datenhaltung” (Lifelong Learning of Classification Systems without Previous Knowledge and with Smart Data Management)
2015-2018: B.Sc. in Computer Science at the Friedrich Schiller University Jena
Research Interests
- Active Learning
- Lifelong Learning
- Deep Learning
- Super Resolution
Publications
2022
Investigating Neural Network Training on a Feature Level using Conditional Independence
Niklas Penzel, Christian Reimers, Paul Bodesheim, and Joachim Denzler.
ECCV Workshop on Causality in Vision (ECCV-WS). 2022. (accepted)
There are still open questions about how the learned representations of deep models change during the training process. Understanding this process could aid in validating the training. Towards this goal, previous works analyze the training in the mutual information plane. We use a different approach and base our analysis on a method built on Reichenbach’s common cause principle. Using this method, we test whether the model utilizes information contained in human-defined features. Given such a set of features, we investigate how the relative feature usage changes throughout the training process. We analyze multiple networks training on different tasks, including melanoma classification as a real-world application. We find that over the training, models concentrate on features containing information relevant to the task. This concentration is a form of representation compression. Crucially, we also find that the selected features can differ between training from scratch and finetuning a pre-trained network.
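The core idea of testing whether a model utilizes a human-defined feature can be illustrated with a conditional independence test. The following is a minimal sketch, not the paper's actual test: it checks whether predictions and a feature remain correlated after conditioning on the label, using a simple partial-correlation approach on synthetic data (all names and the linear-regression residualization are illustrative assumptions).

```python
# Hedged sketch: does a model's prediction depend on a hand-crafted
# feature, conditioned on the ground-truth label? Here we approximate
# this with a partial-correlation test; the paper's method may differ.
import numpy as np
from scipy import stats

def partial_corr_test(feature, prediction, label):
    """Correlate the residuals of `feature` and `prediction` after
    regressing both on the conditioning variable `label`."""
    label = np.asarray(label, dtype=float).reshape(-1, 1)
    X = np.hstack([np.ones_like(label), label])  # intercept + label
    res_f = feature - X @ np.linalg.lstsq(X, feature, rcond=None)[0]
    res_p = prediction - X @ np.linalg.lstsq(X, prediction, rcond=None)[0]
    return stats.pearsonr(res_f, res_p)  # (r, p-value)

# Synthetic example: the "model" clearly uses the feature.
rng = np.random.default_rng(0)
label = rng.integers(0, 2, size=500).astype(float)
feature = label + rng.normal(scale=0.5, size=500)        # informative feature
prediction = 0.8 * feature + rng.normal(scale=0.1, size=500)
r, p = partial_corr_test(feature, prediction, label)
print(f"partial correlation r={r:.2f}, p={p:.3g}")
```

A small p-value indicates conditional dependence, i.e. the prediction carries information about the feature beyond what the label explains.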
2021
Conditional Dependence Tests Reveal the Usage of ABCD Rule Features and Bias Variables in Automatic Skin Lesion Classification
Christian Reimers, Niklas Penzel, Paul Bodesheim, Jakob Runge, and Joachim Denzler.
CVPR ISIC Skin Image Analysis Workshop (CVPR-WS). Pages 1810-1819. 2021.
Skin cancer is the most common form of cancer, and melanoma is the leading cause of cancer-related deaths. To improve the chances of survival, early detection of melanoma is crucial. Automated systems for classifying skin lesions can assist with initial analysis. However, if we expect people to entrust their well-being to an automatic classification algorithm, it is important to ensure that the algorithm makes medically sound decisions. We investigate this question by testing whether two state-of-the-art models use the features defined in the dermoscopic ABCD rule or whether they rely on biases. We use a method that frames supervised learning as a structural causal model, thus reducing the question of whether a feature is used to a conditional dependence test. We show that this conditional dependence method yields meaningful results on data from the ISIC archive. Furthermore, we find that the selected models incorporate asymmetry, border and dermoscopic structures in their decisions but not color. Finally, we show that the same classifiers also use bias features such as the patient's age, skin color or the existence of colorful patches.
Investigating the Consistency of Uncertainty Sampling in Deep Active Learning
Niklas Penzel, Christian Reimers, Clemens-Alexander Brust, and Joachim Denzler.
DAGM German Conference on Pattern Recognition (DAGM-GCPR). Pages 159-173. 2021. DOI: 10.1007/978-3-030-92659-5_10
Uncertainty sampling is a widely used active learning strategy to select unlabeled examples for annotation. However, previous work hints at weaknesses of uncertainty sampling when combined with deep learning, where the amount of data is even more significant. To investigate these problems, we analyze the properties of the latent statistical estimators of uncertainty sampling in simple scenarios. We prove that uncertainty sampling converges towards some decision boundary. Additionally, we show that it can be inconsistent, leading to incorrect estimates of the optimal latent boundary. The inconsistency depends on the latent class distribution, more specifically on the class overlap. Further, we empirically analyze the variance of the decision boundary and find that the performance of uncertainty sampling is also connected to the overlap of the class regions. We argue that our findings could be the first step towards explaining the poor performance of uncertainty sampling combined with deep models.
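The selection rule analyzed above can be sketched in a few lines. This is a generic illustration of uncertainty sampling, not the paper's experimental setup: given class probabilities from some classifier, we pick the examples with the highest predictive entropy, i.e. those closest to the decision boundary (the toy probability pool is an invented example).

```python
# Hedged sketch of uncertainty sampling: query the unlabeled examples
# whose predicted class probabilities are closest to uniform.
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return -(p * np.log(p)).sum(axis=1)

def uncertainty_sample(probs, k):
    """Indices of the k most uncertain (highest-entropy) examples."""
    return np.argsort(entropy(probs))[::-1][:k]

# Toy pool of classifier outputs over a binary task.
probs = np.array([[0.99, 0.01],
                  [0.55, 0.45],   # near the decision boundary
                  [0.90, 0.10],
                  [0.50, 0.50]])  # maximally uncertain
picked = uncertainty_sample(probs, 2)
print(picked)  # the two most ambiguous examples: indices 3 and 1
```

Because only points near the current boundary are ever queried, the labeled set is not an i.i.d. sample of the data distribution, which is the source of the consistency issues the paper analyzes.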