Niklas Penzel, M.Sc.

Address: Computer Vision Group, Department of Mathematics and Computer Science, Friedrich Schiller University Jena, Ernst-Abbe-Platz 2, 07743 Jena, Germany
Phone: +49 (0) 3641 9 46335
E-mail: niklas (dot) penzel (at) uni-jena (dot) de
Room: 1224
Curriculum Vitae
Since Dec. 2020: Research Associate at the Computer Vision Group, Friedrich Schiller University Jena
2020: Master Thesis: "The Bias Uncertainty Sampling introduced into an Active Learning System"
2018–2020: M.Sc. in Computer Science at the Friedrich Schiller University Jena
2018: Bachelor Thesis: "Lebenslanges Lernen von Klassifikationssystemen ohne Vorwissen und mit intelligenter Datenhaltung" (Lifelong Learning of Classification Systems without Previous Knowledge and with Smart Data Management)
2015–2018: B.Sc. in Computer Science at the Friedrich Schiller University Jena
Research Interests
- Active Learning
- Lifelong Learning
- Deep Learning
- Super Resolution
Publications
2021
Conditional Dependence Tests Reveal the Usage of ABCD Rule Features and Bias Variables in Automatic Skin Lesion Classification
Christian Reimers, Niklas Penzel, Paul Bodesheim, Jakob Runge, Joachim Denzler.
CVPR ISIC Skin Image Analysis Workshop (CVPR-WS), pages 1810–1819, 2021.
Abstract: Skin cancer is the most common form of cancer, and melanoma is the leading cause of cancer-related deaths. To improve the chances of survival, early detection of melanoma is crucial. Automated systems for classifying skin lesions can assist with initial analysis. However, if we expect people to entrust their well-being to an automatic classification algorithm, it is important to ensure that the algorithm makes medically sound decisions. We investigate this question by testing whether two state-of-the-art models use the features defined in the dermoscopic ABCD rule or whether they rely on biases. We use a method that frames supervised learning as a structural causal model, thus reducing the question whether a feature is used to a conditional dependence test. We show that this conditional dependence method yields meaningful results on data from the ISIC archive. Furthermore, we find that the selected models incorporate asymmetry, border and dermoscopic structures in their decisions but not color. Finally, we show that the same classifiers also use bias features such as the patient's age, skin color or the existence of colorful patches.
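The core idea of the paper is to reduce the question "does the classifier use feature X?" to a conditional dependence test between the feature and the model output, conditioned on further variables. The snippet below is only a minimal sketch of that idea, using a simple linear partial-correlation test on synthetic data; the tests and variables used in the paper may differ (more general, non-linear conditional independence tests are possible), and the names `asymmetry`, `score`, and `label` are purely illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def partial_correlation_test(x, y, z):
    """Test conditional dependence of x and y given z via a linear partial correlation:
    regress z out of both variables and correlate the residuals."""
    z = z.reshape(-1, 1) if z.ndim == 1 else z
    res_x = x - LinearRegression().fit(z, x).predict(z)
    res_y = y - LinearRegression().fit(z, y).predict(z)
    return stats.pearsonr(res_x, res_y)  # (statistic, p-value)

# Synthetic illustration: does the classifier's melanoma score depend on a lesion
# feature (e.g., asymmetry) once we condition on the ground-truth label?
rng = np.random.default_rng(0)
label = rng.integers(0, 2, size=500).astype(float)             # ground-truth diagnosis
asymmetry = label + rng.normal(scale=1.0, size=500)            # synthetic ABCD-style feature
score = 0.8 * label + 0.3 * asymmetry + rng.normal(scale=0.5, size=500)  # model output

r, p = partial_correlation_test(asymmetry, score, label)
print(f"partial correlation r={r:.3f}, p={p:.4f}")  # small p-value: feature appears to be used
```

A rejected conditional independence hypothesis (small p-value) indicates that the prediction still varies with the feature even after accounting for the label, which is the sense in which a feature is "used" here.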
Investigating the Consistency of Uncertainty Sampling in Deep Active Learning
Niklas Penzel, Christian Reimers, Clemens-Alexander Brust, Joachim Denzler.
DAGM German Conference on Pattern Recognition (DAGM-GCPR), pages 159–173, 2021.
Abstract: Uncertainty sampling is a widely used active learning strategy to select unlabeled examples for annotation. However, previous work hints at weaknesses of uncertainty sampling when combined with deep learning, where the amount of data is even more significant. To investigate these problems, we analyze the properties of the latent statistical estimators of uncertainty sampling in simple scenarios. We prove that uncertainty sampling converges towards some decision boundary. Additionally, we show that it can be inconsistent, leading to incorrect estimates of the optimal latent boundary. The inconsistency depends on the latent class distribution, more specifically on the class overlap. Further, we empirically analyze the variance of the decision boundary and find that the performance of uncertainty sampling is also connected to the overlap of the class regions. We argue that our findings could be the first step towards explaining the poor performance of uncertainty sampling combined with deep models.
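Uncertainty sampling repeatedly queries the pooled example the current model is least confident about, and the paper asks whether the decision boundary estimated this way converges to the optimal latent boundary. Below is a minimal, self-contained sketch of such a loop on synthetic 1-D data with a logistic-regression learner; it only illustrates the sampling strategy itself, not the experimental setup or the models analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two overlapping 1-D Gaussian classes; the optimal latent decision boundary is at 0.
rng = np.random.default_rng(0)
x_pool = np.concatenate([rng.normal(-1.0, 1.5, 500), rng.normal(1.0, 1.5, 500)])
y_pool = np.concatenate([np.zeros(500), np.ones(500)])

# Small seed set containing examples from both classes.
labeled = list(rng.choice(500, 5, replace=False)) + list(rng.choice(np.arange(500, 1000), 5, replace=False))
unlabeled = [i for i in range(len(x_pool)) if i not in labeled]

for _ in range(50):
    model = LogisticRegression().fit(x_pool[labeled].reshape(-1, 1), y_pool[labeled])
    confidence = model.predict_proba(x_pool[unlabeled].reshape(-1, 1)).max(axis=1)
    pick = unlabeled[int(np.argmin(confidence))]  # least-confident (most uncertain) example
    labeled.append(pick)
    unlabeled.remove(pick)

# Decision boundary implied by the final model (where the predicted probability is 0.5).
boundary = -model.intercept_[0] / model.coef_[0, 0]
print(f"estimated boundary: {boundary:.3f} (optimal boundary for this setup: 0)")
```

Because the queried examples cluster near the current boundary rather than being drawn i.i.d. from the pool, the resulting boundary estimate can be biased when the class distributions overlap, which is the kind of inconsistency the paper analyzes.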