Laines Schmalwasser, M.Sc.

Address: Computer Vision Group
         Department of Mathematics and Computer Science
         Friedrich Schiller University of Jena
         Ernst-Abbe-Platz 2
         07743 Jena
         Germany
E-mail:  laines (dot) schmalwasser (at) dlr (dot) de
Room:    1212
Curriculum Vitae
since 2022   Research Associate / PhD Student
             Computer Vision Group, Friedrich Schiller University Jena &
             Data Analysis and Intelligence Group, DLR Institute of Data Science, Jena
             Topic: “Discover and Explore High-level, Human-interpretable Concepts to Improve the Interpretability of Neural Networks”
2020 – 2021  Research Assistant
             DLR Institute of Data Science, Jena
             Topic: “Exploration, Comparison and Validation of Probability Models and their Data”
2017 – 2020  M.Sc. Computer Science
             Friedrich Schiller University Jena
             Master Thesis: “How to Visualize Gaussian Mixture Models”
2013 – 2017  B.Sc. Computer Science
             2015 – 2017: Friedrich Schiller University Jena
             2013 – 2015: Free University Berlin
Research Interests
- Deep Learning
- Explainable AI
- Analyzing Model Training
Supervised Theses
- Christian Ickler: “Feature Steering via Multi-Task Learning”. Master thesis, 2024 (joint supervision with Jan Blunk)
Projects
LOKI: Collaboration of Aviation Operators and AI Systems
In the project Collaboration of Aviation Operators and AI Systems (LOKI), we analyse approaches to collaboration between humans and AI systems. An important building block is the investigation of metrics for detecting the state of the human partner. Within the project, we develop prototypes of domain-specific AI systems, such as a digital co-pilot, and use them to derive guidelines for designing the interface between users and AI systems.
Publications
2025
Laines Schmalwasser, Niklas Penzel, Joachim Denzler, Julia Niebling:
FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks.
Proceedings of the 42nd International Conference on Machine Learning (ICML). 2025. (Accepted at ICML 2025)
[bibtex] [abstract]
Concepts such as objects, patterns, and shapes are how humans understand the world. Building on this intuition, concept-based explainability methods aim to study the representations learned by deep neural networks in relation to human-understandable concepts. Here, Concept Activation Vectors (CAVs) are an important tool and can identify whether a model has learned a concept or not. However, the computational cost and time requirements of existing CAV computation pose a significant challenge, particularly in large-scale, high-dimensional architectures. To address this limitation, we introduce FastCAV, a novel approach that accelerates the extraction of CAVs by up to \maxspeedup (on average \avgspeedup). We provide a theoretical foundation for our approach and give concrete assumptions under which it is equivalent to established SVM-based methods. Our empirical results demonstrate that CAVs calculated with FastCAV maintain similar performance while being more efficient and stable. In downstream applications, i.e., concept-based explanation methods, we show that FastCAV can act as a replacement leading to equivalent insights. Hence, our approach enables previously infeasible investigations of deep models, which we demonstrate by tracking the evolution of concepts during model training.
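As context for the abstract above, here is a minimal, hypothetical sketch (not the authors' released implementation) of the two CAV computations it contrasts: the established SVM-based approach and a mean-difference direction of the kind a fast method can exploit. It assumes layer activations for concept and random examples have already been extracted as NumPy arrays; the function names are illustrative.

import numpy as np
from sklearn.svm import LinearSVC

def cav_svm(concept_acts, random_acts):
    # Classical CAV: the normal vector of a linear classifier that
    # separates concept activations (label 1) from random ones (label 0).
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LinearSVC(C=0.01, max_iter=10000).fit(X, y)
    v = clf.coef_.ravel()
    return v / np.linalg.norm(v)

def cav_mean_difference(concept_acts, random_acts):
    # Fast alternative: the normalized difference of the class means.
    # No iterative optimization is needed, so it is far cheaper per CAV;
    # under suitable distributional assumptions such a direction agrees
    # with the SVM solution, which is the kind of equivalence the paper
    # establishes formally.
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)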
2024
Laines Schmalwasser, Jakob Gawlikowski, Joachim Denzler, Julia Niebling:
Exploiting Text-Image Latent Spaces for the Description of Visual Concepts.
International Conference on Pattern Recognition (ICPR). Pages 109-125. 2024.
[bibtex] [doi] [abstract]
Concept Activation Vectors (CAVs) offer insights into neural network decision-making by linking human-friendly concepts to the model's internal feature extraction process. However, when a new set of CAVs is discovered, they must still be translated into a human-understandable description. For image-based neural networks, this is typically done by visualizing the most relevant images of a CAV, while the determination of the concept is left to humans. In this work, we introduce an approach to aid the interpretation of newly discovered concept sets by suggesting textual descriptions for each CAV. This is done by mapping the most relevant images representing a CAV into a text-image embedding, where a joint description of these relevant images can be computed. We propose encoding the most relevant receptive fields instead of full images. We demonstrate the capabilities of this approach in multiple experiments with and without given CAV labels, showing that the proposed approach provides accurate descriptions for the CAVs and reduces the challenge of concept interpretation.
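To make the core idea of the abstract concrete, below is a small, hypothetical sketch of the matching step, assuming CLIP as the text-image embedding and a fixed list of candidate descriptions; the paper's actual pipeline (receptive-field extraction, description generation) is more involved, and describe_concept is an illustrative name.

import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def describe_concept(crops, candidates):
    # Embed the most relevant image regions of a CAV (a list of PIL
    # images) together with all candidate texts, average the image
    # embeddings into one joint concept embedding, and return the
    # candidate text with the highest cosine similarity to it.
    inputs = processor(text=candidates, images=crops,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds.mean(dim=0)
    img = img / img.norm()
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return candidates[int((txt @ img).argmax())]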