Hui Yu, M.Sc.

| Address: | Computer Vision Group |
| | Department of Mathematics and Computer Science |
| | Friedrich Schiller University of Jena |
| | Inselplatz 5 |
| | 07743 Jena |
| | Germany |
| Phone: | +49 (0) 3641 9 46410 |
| E-mail: | hui (dot) yu (at) uni-jena (dot) de |
| Room: | 3023 |
Curriculum Vitae
| since 2025 | Research Associate / PhD Student |
| | Computer Vision Group, Friedrich Schiller University Jena |
| | Topic: “Recording the Biodiversity of Moths with Automated Camera Traps and Artificial Intelligence” |
| 2023 – 2025 | Software Developer |
| | Isarsoft GmbH |
| | Focus: Computer Vision, Real-time Object Detection |
| 2019 – 2022 | M.Sc. Medical Imaging and Data Processing |
| | Friedrich-Alexander-Universität Erlangen-Nürnberg |
| | Master Thesis: “Unstained White Blood Cells Classification Using Deep Learning” |
| 2015 – 2019 | B.Sc. Computer Science and Technology |
| | China University of Geosciences |
| | Bachelor Thesis: “Simulation of Beidou Satellite Navigation System with Unity3D” |
Research Interests
- Insect Localization and Detection
- Fine-grained Recognition and Classification
- Promptable Models
- Knowledge Integration and Biodiversity Analysis
Publications
2025
Hui Yu, Joachim Denzler, Dennis Böttger, Gunnar Brehm, Paul Bodesheim:
Exploiting Unlabeled Images via Pseudo-Labelling and Paste-In Augmentation for Insect Localisation in Automated Monitoring.
International Workshop Series on Camera Traps, AI, & Ecology (CamTrapAI). 2025.
[bibtex] [abstract]
Insect monitoring using an automated deep learning pipeline has become increasingly important in understanding the crisis of insect decline. Advanced model architectures trained with high-resolution images are essential to ensure the quality of insect localisation and species identification. Recent methods struggle with limited annotated data, which requires time-consuming manual labelling for bounding boxes and domain expert-level knowledge for insect categorisation. In this paper, we present a comprehensive benchmark of object detection models for this task, evaluating YOLOv9 and SSD architectures across three distinct datasets: EU-Moths, NID-Moths, and AMI-Traps. Our experiments reveal that high-resolution inputs are a dominant factor for accurate insect localisation, with performance improving substantially with larger image sizes. In addition, we perform cross-dataset validation to verify the generalisation capabilities of YOLOv9 on these datasets, justifying the choice of the AMI-Traps dataset as our pre-training dataset for obtaining a robust detector. Finally, to leverage large amounts of unlabeled data, we investigate a pseudo-labelling and paste-in data augmentation strategy. While this technique provides only modest improvements in overall detection metrics, qualitative analysis demonstrates that it enhances model robustness, enabling the detection of insects in challenging, low-contrast conditions where a strong baseline model would otherwise fail. In our experiments, YOLOv9 outperforms SSD on the one-class NID-Moths and AMI-Traps datasets with average precisions of 0.951 and 0.742, respectively. On the binary-class AMI-Traps dataset, a larger YOLOv9 model with a 1280x1280 input resolution achieves an average precision of 0.972 for the moth category. These results indicate the importance of data-centric approaches and high-resolution imagery for building effective automated insect monitoring systems.
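To give an idea of the pseudo-labelling and paste-in strategy described in the abstract, here is a minimal Python/NumPy sketch. It is illustrative only, not the published implementation: the `detector.predict` interface, the confidence threshold, and the helper names `pseudo_label` and `paste_in` are all assumptions made for this example.

```python
# Illustrative sketch of pseudo-labelling + paste-in augmentation.
# Assumptions (not from the paper's code): detector.predict(img) returns
# ((x1, y1, x2, y2), score) pairs with integer pixel coordinates, and
# images are H x W x 3 NumPy arrays.
import random

import numpy as np

CONF_THRESHOLD = 0.5  # assumed cut-off for accepting a detection as a pseudo-label


def pseudo_label(detector, unlabeled_images):
    """Collect confident detections from unlabeled trap images as insect crops."""
    crops = []
    for img in unlabeled_images:
        for (x1, y1, x2, y2), score in detector.predict(img):
            if score >= CONF_THRESHOLD:
                crops.append(img[y1:y2, x1:x2].copy())
    return crops


def paste_in(background: np.ndarray, crops, max_pastes=3):
    """Paste random pseudo-labelled crops onto a background trap image.

    Returns the augmented image and the bounding boxes of the pasted
    insects, which serve as additional training annotations.
    """
    img = background.copy()
    h, w = img.shape[:2]
    boxes = []
    for crop in random.sample(crops, k=min(max_pastes, len(crops))):
        ch, cw = crop.shape[:2]
        if ch >= h or cw >= w:
            continue  # skip crops larger than the background image
        y0 = random.randint(0, h - ch)
        x0 = random.randint(0, w - cw)
        img[y0:y0 + ch, x0:x0 + cw] = crop  # naive overwrite; blending would be a refinement
        boxes.append((x0, y0, x0 + cw, y0 + ch))
    return img, boxes
```

In this sketch, confident detections from a pre-trained detector act as free annotations, and pasting them onto real trap backgrounds increases the number and variety of labelled insects per training image, which is the data-centric idea the abstract credits for improved robustness in low-contrast conditions.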
