Matthias Körschens, Solveig Franziska Bucher, Paul Bodesheim, Josephine Ulrich, Joachim Denzler, Christine Römermann:
Determining the Community Composition of Herbaceous Species from Images using Convolutional Neural Networks.
Ecological Informatics.
80:
Article 102516.
2024.
[bibtex]
[pdf]
[web]
[doi]
[abstract]
Global change has a detrimental impact on the environment and alters biodiversity patterns, which can be observed, among other things, by analyzing changes in the composition of plant communities. Typically, vegetation relevés are done manually, which is time-consuming, laborious, and subjective. An automatic system for such an analysis that can also identify co-occurring species would be beneficial, as it is fast, effortless to use, and consistent. Here, we introduce such a system based on Convolutional Neural Networks for automatically predicting the species-wise plant cover. The system is trained on freely available image data of herbaceous plant species from web sources and on plant cover estimates made by experts. With a novel extension of our original approach, the system can even be applied directly to vegetation images without requiring such cover estimates. Our extended approach, which does not utilize dedicated training data, performs comparably to humans with respect to the relative species abundances in the vegetation relevés. When trained on dedicated training annotations, it reflects the original estimates more closely than (independent) human experts who manually analyzed the same sites. Our method is, with little adaptation, usable in novel domains and could be used to analyze plant community dynamics and the responses of different plant species to environmental changes.
Matthias Körschens, Solveig Franziska Bucher, Christine Römermann, Joachim Denzler:
Improving Data Efficiency for Plant Cover Prediction with Label Interpolation and Monte-Carlo Cropping.
DAGM German Conference on Pattern Recognition (DAGM-GCPR).
2023.
[bibtex]
[pdf]
[web]
[supplementary]
[abstract]
Plant community composition is an essential indicator of environmental changes and is therefore usually analyzed in ecological field studies in terms of the so-called plant cover. The manual acquisition of this kind of data is time-consuming, laborious, and prone to human error. Automated camera systems can collect high-resolution images of the surveyed vegetation plots at a high frequency. In combination with subsequent algorithmic analysis, it is possible to objectively extract information on plant community composition quickly and with little human effort. An automated camera system can easily collect the large amounts of image data necessary to train a Deep Learning system for automatic analysis. However, due to the amount of work required to annotate vegetation images with plant cover data, only a few labeled samples are available. As automated camera systems can collect many pictures without labels, we introduce an approach that interpolates the sparse labels of the collected vegetation plot time series onto the densely sampled but unlabeled intermediate images, artificially increasing our training dataset to seven times its original size. Moreover, we introduce a new method we call Monte-Carlo Cropping. This approach trains on a collection of cropped parts of the training images, which handles high-resolution images efficiently, implicitly augments the training data, and speeds up training. We evaluate both approaches on a plant cover dataset containing images of herbaceous plant communities and find that our methods lead to improvements in the investigated species, community, and segmentation metrics.
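A minimal sketch of the two ideas above, assuming that the per-species cover labels can be interpolated linearly over time and that random crops reuse the plot-level labels; the helper names, array shapes, and the use of NumPy are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: label interpolation over a plot time series and
# Monte-Carlo-style random cropping of high-resolution images.
# All names and shapes are assumptions, not taken from the paper's code.
import numpy as np

def interpolate_cover_labels(label_times, labels, image_times):
    """label_times: (T_l,) timestamps with expert annotations,
    labels: (T_l, S) per-species cover values at those timestamps,
    image_times: (T_i,) timestamps of all captured images.
    Returns (T_i, S) linearly interpolated pseudo-labels."""
    labels = np.asarray(labels, dtype=float)
    return np.stack(
        [np.interp(image_times, label_times, labels[:, s])
         for s in range(labels.shape[1])],
        axis=1,
    )

def sample_random_crops(image, crop_size, n_crops, rng=None):
    """Draw n_crops random square crops from a high-resolution image of
    shape (H, W, C); each crop inherits the image-level cover label."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    crops = []
    for _ in range(n_crops):
        top = int(rng.integers(0, h - crop_size + 1))
        left = int(rng.integers(0, w - crop_size + 1))
        crops.append(image[top:top + crop_size, left:left + crop_size])
    return crops
```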
Matthias Körschens, Solveig Franziska Bucher, Christine Römermann, Joachim Denzler:
Unified Automatic Plant Cover and Phenology Prediction.
ICCV Workshop on Computer Vision in Plant Phenotyping and Agriculture (CVPPA).
2023.
[bibtex]
[pdf]
[abstract]
The composition and phenology of plant communities are paramount indicators of environmental changes, especially climate change, and are therefore the subject of many ecological studies. While species composition and phenology are usually monitored by ecologists directly in the field, this process is slow, laborious, and prone to human error. In contrast, automated camera systems with intelligent image analysis methods can provide fast analyses with a high temporal resolution and are therefore highly advantageous for ecological research. Methods already exist that can analyze the plant community composition from images, and others that investigate the phenology of plants. However, there are no automatic approaches that analyze the plant community composition together with the phenology of the same community. We aim to close this gap by combining an existing plant cover prediction method based on convolutional neural networks with a novel phenology prediction module. The module builds on the species- and pixel-wise occurrence probabilities generated during the plant cover prediction process and thereby significantly improves the quality of the phenology predictions compared to training plant cover and phenology prediction in isolation. We evaluate our approach by comparing the time trends of the observed and predicted phenology values on the InsectArmageddon dataset, which comprises cover and phenology data of eight herbaceous plant species. We find that our method significantly outperforms two dataset-statistics-based prediction baselines as well as a naive baseline that does not integrate any information from the plant cover prediction module.
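How a phenology module might consume the species- and pixel-wise occurrence probabilities mentioned in the abstract can be pictured with the small PyTorch head below; the pooled statistics, layer sizes, and the name PhenologyHead are hypothetical illustration choices, not the published architecture.

```python
# Hypothetical sketch: a small head that maps species- and pixel-wise
# occurrence probabilities (B, S, H, W) to per-species phenology-stage
# scores (B, S, n_stages). Not the paper's implementation.
import torch
import torch.nn as nn

class PhenologyHead(nn.Module):
    def __init__(self, n_stages, hidden=64):
        super().__init__()
        # Two per-species statistics (mean and max occurrence probability)
        # are mapped to a phenology-stage distribution.
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_stages),
        )

    def forward(self, occ_probs):
        mean_p = occ_probs.mean(dim=(2, 3))           # (B, S)
        max_p = occ_probs.amax(dim=(2, 3))            # (B, S)
        feats = torch.stack([mean_p, max_p], dim=-1)  # (B, S, 2)
        return self.mlp(feats)                        # (B, S, n_stages)
```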
Matthias Körschens, Paul Bodesheim, Christine Römermann, Solveig Franziska Bucher, Mirco Migliavacca, Josephine Ulrich, Joachim Denzler:
Automatic Plant Cover Estimation with Convolutional Neural Networks.
Computer Science for Biodiversity Workshop (CS4Biodiversity), INFORMATIK 2021.
Pages 499-516.
2021.
[bibtex]
[pdf]
[doi]
[abstract]
Monitoring the responses of plants to environmental changes is essential for plant biodiversity research. However, this is currently still done manually by botanists in the field. This work is very laborious, and the obtained data, though collected following a standardized method for estimating plant coverage, are usually subjective and have a coarse temporal resolution. To remedy these shortcomings, we investigate approaches using convolutional neural networks (CNNs) to automatically extract the relevant data from images, focusing on plant community composition and the species-wise coverage of 9 herbaceous plant species. To this end, we investigate several standard CNN architectures and different pretraining methods. We find that we outperform our previous approach at higher image resolutions using a custom CNN with a mean absolute error of 5.16%. In addition to these investigations, we also conduct an error analysis based on the temporal aspect of the plant cover images. This analysis gives insight into where problems for automatic approaches lie, such as occlusion and likely misclassifications caused by temporal changes.
Matthias Körschens, Paul Bodesheim, Christine Römermann, Solveig Franziska Bucher, Mirco Migliavacca, Josephine Ulrich, Joachim Denzler:
Weakly Supervised Segmentation Pretraining for Plant Cover Prediction.
DAGM German Conference on Pattern Recognition (DAGM-GCPR).
Pages 589-603.
2021.
[bibtex]
[pdf]
[doi]
[supplementary]
[abstract]
Automated plant cover prediction can be a valuable tool for botanists, as plant cover estimations are a laborious and recurring task in environmental research. Upon examination of the images typically involved in this task, it becomes apparent that the task is ill-posed and that successful training on such images alone, without external data, is nearly impossible. While a previous approach includes pretraining on a domain-related dataset containing plants in natural settings, we argue that regular classification training on such data is insufficient. To solve this problem, we propose a novel pretraining pipeline that utilizes weakly supervised object localization on images with only class annotations to generate segmentation maps that can be exploited for a second pretraining step. We utilize different pooling methods during classification pretraining and evaluate and compare their effects on plant cover prediction. For this evaluation, we focus primarily on the visible parts of the plants. To this end, and in contrast to previous works, we created a small dataset containing segmentations of plant cover images so that the benefit of our method can be evaluated numerically. We find that our segmentation pretraining approach outperforms classification pretraining and especially aids the recognition of less prevalent plants in the plant cover dataset.
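The weakly supervised localization step can be illustrated with a generic class-activation-map (CAM) recipe: a classifier trained only with image-level labels yields activation maps that are thresholded into pseudo-segmentation masks for the second pretraining stage. The ResNet-50 backbone, the fixed threshold, and the function name below are placeholder choices and not the paper's exact pipeline.

```python
# Generic CAM-based pseudo-mask generation (illustrative, not the paper's code).
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
features = torch.nn.Sequential(*list(model.children())[:-2])  # conv feature maps
fc_weights = model.fc.weight                                  # (n_classes, C)

@torch.no_grad()
def pseudo_masks(images, class_ids, threshold=0.5):
    """images: (B, 3, H, W); class_ids: (B,) image-level labels.
    Returns (B, H, W) binary pseudo-segmentation masks from thresholded CAMs."""
    fmap = features(images)                       # (B, C, h, w)
    w = fc_weights[class_ids]                     # (B, C)
    cam = torch.einsum("bc,bchw->bhw", w, fmap)   # class-weighted feature sum
    cam = F.relu(cam)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    cam = F.interpolate(cam.unsqueeze(1), size=images.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze(1)
    return (cam > threshold).float()
```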
Matthias Körschens, Paul Bodesheim, Christine Römermann, Solveig Franziska Bucher, Josephine Ulrich, Joachim Denzler:
Towards Confirmable Automated Plant Cover Determination.
ECCV Workshop on Computer Vision Problems in Plant Phenotyping (CVPPP).
2020.
[bibtex]
[pdf]
[web]
[doi]
[supplementary]
[abstract]
Changes in plant community composition reflect environmental changes such as changes in land use and climate. While we nowadays have the means to record such compositional changes automatically, we still lack methods to automatically analyze the large amounts of generated data. We propose a novel approach based on convolutional neural networks for analyzing the plant community composition while making the results explainable for the user. To realize this, our approach generates a semantic segmentation map while predicting the cover percentages of the plants in the community. The segmentation map is learned in a weakly supervised way based only on plant cover data and therefore does not require dedicated segmentation annotations. Our approach achieves a mean absolute error of 5.3% for plant cover prediction on our newly introduced dataset of 9 herbaceous plant species with an imbalanced distribution, and it generates segmentation maps in which the locations of the most prevalent plants are correctly indicated in many images.
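The weak supervision described here can be pictured as aggregating per-pixel species scores into image-level cover fractions and supervising only that aggregate, so that a segmentation-like map emerges as a by-product; tensor shapes and function names in the sketch below are assumptions, not the published code.

```python
# Illustrative sketch: deriving cover percentages from per-pixel species
# probabilities and supervising only the aggregated values.
import torch

def cover_from_probabilities(pixel_probs):
    """pixel_probs: (B, S, H, W) per-pixel, per-species scores in [0, 1].
    Returns (B, S) predicted cover fractions (mean over all pixels)."""
    return pixel_probs.mean(dim=(2, 3))

def cover_loss(pixel_probs, target_cover):
    """Mean absolute error between aggregated pixel scores and the
    plot-level cover annotations target_cover of shape (B, S)."""
    return (cover_from_probabilities(pixel_probs) - target_cover).abs().mean()
```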