Animal Locomotion Analysis

Oliver Mothes, Daniel Haase, Manuel Amthor


The detailed understanding of animal locomotion plays an important role in many fields of research, e.g., biology, motion science, and robotics. To analyze the locomotor system in vivo, high-speed X-ray acquisition is applied, where the retrieval of anatomical landmarks is of main interest. To this day, the evaluation of these sequences is a very time-consuming task, since human experts have to manually annotate anatomical landmarks in single images. Automating this task with a minimum of user interaction is therefore urgently needed.
In this project, computer vision principles are combined with machine learning methods to tackle this automation task.

X-ray Animal Skeleton Tracking

The detailed understanding of animals in locomotion is a relevant field of research in biology, biomechanics, and robotics. To examine the locomotor system of birds in vivo and in a surgically non-invasive manner, high-speed X-ray acquisition is the state of the art. For a biological evaluation, it is crucial to locate relevant anatomical structures of the locomotor system. There is an urgent need for automating this task, as vast amounts of data exist and manual annotation is extremely time-consuming. We present a biologically motivated skeleton model tracking framework based on a pictorial structures approach extended by robust sub-template matching. This combination makes it possible to deal with severe self-occlusions and challenging ambiguities. In contrast to model-driven methods, which require a substantial amount of labeled training samples, our approach is entirely data-driven and can easily handle unseen cases. It is therefore well suited for large-scale biological applications with a minimum of manual interaction. We validate the performance of our approach on 24 real-world X-ray locomotion datasets and achieve results comparable to established methods while clearly outperforming more general approaches.
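The core idea of sub-template matching can be illustrated with a minimal NumPy sketch: the landmark template is split into a grid of sub-templates that are matched independently, so a partly occluded landmark can still be located by its visible parts. The function names, the 2x2 grid, and the exhaustive normalized cross-correlation search are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_subtemplates(frame, template, grid=(2, 2)):
    """Split `template` into a grid of sub-templates and match each one
    independently against `frame` by exhaustive search.  Each sub-template
    yields its own hypothesis for the template's top-left position, so a
    fusion step can later down-weight occluded (badly scoring) parts."""
    th, tw = template.shape
    sh, sw = th // grid[0], tw // grid[1]
    hypotheses = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            sub = template[gy * sh:(gy + 1) * sh, gx * sw:(gx + 1) * sw]
            best_score, best_pos = -1.0, (0, 0)
            for y in range(frame.shape[0] - sh + 1):
                for x in range(frame.shape[1] - sw + 1):
                    s = ncc(frame[y:y + sh, x:x + sw], sub)
                    if s > best_score:
                        # Convert the sub-template hit back to the position
                        # of the full template's top-left corner.
                        best_score, best_pos = s, (y - gy * sh, x - gx * sw)
            hypotheses.append((best_score, best_pos))
    return hypotheses
```

In an unoccluded frame all sub-templates agree on the same position; under occlusion, only the hypotheses of the visible sub-templates remain reliable, which is what the fusion step exploits.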

Animal Locomotion Analysis using Augmented Active Appearance Models

For many fundamental problems and applications in biomechanics, biology, and robotics, an in-depth understanding of animal locomotion is essential. To analyze the locomotion of animals, high-speed X-ray videos are recorded, in which anatomical landmarks of the locomotor system are of main interest and must be located. To date, several thousand sequences have been recorded, which makes a manual annotation of all landmarks practically impossible. Therefore, an automation of X-ray landmark tracking in locomotion scenarios is worthwhile. However, tracking all landmarks of interest is a very challenging task, as severe self-occlusions of the animal and low contrast are present in the images due to the X-ray modality. For this reason, existing approaches are currently only applicable to very specific subsets of anatomical landmarks. In contrast, our goal is to present a holistic approach which models all anatomical landmarks in one consistent, probabilistic framework. While active appearance models (AAMs) provide a reasonable global modeling framework, they yield poor fitting results when applied to the full set of landmarks. In this paper, we propose to augment the AAM fitting process by imposing constraints from various sources. We derive a general probabilistic fitting approach and show how results of subset AAMs, local tracking, anatomical knowledge, and epipolar constraints can be included. The evaluation of our approach is based on 32 real-world datasets of five bird species which contain 175,942 ground-truth landmark positions provided by human experts. We show that our method clearly outperforms standard AAM fitting and provides reasonable tracking results for all landmark types. In addition, we show that the tracking accuracy of our approach is even sufficient to provide reliable three-dimensional landmark estimates for calibrated datasets.
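The probabilistic fusion of several constraint sources can be illustrated with a toy example: if each cue (e.g., a subset AAM fit, a local tracker, and an anatomical prior) yields an independent Gaussian estimate of a landmark position, the combined MAP estimate is the precision-weighted mean. This is a minimal sketch under the assumption of isotropic Gaussian cues, not the paper's actual augmented fitting procedure.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Fuse independent 2-D Gaussian position cues into one MAP estimate.

    For Gaussian likelihoods, maximizing the product of densities is
    equivalent to a precision-weighted average of the individual means;
    the fused variance is the inverse of the summed precisions."""
    means = np.asarray(means, dtype=float)          # shape (n_cues, 2)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions[:, None] * means).sum(axis=0)
    return fused_mean, fused_var
```

A confident cue (small variance) dominates the estimate, while an unreliable one (e.g., a local tracker drifting under occlusion) contributes little, which is the behavior the constrained fitting aims for.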

Online Tracking

Recent advances in the understanding of animal locomotion have proven it to be a key element of many fields in biology, motion science, and robotics. For the analysis of walking animals, high-speed X-ray videography is employed. For a biological evaluation of these X-ray sequences, anatomical landmarks have to be located in each frame. However, due to the motion of the animals, severe occlusions complicate this task and standard tracking methods cannot be applied. We present a robust tracking approach which is based on the idea of dividing a template into sub-templates to overcome occlusions. The difference to other sub-template approaches is that we allow soft decisions for the fusion of the single hypotheses, which greatly benefits tracking stability. We also show how anatomical knowledge can be incorporated into the tracking process to further improve performance. Experiments on real datasets show that our method achieves results superior to those of existing robust approaches.
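The soft-decision fusion can be sketched as a score-weighted average of the sub-template hypotheses instead of a hard winner-takes-all vote. The softmax weighting and its temperature parameter below are illustrative assumptions, not the paper's exact fusion rule.

```python
import numpy as np

def soft_fuse(positions, scores, temperature=0.1):
    """Fuse sub-template position hypotheses with soft weights.

    Instead of committing to the single best-scoring hypothesis, every
    hypothesis contributes proportionally to its matching score, so one
    occluded (badly scoring) sub-template cannot derail the estimate.
    A small temperature approaches the hard winner-takes-all decision."""
    scores = np.asarray(scores, dtype=float)
    w = np.exp((scores - scores.max()) / temperature)  # numerically stable softmax
    w /= w.sum()
    return (w[:, None] * np.asarray(positions, dtype=float)).sum(axis=0)
```

For example, with hypotheses at (10, 10) and (10, 11) scoring 0.9 and 0.85, and an occluded outlier at (40, 40) scoring 0.1, the fused estimate stays close to (10, 10): the outlier's weight is negligible, yet the two good hypotheses are blended rather than one being discarded.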

Limb, Joint and Pelvic Kinematic Control in the Quail Coping with Steps Upwards and Downwards
Emanuel Andrada and Oliver Mothes and Heiko Stark and Matthew C. Tresch and Joachim Denzler and Martin S. Fischer and Reinhard Blickhan.
Scientific Reports. 12 (1) : 15901. 2022. DOI: 10.1038/s41598-022-20247-y
Uncovering Stability Principles of Avian Bipedal Uneven Locomotion
Emanuel Andrada and Oliver Mothes and Dirk Arnold and Joachim Denzler and Martin S. Fischer and Reinhard Blickhan.
26th Congress of the European Society of Biomechanics (ESB). 2021.
One-Shot Learned Priors in Augmented Active Appearance Models for Anatomical Landmark Tracking
Oliver Mothes and Joachim Denzler.
Computer Vision, Imaging and Computer Graphics -- Theory and Applications. Pages 85-104. 2019.
Anatomical Landmark Tracking by One-shot Learned Priors for Augmented Active Appearance Models
Oliver Mothes and Joachim Denzler.
International Conference on Computer Vision Theory and Applications (VISAPP). Pages 246-254. 2017.
Mixed Gaits in Small Avian Terrestrial Locomotion
Emanuel Andrada and Daniel Haase and Yefta Sutedja and John A. Nyakatura and Brandon M. Kilbourne and Joachim Denzler and Martin S. Fischer and Reinhard Blickhan.
Scientific Reports. 5 : 2015. DOI: 10.1038/srep13636
Comparative Large-Scale Evaluation of Human and Active Appearance Model Based Tracking Performance of Anatomical Landmarks in X-ray Locomotion Sequences
Daniel Haase and John A. Nyakatura and Joachim Denzler.
Pattern Recognition and Image Analysis. Advances in Mathematical Theory and Applications (PRIA). 24 (1) : pp. 86-92. 2014.
Automated Approximation of Center of Mass Position in X-ray Sequences of Animal Locomotion
Daniel Haase and Emanuel Andrada and John A. Nyakatura and Brandon M. Kilbourne and Joachim Denzler.
Journal of Biomechanics. 46 (12) : pp. 2082-2086. 2013.
2D and 3D Analysis of Animal Locomotion from Biplanar X-ray Videos Using Augmented Active Appearance Models
Daniel Haase and Joachim Denzler.
EURASIP Journal on Image and Video Processing. 45 : pp. 1-13. 2013.
Fast and Robust Landmark Tracking in X-ray Locomotion Sequences Containing Severe Occlusions
Manuel Amthor and Daniel Haase and Joachim Denzler.
International Workshop on Vision, Modelling, and Visualization (VMV). Pages 15-22. 2012.
Anatomical Landmark Tracking for the Analysis of Animal Locomotion in X-ray Videos Using Active Appearance Models
Daniel Haase and Joachim Denzler.
Scandinavian Conference on Image Analysis (SCIA). Pages 604-615. 2011.
Comparative Evaluation of Human and Active Appearance Model Based Tracking Performance of Anatomical Landmarks in Locomotion Analysis
Daniel Haase and Joachim Denzler.
Open German-Russian Workshop on Pattern Recognition and Image Understanding (OGRW). Pages 96-99. 2011.
Multi-view Active Appearance Models for the X-ray Based Analysis of Avian Bipedal Locomotion
Daniel Haase and John A. Nyakatura and Joachim Denzler.
Annual Symposium of the German Association for Pattern Recognition (DAGM). Pages 11-20. 2011.