Incremental Learning
Team
Daphne Auer, Julia Böhlke, Niklas Penzel, Paul Bodesheim
Motivation
Incremental learning refers to updating the parameters of a recognition model with additional (labeled) training data instead of training a new model from scratch on the extended set of training samples. It is also referred to as continuous or continual learning and is highly relevant for applications in which a stream of incoming data (structured in chunks called experiences) is available and should be exploited to further improve the trained models. It is a key building block of lifelong learning.
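To make the setting concrete, here is a minimal, hypothetical sketch in Python, independent of the methods in the publications below: scikit-learn's SGDClassifier exposes a partial_fit method that updates the current parameters with each new data chunk instead of retraining on all data seen so far. The synthetic stream and the fixed class set are assumptions of the example.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# Hypothetical stream of labeled data chunks ("experiences").
chunks = [(rng.normal(size=(100, 20)), rng.integers(0, 3, size=100))
          for _ in range(5)]
all_classes = np.arange(3)  # partial_fit needs the full class set up front

clf = SGDClassifier(loss="log_loss")
for X_chunk, y_chunk in chunks:
    # Update the existing parameters with the new chunk only,
    # instead of training a new model from scratch.
    clf.partial_fit(X_chunk, y_chunk, classes=all_classes)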
Publications
2017
Large-Scale Gaussian Process Inference with Generalized Histogram Intersection Kernels for Visual Recognition Tasks
Erik Rodner and Alexander Freytag and Paul Bodesheim and Björn Fröhlich and Joachim Denzler.
International Journal of Computer Vision (IJCV). 121 (2) : pp. 253-280. 2017. DOI: 10.1007/s11263-016-0929-y
[bibtex] [pdf] [web] [abstract]
We present new methods for fast Gaussian process (GP) inference in large-scale scenarios including exact multi-class classification with label regression, hyperparameter optimization, and uncertainty prediction. In contrast to previous approaches, we use a full Gaussian process model without sparse approximation techniques. Our methods are based on exploiting generalized histogram intersection kernels and their fast kernel multiplications. We empirically validate the suitability of our techniques in a wide range of scenarios with tens of thousands of examples. Whereas plain GP models are intractable due to both memory consumption and computation time in these settings, our results show that exact inference can indeed be done efficiently. In consequence, we enable every important piece of the Gaussian process framework - learning, inference, hyperparameter optimization, variance estimation, and online learning - to be used in realistic scenarios with more than a handful of data.
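As a rough illustration of the core idea, the following sketch performs exact GP label regression with a generalized histogram intersection kernel in plain NumPy. The Gram matrix is computed naively here (the paper's contribution is precisely to avoid this via fast kernel multiplications), and the exponent beta, the noise level, and the synthetic data are assumptions of the example.

import numpy as np

def ghik(A, B, beta=1.0):
    # Generalized histogram intersection kernel for non-negative features:
    # K(x, z) = sum_d min(x_d**beta, z_d**beta)
    return np.minimum(A[:, None, :] ** beta, B[None, :, :] ** beta).sum(-1)

rng = np.random.default_rng(0)
X = rng.random((50, 16))                    # hypothetical histogram features
Y = np.eye(3)[rng.integers(0, 3, size=50)]  # one-hot labels (label regression)
X_test = rng.random((5, 16))

K = ghik(X, X) + 1e-2 * np.eye(len(X))      # noise term regularizes the system
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))  # alpha = K^{-1} Y

scores = ghik(X_test, X) @ alpha            # posterior mean per class
pred = scores.argmax(axis=1)                # predict the arg-max class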
2016
Lifelong Learning for Visual Recognition Systems
Alexander Freytag. 2016. ISBN 9783843929950
[bibtex] [pdf] [web]
Fine-tuning Deep Neural Networks in Continuous Learning Scenarios
Christoph Käding and Erik Rodner and Alexander Freytag and Joachim Denzler.
ACCV Workshop on Interpretation and Visualization of Deep Neural Nets (ACCV-WS). 2016.
[bibtex] [pdf] [web] [supplementary] [abstract]
The revival of deep neural networks and the availability of ImageNet laid the foundation for recent successes in highly complex recognition tasks. However, ImageNet does not cover all visual concepts of all possible application scenarios. Hence, application experts still record new data constantly and expect it to be used as soon as it becomes available. In this paper, we follow this observation and apply the classical concept of fine-tuning deep neural networks to scenarios where data from known or completely new classes is continuously added. Besides a straightforward realization of continuous fine-tuning, we empirically analyze how the computational burden of training can be further reduced. Finally, we visualize how the network's attention maps evolve over time, which allows for visually investigating what the network has learned during continuous fine-tuning.
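A minimal sketch of such a continuous fine-tuning loop, assuming PyTorch, a toy network, and a hypothetical extend_head helper that grows the output layer when unseen classes arrive; the exact procedure in the paper may differ.

import torch
import torch.nn as nn

class Net(nn.Module):
    # Tiny stand-in for a deep network with a classification head `fc`.
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.fc(self.features(x))

def extend_head(model, n_new):
    # Grow the output layer for new classes while keeping the weights
    # already learned for the known classes (warm start).
    old = model.fc
    new = nn.Linear(old.in_features, old.out_features + n_new)
    with torch.no_grad():
        new.weight[: old.out_features] = old.weight
        new.bias[: old.out_features] = old.bias
    model.fc = new

model = Net(n_classes=5)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3):                 # hypothetical stream of labeled batches
    x, y = torch.randn(16, 32), torch.randint(0, 5, (16,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

extend_head(model, n_new=2)           # data from a new class arrives
opt = torch.optim.SGD(model.parameters(), lr=1e-3)  # rebuild optimizer over the new parameters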
2013
I Want To Know More: Efficient Multi-Class Incremental Learning Using Gaussian Processes
Alexander Lütz and Erik Rodner and Joachim Denzler.
Pattern Recognition and Image Analysis. Advances in Mathematical Theory and Applications (PRIA). 23 (3) : pp. 402-407. 2013.
[bibtex] [pdf]
2011
Efficient Multi-Class Incremental Learning Using Gaussian Processes
Alexander Lütz and Erik Rodner and Joachim Denzler.
Open German-Russian Workshop on Pattern Recognition and Image Understanding (OGRW). Pages 182-185. 2011.
[bibtex] [pdf] [abstract]
One of the main assumptions in machine learning is that sufficient training data is available in advance and batch learning can be applied. However, due to the dynamics of many applications, this assumption breaks down over time in almost all cases. Classifiers therefore have to adapt when new training data from existing or new classes becomes available, or when existing training data is changed or even removed. In this paper, we present a method that allows efficient incremental learning of a Gaussian process classifier. Experimental results show the benefits in terms of computation time compared to building the classifier from scratch.
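The standard trick behind such efficiency gains is to update a factorization of the kernel matrix instead of recomputing it. The sketch below, assuming an RBF kernel, a fixed noise level, and synthetic data (the paper itself may use a different kernel or update scheme), appends one training example to a Cholesky factor in O(n^2) rather than refactorizing in O(n^3).

import numpy as np
from scipy.linalg import solve_triangular

def rbf(A, B, gamma=0.5):
    # Simple RBF kernel, used here only for illustration.
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def chol_append(L, k_old_new, k_new_new, noise=1e-2):
    # Extend the Cholesky factor L of (K + noise*I) by one row/column
    # in O(n^2) instead of refactorizing the whole matrix in O(n^3).
    l = solve_triangular(L, k_old_new, lower=True)
    d = np.sqrt(k_new_new + noise - l @ l)
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = l
    L_new[n, n] = d
    return L_new

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
L = np.linalg.cholesky(rbf(X, X) + 1e-2 * np.eye(20))

x_new = rng.normal(size=(1, 4))
L = chol_append(L, rbf(X, x_new).ravel(), rbf(x_new, x_new).item())

# Sanity check: matches a full refactorization with the new point included.
X = np.vstack([X, x_new])
assert np.allclose(L, np.linalg.cholesky(rbf(X, X) + 1e-2 * np.eye(21)))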