
Abstract: Despite their success in many tasks, deep learning-based models are ill-equipped for incremental learning, i.e., adapting a model, originally trained on a set of classes, to additionally detect objects of new classes, in the absence of the training data of the original classes. They suffer from "catastrophic forgetting"---an abrupt degradation of performance on the original set of classes when the training objective is adapted to the new classes. This phenomenon has been known for over two decades in the context of feedforward fully connected networks, and is now being addressed in the context of modern neural network architectures. In this tutorial, we will provide a comprehensive description of the main categories of incremental learning methods, e.g., those based on a distillation loss, growing the capacity of the network, introducing regularization constraints, or using autoencoders to capture knowledge from the initial training set, as well as recent advances in the context of self-supervised learning.
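To make the first category of methods concrete, below is a minimal sketch of a distillation-based incremental learning step in the spirit of Learning without Forgetting. It is not the tutorial's reference implementation; the model shapes, the temperature, the `lambda_distill` weight, and all function names are illustrative assumptions. The frozen old model's outputs on new-class images serve as soft targets that keep the updated model's predictions on the original classes close to what they were, while a standard cross-entropy loss supervises the new classes.

```python
# Hypothetical sketch (PyTorch assumed) of a distillation-based incremental step.
import torch
import torch.nn.functional as F


def distillation_loss(new_logits_old_classes, old_logits, temperature=2.0):
    """Soft cross-entropy between the frozen old model's predictions (soft targets)
    and the new model's predictions restricted to the original classes."""
    soft_targets = F.softmax(old_logits / temperature, dim=1)
    log_probs = F.log_softmax(new_logits_old_classes / temperature, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()


def incremental_step(new_model, old_model, images, new_labels,
                     num_old_classes, optimizer, lambda_distill=1.0):
    """One training step that uses only new-class data (no stored old-class images)."""
    new_model.train()
    old_model.eval()
    with torch.no_grad():
        old_logits = old_model(images)            # soft targets from the frozen old model

    logits = new_model(images)                    # outputs for old + new classes
    ce = F.cross_entropy(logits, new_labels)      # supervision on the new classes only
    kd = distillation_loss(logits[:, :num_old_classes], old_logits)

    loss = ce + lambda_distill * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage: 10 original classes, 5 new ones, random features as stand-in data.
    old_model = torch.nn.Linear(64, 10)
    new_model = torch.nn.Linear(64, 15)
    with torch.no_grad():                         # warm-start the shared output weights
        new_model.weight[:10].copy_(old_model.weight)
        new_model.bias[:10].copy_(old_model.bias)
    for p in old_model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(new_model.parameters(), lr=0.01)
    x = torch.randn(8, 64)
    y = torch.randint(10, 15, (8,))               # labels drawn from the new classes
    print(incremental_step(new_model, old_model, x, y, num_old_classes=10, optimizer=opt))
```

The weight `lambda_distill` controls the trade-off between learning the new classes and preserving performance on the old ones; the other method families covered in the tutorial (capacity growth, regularization constraints, autoencoders) replace or complement this distillation term.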
Bio: Karteek Alahari is a senior researcher (chargé de recherche in France, equivalent to a tenured associate professor) at Inria. He is based in the Thoth research team at the Inria Grenoble - Rhône-Alpes center. He was previously a postdoctoral fellow in the Inria WILLOW team at the Department of Computer Science at ENS (École Normale Supérieure), after completing his PhD in 2010 in the UK. His current research focuses on the visual understanding problem in the context of large-scale datasets. In particular, he works on learning robust and effective visual representations when only partially-supervised data is available. This includes frameworks such as incremental learning, weakly-supervised learning, and adversarial training. Dr. Alahari's research has been funded by a Google research award, the French national research agency, and industrial grants from Facebook, NaverLabs Europe, and Valeo.