Tinne Tuytelaars: Keep on learning without forgetting
A core assumption behind most machine learning methods is that the training data are representative of the data seen at test time. While this seems almost trivial, it is in fact a particularly challenging condition to meet in real-world applications of machine learning: the world evolves and distributions shift over time in unpredictable ways (think of changing weather conditions, fashion trends, social media hype, wear and tear, etc.). As a consequence, models become outdated and in practice need to be retrained over and over again. A subfield of machine learning known as continual learning aims to address these issues. The goal is to develop learning schemes that can learn from non-i.i.d. data. The challenge is to realise this without storing all the training data (ideally none at all), with fixed memory and model capacity, and without forgetting previously learned concepts. In this talk, I will give an overview of recent work in this direction, with a focus on learning deep models for computer vision.
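To make the setting concrete, here is a minimal sketch (not part of the talk) of sequential-task learning, assuming PyTorch: tasks arrive one after another, data from earlier tasks is no longer available, and a simple L2 penalty toward the previous weights stands in for forgetting-mitigation methods such as EWC. The synthetic make_task helper, the regularisation weight, and all other specifics are hypothetical choices for illustration only.

import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def make_task(shift):
    # Hypothetical synthetic task: Gaussian inputs shifted per task,
    # labelled by a threshold on the first feature.
    x = torch.randn(256, 10) + shift
    y = (x[:, 0] > shift).long()
    return [(x[i:i + 32], y[i:i + 32]) for i in range(0, 256, 32)]

def train_task(batches, old_params=None, reg=100.0, epochs=5):
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(epochs):
        for x, y in batches:
            loss = loss_fn(model(x), y)
            if old_params is not None:
                # Penalise drift away from the weights learned on earlier
                # tasks (a crude stand-in for methods such as EWC).
                for p, p_old in zip(model.parameters(), old_params):
                    loss = loss + reg * (p - p_old).pow(2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()

# Task 1: train from scratch. Task 2: the task-1 data are gone, so we
# anchor the update to a frozen copy of the task-1 weights instead.
train_task(make_task(0.0))
anchor = [p.detach().clone() for p in model.parameters()]
train_task(make_task(2.0), old_params=anchor)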
The lecture will last 45 minutes, followed by questions. Questions should be asked orally, at the end of the lecture.