Project

Continual Learning: how to defy catastrophic forgetting

Lately there have been impressive advances in the fields of machine intelligence and AI. The problem with these recent models is that they are unable to adapt or evolve. This is problematic whenever the model has to learn something new: a new category or label, a new action or behavior. The only way a static model can learn something new is by retraining on all previous data combined with the new data. This means that all training data must be stored and a lot of time is spent retraining the model. Storing data requires memory and may raise privacy concerns, while retraining takes time and may not be feasible on the devices where the model is deployed. These systems also assume the data they are trained on to be i.i.d. (independent and identically distributed). In other words, they assume the training data is a perfect representation of the real world and that two consecutive data points are uncorrelated. A training set that perfectly represents the real world is almost impossible to obtain, and real-world data is very often correlated. A possible solution to all this is continual learning, which aims to learn new tasks from non-i.i.d. data while retaining older knowledge. The loss of this older knowledge is called catastrophic forgetting and is one of the major challenges in the continual learning setting.
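The effect described above can be sketched in a toy experiment (a hypothetical setup, not part of the project itself): a single-weight linear model is trained on task A, then trained only on task B with no access to task A's data, after which its error on task A grows sharply.

```python
# Toy illustration of catastrophic forgetting: a one-parameter model
# y = w * x trained sequentially on two tasks with plain gradient descent.

def train(w, data, lr=0.1, epochs=100):
    """Full-batch gradient descent on the mean squared error of y = w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def mse(w, data):
    """Mean squared error of y = w * x on a dataset of (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # task B: y = -2x

w = 0.0
w = train(w, task_a)
err_a_before = mse(w, task_a)  # near zero: task A has been learned

w = train(w, task_b)           # continue training on task B only
err_a_after = mse(w, task_a)   # large: task A has been forgotten
```

Because the second training phase sees only task B's data, the weight drifts to the value optimal for task B and the knowledge of task A is overwritten. Continual learning methods aim to prevent exactly this, without storing and replaying all of task A's data.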

Date: 1 Oct 2020 → Today
Keywords: Continual Learning, Computer Vision, Machine Learning, Catastrophic Forgetting
Disciplines: Computer vision, Machine learning and decision making
Project type: PhD project