Project

Incrementally learning new classes with generative classification

Learning continually from non-stationary streams of data is a key feature of natural intelligence, but an unsolved problem in deep learning. Especially challenging for deep neural networks is “class-incremental learning”, where a network must learn to distinguish classes not observed together.

In deep learning, the default approach to classification is learning discriminative classifiers. This works well in the i.i.d. setting, when all classes are observed together, but when new classes must be learned incrementally, training discriminative classifiers often requires problematic workarounds such as storing data or generative replay. Here, I propose instead to address class-incremental learning with generative classification.

As a proof of concept, in preliminary work I showed that a naïve generative classifier, with a separate variational autoencoder per class and likelihood estimation through importance sampling, already performs very strongly. To improve the efficiency, scalability and performance of this generative classifier, I propose four further modifications: (1) move the generative modelling objective from the raw inputs to an intermediate network layer; (2) share the encoder network between classes; (3) use fewer importance samples for unlikely classes; and (4) make classification decisions hierarchical. This way I hope to develop generative classification into a practical, efficient and scalable state-of-the-art deep learning method for class-incremental learning.
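The core idea above, one generative model per class, likelihood estimated by importance sampling, and classification by picking the class whose model assigns the input the highest likelihood, can be illustrated with a minimal toy sketch. The code below is not the project's actual architecture: instead of a trained variational autoencoder it uses a one-dimensional Gaussian "decoder" and a hand-crafted "encoder" per class (the names `ToyClassVAE`, `classify`, and all parameter values are illustrative), but the importance-sampling estimator of log p(x) is the same in structure.

```python
import numpy as np

def log_gauss(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

class ToyClassVAE:
    """Stand-in for a per-class VAE: Gaussian prior p(z)=N(0,1),
    Gaussian 'decoder' p(x|z)=N(w*z+b, 1), crude 'encoder' q(z|x)."""

    def __init__(self, w, b, rng):
        self.w, self.b = w, b  # decoder parameters (illustrative)
        self.rng = rng

    def encode(self, x):
        # Crude amortised posterior: invert the decoder mean, fixed std.
        return (x - self.b) / self.w, 0.5

    def log_likelihood(self, x, n_samples=64):
        # Importance-sampling estimate, with samples from q(z|x):
        #   log p(x) ~= log mean_s [ p(x|z_s) p(z_s) / q(z_s|x) ]
        mu_q, sd_q = self.encode(x)
        z = self.rng.normal(mu_q, sd_q, size=n_samples)
        log_w = (log_gauss(x, self.w * z + self.b, 1.0)  # log p(x|z)
                 + log_gauss(z, 0.0, 1.0)                # log p(z)
                 - log_gauss(z, mu_q, sd_q))             # - log q(z|x)
        # log-mean-exp for numerical stability
        m = log_w.max()
        return m + np.log(np.mean(np.exp(log_w - m)))

def classify(x, models):
    """Generative classification: argmax over per-class likelihoods."""
    return int(np.argmax([m.log_likelihood(x) for m in models]))

rng = np.random.default_rng(0)
models = [ToyClassVAE(w=1.0, b=-3.0, rng=rng),  # class 0, data near -3
          ToyClassVAE(w=1.0, b=+3.0, rng=rng)]  # class 1, data near +3
print(classify(-3.1, models), classify(2.8, models))  # prints: 0 1
```

Note how learning a new class needs no interaction with the models of old classes: one simply trains and appends another per-class model, which is what makes this scheme attractive for class-incremental learning.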

Date: 1 Oct 2022 → Today
Keywords: Continual / lifelong learning, Generative classification
Disciplines: Statistics, Adaptive agents and intelligent robotics, Cognitive neuroscience, Artificial intelligence, Knowledge representation and machine learning