

Continual learning in speech processing

Deep neural networks have enabled a major leap forward in speech processing. However, their accuracy still degrades when they are deployed in new domains. This can be addressed by adapting the network on a limited set of in-domain data. Adaptation modifies the network's weights, however, and therefore also changes its performance on the domain(s) it was originally trained on. In the worst case this leads to 'catastrophic forgetting', where the adapted network fails completely on the original task. The research question is how to adapt or extend the network to new domains or tasks such that no degradation occurs on domains it has learned before, while taking practical constraints of limited memory and computational load into account.
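One common family of continual-learning methods counters forgetting by penalising deviation from the weights learned on the original domain, scaled by a per-weight importance estimate. The sketch below illustrates such a quadratic anchor penalty (in the style of Elastic Weight Consolidation) on a toy two-parameter model; all names, losses, and importance values are illustrative assumptions, not part of this project:

```python
import numpy as np

def anchor_penalty(theta, theta_old, importance, lam=1.0):
    """Quadratic penalty that anchors the adapted weights `theta`
    to the original-domain weights `theta_old`, scaled per weight
    by `importance` (e.g. a diagonal Fisher estimate) and a global
    strength `lam`."""
    return lam * float(np.sum(importance * (theta - theta_old) ** 2))

def total_loss(theta, theta_old, importance, new_domain_loss, lam=1.0):
    """New-domain loss plus the forgetting penalty."""
    return new_domain_loss(theta) + anchor_penalty(theta, theta_old, importance, lam)

# Toy setup: original weights and made-up importance values.
theta_old = np.array([1.0, -0.5])
importance = np.array([10.0, 0.1])  # first weight matters far more

# A made-up new-domain loss that pulls both weights towards 0.
new_loss = lambda th: float(np.sum(th ** 2))

# Plain adaptation would drive both weights to 0 and 'forget';
# the penalty keeps the important first weight near its old value.
theta = theta_old.copy()
for _ in range(200):
    eps = 1e-5
    grad = np.array([
        (total_loss(theta + eps * np.eye(2)[i], theta_old, importance, new_loss)
         - total_loss(theta - eps * np.eye(2)[i], theta_old, importance, new_loss))
        / (2 * eps)
        for i in range(2)
    ])
    theta -= 0.01 * grad

# The heavily weighted parameter stays close to its original value
# (~0.91 vs 1.0), while the unimportant one moves towards the
# new-domain optimum near 0.
print(theta)
```

This is only one point in the design space: alternatives include replaying stored examples from earlier domains or adding new domain-specific parameters, each trading off the memory and compute constraints mentioned above differently.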

Date: 31 Jul 2020 → Today
Keywords: Deep Neural Networks, Speech Processing, Speech Recognition, Natural Language Processing, Adaptation, Transfer Learning
Disciplines: Audio and speech processing, Audio and speech computing, Natural language processing
Project type: PhD project