
Project

Deep learning for EEG and audio processing in neuro-steered hearing aids

The healthy human auditory system is able to attend to a particular speaker of interest in a multi-speaker scenario. Current hearing aids, however, cannot sufficiently mimic this ability, with possibly far-reaching consequences for hearing-impaired people, such as social isolation. For effective noise suppression and amplification of the correct speaker, it is crucial to detect which of multiple speakers should be attended to. By relating electroencephalography (EEG) signals to speech signals, it has been demonstrated that such decoding from neural activity is possible. To perform auditory attention decoding (AAD) in so-called cocktail-party scenarios, we aim to employ adaptive machine learning methods. In contrast to current state-of-the-art decoders, investigating a variety of deep neural network architectures will likely improve decoding accuracy and lead to better generalizability of decoders across subjects. For real-life situations, it is furthermore indispensable to track fast switches in attention. Fewer pre-processing steps and the high flexibility of neural networks will ultimately lead to an end-to-end brain-computer interface setup for real-time (closed-loop) attention decoding with EEG.
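
For context, the sketch below illustrates the classical linear stimulus-reconstruction baseline that such deep-learning decoders aim to improve on: a "backward" decoder reconstructs the attended speech envelope from EEG, and the candidate speaker whose envelope correlates best with the reconstruction is decoded as attended. All variable names, shapes, and the toy data are illustrative assumptions, not the project's actual method.

```python
# Minimal sketch of correlation-based auditory attention decoding (AAD):
# a linear backward decoder maps EEG to the attended speech envelope, and
# the speaker with the highest correlation is declared attended.
import numpy as np


def fit_backward_decoder(eeg, attended_env, ridge=1e-3):
    """Ridge-regularised least-squares decoder from EEG (time x channels) to envelope."""
    gram = eeg.T @ eeg + ridge * np.eye(eeg.shape[1])
    return np.linalg.solve(gram, eeg.T @ attended_env)


def decode_attention(eeg, decoder, env_a, env_b):
    """Return 0 if speaker A is decoded as attended, 1 if speaker B."""
    reconstruction = eeg @ decoder
    corr_a = np.corrcoef(reconstruction, env_a)[0, 1]
    corr_b = np.corrcoef(reconstruction, env_b)[0, 1]
    return 0 if corr_a >= corr_b else 1


# Toy example with synthetic data (64-channel EEG, 10 s at 64 Hz).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((640, 64))
env_attended = eeg @ rng.standard_normal(64)   # envelope linearly encoded in the EEG
env_unattended = rng.standard_normal(640)
decoder = fit_backward_decoder(eeg, env_attended)
print(decode_attention(eeg, decoder, env_attended, env_unattended))  # expected: 0
```

In practice this baseline needs long decision windows to be reliable, which is exactly the limitation that end-to-end neural-network decoders, as described above, are intended to address for fast attention switches.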

Date: 17 Jul 2020 → 15 Oct 2021
Keywords: auditory attention decoding (AAD), deep learning
Disciplines: Computational biomodelling and machine learning, Audiology, Biomedical signal processing
Project type: PhD project