
Publication

Linear versus deep learning methods for noisy speech separation for EEG-informed attention decoding

Journal Contribution - Journal Article

OBJECTIVE: A hearing aid's noise reduction algorithm cannot infer to which speaker the user intends to listen. Auditory attention decoding (AAD) algorithms make it possible to infer this information from neural signals, leading to the concept of neuro-steered hearing aids. We aim to evaluate and demonstrate the feasibility of AAD-supported speech enhancement in challenging noisy conditions based on electroencephalography recordings.
APPROACH: AAD performance with linear versus deep neural network (DNN)-based speaker separation was evaluated on same-gender speaker mixtures across three speaker positions and three noise conditions.
MAIN RESULTS: AAD results based on the linear approach were found to be at least on par with, and sometimes better than, purely DNN-based approaches in terms of AAD accuracy in all tested conditions. However, when the DNN was used to support a linear data-driven beamformer, a performance improvement over the purely linear approach was obtained in the most challenging scenarios. The use of multiple microphones was also found to improve speaker separation and AAD performance over single-microphone systems.
SIGNIFICANCE: Recent proof-of-concept studies in this context each focus on a different method in a different experimental setting, which makes them hard to compare. Furthermore, they are tested in highly idealized experimental conditions that are still far from a realistic hearing aid setting. This work provides a systematic comparison of linear and non-linear neuro-steered speech enhancement models, as well as a more realistic validation in challenging conditions.
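The abstract does not spell out the decoding pipeline, but a common linear stimulus-reconstruction scheme from the AAD literature illustrates the back-end step: a regularized least-squares decoder reconstructs the attended speech envelope from time-lagged EEG, and attention is assigned to whichever separated speaker's envelope correlates best with the reconstruction. The sketch below is an illustrative assumption, not the paper's exact pipeline; all function names and the ridge regularization choice are hypothetical.

```python
# Minimal sketch of correlation-based auditory attention decoding (AAD).
# This is a generic stimulus-reconstruction scheme, not necessarily the
# method used in the paper; names and parameters are illustrative.
import numpy as np

def build_lagged_eeg(eeg, n_lags):
    """Stack time-lagged copies of the EEG channels (samples x channels)."""
    n_samples, n_channels = eeg.shape
    lagged = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return lagged

def train_decoder(eeg, attended_env, n_lags, reg=1e3):
    """Least-squares decoder mapping lagged EEG to the attended speech
    envelope, with ridge regularization (a common, hypothetical choice)."""
    X = build_lagged_eeg(eeg, n_lags)
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ attended_env)

def decode_attention(eeg, env1, env2, decoder, n_lags):
    """Correlate the EEG-reconstructed envelope with each candidate
    (separated) speaker envelope; return the index of the better match."""
    recon = build_lagged_eeg(eeg, n_lags) @ decoder
    c1 = np.corrcoef(recon, env1)[0, 1]
    c2 = np.corrcoef(recon, env2)[0, 1]
    return 0 if c1 > c2 else 1
```

In this view, the linear or DNN-based speaker separation front-end compared in the paper supplies the candidate envelopes (env1, env2); AAD accuracy then depends on how cleanly those envelopes are extracted from the noisy mixture.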
Journal: Journal of Neural Engineering
ISSN: 1741-2560
Issue: 4
Volume: 17
Publication year: 2020
BOF-keylabel: yes
IOF-keylabel: yes
BOF-publication weight: 1
CSS-citation score: 2
Authors from: Higher Education
Accessibility: Open