
Project

Improved speech understanding in cocktail party scenarios through neuro-steered hearing prostheses

People with hearing impairment often have difficulty understanding speech in noisy environments, which is why hearing aids and cochlear implants are usually equipped with speech enhancement algorithms to suppress background noise. However, in so-called 'cocktail party' scenarios, where multiple speakers talk simultaneously, a fundamental problem arises: how can the algorithm decide which speaker the listener intends to attend to, and which speaker(s) should be treated as noise? The goal of this project is to infer auditory attention from the neural activity of the listener, recorded with electroencephalography (EEG). We will fuse EEG signals with audio signals recorded by a hearing aid's microphone array and analyze them jointly in a multimodal framework to perform auditory attention detection (AAD) and to steer a speech enhancement algorithm that extracts the attended speaker while suppressing interfering speakers and noise. In addition to the signal processing algorithm design, we will perform audiological experiments to identify the boundary conditions under which such a system can operate, and to investigate how the listening conditions influence attention-related neural markers, in particular in hearing-impaired subjects. Using a real-time closed-loop setup, we will also investigate neural feedback effects. The ultimate goal is a real-time neuro-steered speech enhancement algorithm in which the user can switch between multiple speakers by merely attending to one of them.
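The abstract does not specify how AAD itself would be performed; a widely used baseline in the EEG literature is linear stimulus reconstruction, in which a decoder trained by regularized least squares reconstructs the attended speech envelope from time-lagged EEG, and attention is then decided by correlating that reconstruction with each candidate speaker's envelope. The following is a minimal NumPy sketch of that idea, not the project's actual method; all function names, dimensions, and parameters are illustrative assumptions, and the data are synthetic placeholders.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (samples, channels*lags)."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * C:(lag + 1) * C] = eeg[:T - lag]
    return X

def train_decoder(eeg, attended_env, n_lags=32, reg=1e-3):
    """Ridge-regularized least-squares decoder that reconstructs the
    attended speech envelope from time-lagged EEG."""
    X = lagged(eeg, n_lags)
    A = X.T @ X + reg * np.eye(X.shape[1])   # regularized normal equations
    return np.linalg.solve(A, X.T @ attended_env)

def decode_attention(eeg, env1, env2, w, n_lags=32):
    """Reconstruct the envelope from EEG and pick the speaker whose
    envelope correlates best with the reconstruction."""
    rec = lagged(eeg, n_lags) @ w
    r1 = np.corrcoef(rec, env1)[0, 1]
    r2 = np.corrcoef(rec, env2)[0, 1]
    return (1, r1) if r1 >= r2 else (2, r2)

# Synthetic demo: 16-channel EEG driven by speaker 1's envelope plus noise.
rng = np.random.default_rng(0)
T = 4000
env1, env2 = rng.standard_normal(T), rng.standard_normal(T)
eeg = np.outer(env1, rng.standard_normal(16)) + 0.5 * rng.standard_normal((T, 16))
w = train_decoder(eeg, env1)
print(decode_attention(eeg, env1, env2, w))  # expected: speaker 1
```

In a real system the decision would be made on short sliding windows of EEG and microphone-derived speech envelopes, with the detected speaker fed back to the enhancement algorithm; the batch computation above only illustrates the core correlation-based decision.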

Date: 1 Jan 2018 → 31 Dec 2021
Keywords: Neuro-steering, Hearing prosthesis, Electroencephalography
Disciplines: Otorhinolaryngology, Speech, language and hearing sciences, Biological system engineering, Biomaterials engineering, Biomechanical engineering, Medical biotechnology, Other (bio)medical engineering