
Project

Decoding speech from the brain using deep neural networks

A growing number of hearing-impaired people benefit from hearing aids. Because current behavioural diagnostics of the auditory system are labour-intensive, only a limited number of tests can be conducted per patient, and hearing aids are therefore not sufficiently adapted to individual users.

To address this, we will develop a new measure of brain activity that will allow automatic and fine-grained diagnostics of the auditory system. Subjects will listen to natural speech while we record the electroencephalogram (EEG). Our system will classify which phonemes, syllables, or words from the stimulus are represented in the EEG, building on state-of-the-art deep-neural-network systems for automatic speech recognition. We will then use the percentage of correctly classified EEG segments as a proxy for the function of the auditory system, enabling applications such as the diagnosis of speech and language disorders.
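
As a rough illustration of this classification step, the sketch below maps multi-channel EEG segments to phoneme classes with a small convolutional network and reports the percentage of correctly classified segments. The channel count, segment length, phoneme inventory, and architecture are all assumptions made for illustration, not the project's actual design.

```python
# A minimal sketch, assuming a 64-channel EEG montage, fixed-length
# segments, and a 40-phoneme inventory; none of these choices come
# from the project description.
import torch
import torch.nn as nn

N_CHANNELS = 64    # EEG electrodes (assumed)
SEGMENT_LEN = 128  # samples per EEG segment (assumed)
N_PHONEMES = 40    # size of the phoneme inventory (assumed)

class EEGPhonemeClassifier(nn.Module):
    """Classifies one EEG segment into a phoneme class."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.classifier = nn.Linear(128, N_PHONEMES)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

def classification_accuracy(model, segments, labels):
    """Percentage of correctly classified segments: the proposed
    proxy for auditory function."""
    with torch.no_grad():
        preds = model(segments).argmax(dim=1)
    return 100.0 * (preds == labels).float().mean().item()

# Usage with random stand-in data:
# model = EEGPhonemeClassifier()
# acc = classification_accuracy(model,
#                               torch.randn(8, N_CHANNELS, SEGMENT_LEN),
#                               torch.randint(0, N_PHONEMES, (8,)))
```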

Next, we will generalize the system to decode speech directly from the EEG signal, with applications in brain-computer interfaces.
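
To make the direct-decoding idea concrete, the following sketch regresses continuous speech features from EEG with a recurrent network instead of classifying discrete units. The choice of a GRU, the mel-spectrogram targets, and all dimensions are again assumptions for illustration only.

```python
# A minimal sketch of regressing speech features from an EEG time
# series; the GRU, mel targets, and all dimensions are assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 64  # EEG electrodes (assumed)
N_MELS = 80      # mel-spectrogram bins to reconstruct (assumed)

class EEGToSpeechDecoder(nn.Module):
    """Maps an EEG time series to a sequence of speech feature frames."""
    def __init__(self):
        super().__init__()
        self.temporal = nn.GRU(input_size=N_CHANNELS, hidden_size=256,
                               batch_first=True)
        self.project = nn.Linear(256, N_MELS)

    def forward(self, eeg):  # eeg: (batch, time, channels)
        hidden, _ = self.temporal(eeg)
        return self.project(hidden)  # (batch, time, N_MELS)

# Such a decoder could, for instance, be trained with an MSE loss
# against the spectrogram of the attended speech.
```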

In the process, we will establish a framework for applying deep learning techniques to the temporal analysis of EEG signals, inspired by systems for automatic speech recognition.

Date: 1 Oct 2018 → 4 Dec 2023
Keywords: Deep learning, EEG decoding, hearing, brain-computer interface
Disciplines: Neurosciences, Biological and physiological psychology, Cognitive science and intelligent systems, Developmental psychology and ageing
Project type: PhD project