
Project

Speech brain-computer interface based on human intracranial recordings

Technology that decodes intended speech directly from brain activity could be used to bypass the impaired neural pathways of patients who are unable to vocalize speech. Initial attempts have returned impressive results when decoding brain activity during performed or perceived speech. However, to serve the needs of the speech-impaired, one should aim for non-vocalized speech, also called imagined speech. A promising strategy is to transfer knowledge between speech modalities, as they share brain regions, to arrive at imagined speech decoding. This is also the objective of the proposed project. High-quality neural responses will be recorded directly from the cortical surface using electrocorticography and, unlike previous studies that employed only high-frequency rhythms, the whole frequency spectrum will be used to predict the intended speech signal. A proof-of-concept application will be developed in which imagined speech is decoded and the corresponding audio signal synthesized in real time, thereby closing the loop from intended to generated and perceived speech. If successful, this project can contribute to assistive technology aimed at improving the quality of life of speech-impaired patients.
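
As a rough illustration of this decoding strategy, the sketch below (hypothetical, not the project's actual code) extracts amplitude-envelope features from multiple frequency bands of the ECoG signal, from low frequencies up to the high-gamma range, and fits a simple linear model mapping those features to a speech spectrogram that a vocoder could resynthesize. All channel counts, band edges, sampling rates, and model choices are illustrative assumptions.

    # Hypothetical sketch: full-spectrum ECoG features -> linear speech-spectrogram decoder.
    # Sampling rates, band edges, channel counts, and the regression model are assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from sklearn.linear_model import Ridge

    FS_ECOG = 1000  # assumed ECoG sampling rate (Hz)
    # Whole frequency spectrum, not only the high-gamma band
    BANDS = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 70), (70, 170)]

    def band_envelope(ecog, low, high, fs=FS_ECOG):
        """Band-pass filter each channel and return the analytic amplitude envelope."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, ecog, axis=0)
        return np.abs(hilbert(filtered, axis=0))

    def extract_features(ecog):
        """Stack per-band envelopes into a (n_samples, n_channels * n_bands) matrix."""
        feats = [band_envelope(ecog, lo, hi) for lo, hi in BANDS]
        return np.concatenate(feats, axis=1)

    # Toy data standing in for time-aligned recordings: 60 s of 64-channel ECoG and a
    # 40-bin mel-like spectrogram of the target speech at the same frame rate.
    rng = np.random.default_rng(0)
    ecog = rng.standard_normal((60 * FS_ECOG, 64))
    spectrogram = rng.standard_normal((60 * FS_ECOG, 40))

    X = extract_features(ecog)
    model = Ridge(alpha=1.0).fit(X, spectrogram)   # linear map: neural features -> spectrogram
    predicted_spectrogram = model.predict(X)       # would feed a vocoder for audio synthesis

In a real-time application of this kind, the fitted model would be applied to streaming feature frames and the predicted spectrogram passed to a vocoder for immediate audio playback.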

Date: 26 Oct 2021 → Today
Keywords: electrocorticography, speech brain-computer interface, machine learning
Disciplines: Machine learning and decision making, Audio and speech processing, Pattern recognition and neural networks, Cognitive neuroscience, Neurophysiology
Project type: PhD project