
Project

Decoding imagined speech from electrocorticography

Technology that translates brain activity into speech could be
transformative for patients deprived of it. Studies have reported
impressive results when decoding intracranial recordings, known as
electrocorticography (ECoG), during overt or perceived speech. To
serve the needs of the speech impaired, however, one should aim for
non-vocalized speech, also called imagined or inner speech. Despite
intensive efforts, this remains an elusive challenge.
Several factors, we believe, prevent progress. Current speech
decoders are developed and tested offline, per subject and per speech
mode, because the acutely implanted ECoG electrodes serve a clinical
need and time for experiments is limited.
We propose a new multiway decoder that accounts for the nonlinear
relation between ECoG and speech features, responds quickly, and
facilitates the transfer of model knowledge across speech modes and
subjects. We will then use this decoder, with a prior developed
through model transfer, to provide real-time audible feedback while
the subject attempts imagined speech, which we hypothesize is crucial
for the subject to master imagined speech production. If successful,
this project can contribute to future developments of chronic brain
implants aimed at improving the quality of life of speech-impaired
patients.
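To make the decoding setup concrete, the sketch below illustrates one generic way to map a multiway (channel x spectral-band) ECoG feature tensor to a speech feature with a nonlinear model. All shapes, the simulated data, and the choice of kernel ridge regression are assumptions for illustration only; the project's actual multiway decoder is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 200 analysis windows, each holding an ECoG feature
# tensor of 16 channels x 8 spectral bands, decoded to one speech feature
# (e.g., an acoustic envelope value). Purely simulated data.
n_windows, n_channels, n_bands = 200, 16, 8
X = rng.standard_normal((n_windows, n_channels, n_bands))
# Simulated nonlinear ECoG-to-speech relation plus noise.
y = np.tanh(X[:, 0, :].sum(axis=1)) + 0.1 * rng.standard_normal(n_windows)

# Flatten the multiway (channel x band) features per window.
X_flat = X.reshape(n_windows, -1)

def rbf_kernel(A, B, gamma=0.05):
    # Gaussian kernel; one standard way to capture a nonlinear mapping.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lam*I) alpha = y.
K = rbf_kernel(X_flat, X_flat)
lam = 0.1
alpha = np.linalg.solve(K + lam * np.eye(n_windows), y)

# In-sample prediction; correlation as a crude decoding score
# (illustrative fit only, not a generalization estimate).
y_hat = K @ alpha
score = np.corrcoef(y, y_hat)[0, 1]
print(f"in-sample decoding correlation: {score:.2f}")
```

In practice, a multiway decoder would exploit the tensor structure directly (e.g., via separate channel and band factors) rather than flattening it, and would be evaluated on held-out data; this sketch only shows the basic feature-tensor-to-speech-feature regression step.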

Date: 1 Jan 2022 → Today
Keywords: Speech Brain Computer Interface, Multiway decoding of Electrocorticography recordings, Imagined speech decoding
Disciplines: Interactive and intelligent systems, Neurological and neuromuscular diseases, Machine learning and decision making