
Project

Decoding phonemes from electrocorticography: a cross-modal approach

Technology that decodes speech directly from brain activity could be transformative for the speech-impaired. Several studies have reported impressive results, but the proposed decoders were developed primarily for overt (performed) speech, a faculty these patients lack even though they may still be capable of imagined (covert) speech. If speech decoding is ever to become a viable alternative for them, another strategy will be required. In this project, we aim to investigate whether decoders trained on brain activity during perceived speech (i.e., listening) can be applied to imagined speech. We will record high-quality signals from the cortical surface using electrocorticography implants and exploit the full frequency spectrum to extract key features of the speech signal. Since it is currently unclear how to exploit these signals across speech modalities (i.e., listening, speaking, and imagining), we propose a gradual approach: we will first investigate how to optimally decode speech intention across modalities, and then use this knowledge to decode phonemes of the intended speech signal, both using the full electrocorticography spectrum. If successful, this project can lay the groundwork for future studies aimed at demonstrating speech neuroprosthetics for the speech-impaired.
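
To illustrate the cross-modal strategy described above, the sketch below trains a phoneme decoder on one modality (listening) and evaluates how well it transfers to another (imagined speech). It is a minimal sketch only: the array names, shapes, and the logistic-regression decoder are hypothetical stand-ins (filled here with random data) for the full-spectrum ECoG features and deep-learning models the project envisions.

# Minimal sketch of cross-modal phoneme decoding; all data below is
# synthetic and the feature layout is a hypothetical assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features: one vector per trial, e.g. band power per
# channel across the full ECoG spectrum, plus a phoneme label per trial.
n_trials, n_features, n_phonemes = 200, 64 * 8, 10
X_listen = rng.normal(size=(n_trials, n_features))   # perceived-speech trials
y_listen = rng.integers(0, n_phonemes, size=n_trials)
X_imagine = rng.normal(size=(n_trials, n_features))  # imagined-speech trials
y_imagine = rng.integers(0, n_phonemes, size=n_trials)

# Train the decoder on the perceived-speech modality only...
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoder.fit(X_listen, y_listen)

# ...then measure how the decoder transfers to imagined speech.
transfer_acc = decoder.score(X_imagine, y_imagine)
print(f"cross-modal phoneme accuracy: {transfer_acc:.2f} "
      f"(chance level is about {1 / n_phonemes:.2f})")

On the random data above, transfer accuracy sits at chance; the project's question is whether real full-spectrum ECoG features push it meaningfully above that level.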

Date: 1 Oct 2021 → 15 Oct 2021
Keywords: Brain-computer interfacing (BCI), speech decoder, electrocorticography (ECoG), deep learning
Disciplines: Machine learning and decision making, Biomedical signal processing, Neurophysiology, Electrophysiology