
Project

Representation learning for visual brain-computer interfaces

Brain-computer interfaces (BCIs) promise high-bandwidth human-computer and, ultimately, human-human communication. To that end, brain responses to present or past sensory experiences are decoded. Inspired by recent breakthroughs in decoding responses to speech, this project aims to design a new data-driven methodology to identify and quantify the temporal coupling between natural video footage and the neural responses it elicits. The most popular non-invasive method of capturing postsynaptic potentials is electroencephalography (EEG), because it is inexpensive, portable, and provides excellent temporal resolution for tracking neural responses that are time-locked to a sensory stimulus. However, data scarcity and a low signal-to-noise ratio are major challenges. In addition, traditional BCI paradigms rely on controlled environments and active user participation, making their integration into real-world use cases very difficult.
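As an illustrative sketch only (not the project's actual methodology), one simple way to quantify temporal coupling between a stimulus feature and an EEG channel is a lagged Pearson correlation: shift the EEG relative to the stimulus and find the delay at which they correlate most strongly. The function name and the toy data below are hypothetical:

```python
import numpy as np

def lagged_correlation(stimulus, eeg, max_lag):
    """Correlate a stimulus feature with one EEG channel at integer sample lags.

    A positive lag means the EEG trails the stimulus (neural response delay).
    Returns an array of Pearson correlations, one per lag in 0..max_lag.
    """
    corrs = []
    for lag in range(max_lag + 1):
        s = stimulus[: len(stimulus) - lag]  # stimulus up to the shifted end
        e = eeg[lag:]                        # EEG shifted back by `lag` samples
        corrs.append(np.corrcoef(s, e)[0, 1])
    return np.array(corrs)

# Toy data: a synthetic "EEG" channel that follows the stimulus with a
# 12-sample delay, plus noise (hypothetical numbers for illustration).
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
eeg = np.roll(stim, 12) + 0.5 * rng.standard_normal(2000)

corrs = lagged_correlation(stim, eeg, max_lag=40)
best_lag = int(np.argmax(corrs))  # recovers the 12-sample delay
```

Real pipelines typically replace this per-channel correlation with multivariate methods (e.g., temporal response functions or canonical correlation analysis) that handle many EEG channels and richer stimulus features at once.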

Date: 30 Sep 2022 → Today
Keywords: multimodal representation learning
Disciplines: Human-computer interaction
Project type: PhD project