
Publication

Heard or understood? Neural markers of speech understanding

Book - Dissertation

Can we determine, from someone's brain activity, whether speech is heard or understood? This is the central question of this doctoral thesis. Speech is far more than a random collection of sounds: it consists of specific sounds that make up words, which in turn form sentences conveying a message one can understand. To answer this question, we focused on a phenomenon called neural speech tracking: when someone hears speech, their brain responds in a time-locked fashion to specific characteristics of the incoming speech. Previous neural tracking studies mainly focused on acoustic characteristics of speech, i.e., how the brain responds to variations in the acoustic energy of the speech signal. These studies showed promising results: when speech is presented in increasing background noise, making it harder to understand, acoustic speech tracking decreases. However, acoustic tracking was also observed for an ignored talker and for music, even though these sounds were not understood. In other words, when speech is understood, the listener's brain tracks its acoustics, but the converse does not hold: when acoustic tracking is observed, the speech is not necessarily understood.

To overcome this limitation, we focused on how the brain tracks speech characteristics derived from the content of the speech, i.e., linguistic speech tracking. First, we identified which speech characteristics can be used to assess linguistic speech tracking. Such characteristics must be derived from the content of the speech and must explain patterns in the brain responses that cannot be explained by acoustic characteristics alone. Using these criteria, we identified four good candidates: phoneme surprisal, phoneme entropy, word surprisal, and word frequency.

Subsequently, we verified whether linguistic speech tracking captures speech understanding using two paradigms. In the first paradigm, we artificially manipulated the level of speech understanding by changing the speech rate. Our results confirmed that linguistic tracking captures changes in the level of speech understanding: as the speech rate increases, speech understanding decreases, and so does linguistic tracking. In the second paradigm, we did not manipulate speech understanding artificially; instead, we presented three naturally spoken speech materials: a comprehensible story, an incomprehensible story, and a word list. We observed linguistic tracking only for the comprehensible story. Altogether, this indicates that linguistic tracking captures the effects of speech understanding.

Difficulties with understanding speech often arise in older adults, for example due to age-related hearing loss or brain damage caused by a stroke. We therefore assessed how linguistic speech tracking evolves across the lifespan. Linguistic speech tracking decreased with increasing age, even though the older adults in the study understood the speech. This points to potential methodological issues in robustly assessing linguistic speech tracking in older adults, which require further research. In sum, we critically evaluated and verified whether linguistic speech tracking can indicate speech understanding. Our results suggest that it could be a valuable tool for objectively assessing speech understanding.
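The four linguistic characteristics named above can be made concrete with a small worked example. The sketch below computes word frequency and word surprisal from a toy bigram model, and phoneme surprisal and phoneme entropy from a miniature pronunciation lexicon. The corpus, lexicon, and phoneme notation are illustrative assumptions, not the models or datasets used in the thesis, which would rely on large corpora and lexica.

    # Toy illustration of the four linguistic speech characteristics.
    # The corpus and lexicon are made up for demonstration purposes only.
    import math
    from collections import Counter

    corpus = "the cat sat on the mat the dog sat on the cat".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    total = len(corpus)

    def word_frequency(word):
        """Negative log unigram probability (higher = rarer word)."""
        return -math.log2(unigrams[word] / total)

    def word_surprisal(prev, word):
        """Negative log probability of a word given the previous word."""
        return -math.log2(bigrams[(prev, word)] / unigrams[prev])

    # Phoneme-level features from a small lexicon: given the phonemes heard
    # so far, which words are still possible (the active "cohort")?
    lexicon = {"cat": "k ae t".split(), "cap": "k ae p".split(),
               "dog": "d ao g".split()}

    def cohort(prefix):
        return [w for w, ph in lexicon.items() if ph[:len(prefix)] == prefix]

    def phoneme_surprisal(prefix, phoneme):
        """How unexpected is this phoneme, given the phonemes heard so far?
        Assumes the phoneme is consistent with at least one word."""
        return -math.log2(len(cohort(prefix + [phoneme])) / len(cohort(prefix)))

    def phoneme_entropy(prefix):
        """Uncertainty about the next phoneme, given the cohort so far."""
        words = cohort(prefix)
        nxt = Counter(lexicon[w][len(prefix)] for w in words)
        return -sum((c / len(words)) * math.log2(c / len(words))
                    for c in nxt.values())

    print(word_frequency("cat"), word_surprisal("the", "cat"))
    print(phoneme_surprisal(["k"], "ae"), phoneme_entropy(["k", "ae"]))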
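Neural speech tracking itself is commonly quantified with a forward model (a temporal response function): time-lagged speech features are regressed onto the EEG, and the correlation between predicted and measured EEG on held-out data indicates how strongly the brain tracks those features. The sketch below uses synthetic data, with made-up dimensions and regularisation, to illustrate the key criterion from the abstract: a linguistic feature is informative only if it improves EEG prediction beyond the acoustic envelope alone. This is a minimal illustration of the general technique, not the analysis pipeline of the thesis.

    # Minimal forward-model (TRF) sketch on synthetic data.
    import numpy as np
    from numpy.linalg import solve

    rng = np.random.default_rng(0)
    fs = 64                  # assumed sampling rate (Hz) after downsampling
    n = fs * 120             # two minutes of synthetic data

    # Acoustic feature: a smoothed positive "envelope".
    envelope = np.convolve(np.abs(rng.standard_normal(n)),
                           np.ones(8) / 8, mode="same")

    # Linguistic feature: surprisal pulses at 300 random "word onsets".
    surprisal = np.zeros(n)
    onsets = rng.choice(n, size=300, replace=False)
    surprisal[onsets] = rng.gamma(2.0, 1.0, size=300)

    # Synthetic single-channel "EEG": lagged mixture of both features + noise.
    eeg = (np.roll(envelope, 8) + 0.5 * np.roll(surprisal, 12)
           + rng.standard_normal(n))

    def design(features, lags):
        # Stack time-lagged copies of each feature (np.roll wraps around,
        # which is acceptable for a sketch but not for real recordings).
        return np.column_stack([np.roll(f, l) for f in features for l in lags])

    def fit_trf(features, y, lags, lam=100.0):
        # Ridge-regularised least squares: w = (X'X + lam*I)^-1 X'y
        X = design(features, lags)
        return solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    lags = range(26)         # integration window of 0-400 ms at 64 Hz
    half = n // 2            # naive split: fit on first half, test on second

    for name, feats in [("acoustic only", [envelope]),
                        ("acoustic + linguistic", [envelope, surprisal])]:
        w = fit_trf([f[:half] for f in feats], eeg[:half], lags)
        pred = design([f[half:] for f in feats], lags) @ w
        r = np.corrcoef(pred, eeg[half:])[0, 1]
        print(f"{name:>22}: prediction accuracy r = {r:.3f}")

On this synthetic data the combined model predicts the EEG better than the acoustic-only model, because the simulated EEG genuinely contains a surprisal contribution; the analogous comparison on real EEG is what distinguishes linguistic tracking from purely acoustic tracking.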
Publication year: 2023
Accessibility: Open