
Publication

Simulating speech processing with cochlear implants: How does channel interaction affect learning in neural networks?

Journal Contribution - e-publication

We introduce a novel machine learning approach for investigating speech processing with cochlear implants (CIs), prostheses used to replace a damaged inner ear. Concretely, we use a simple perceptron and a deep convolutional network to classify speech spectrograms that are modified to approximate CI-delivered speech. Implant-delivered signals suffer from reduced spectral resolution, chiefly due to a small number of frequency channels and a phenomenon called channel interaction. The latter involves the spread of information from neighboring channels to similar populations of neurons and can be modeled by linearly combining adjacent channels. We find that early during training, this input modification degrades performance if the networks are first pre-trained on high-resolution speech, i.e., speech with a larger number of channels and without added channel interaction. This suggests that the spectral degradation caused by channel interaction alters the signal to conflict with perceptual expectations acquired from high-resolution speech. We thus predict that a reduction of channel interaction will accelerate learning in CI users who are implanted after having adapted to high-resolution speech during normal hearing. (The code for replicating our experiments is available online: https://github.com/clips/SimulatingCochlearImplants).
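The abstract describes modeling channel interaction by linearly combining adjacent frequency channels of a spectrogram. A minimal NumPy sketch of that idea is shown below; the function name `add_channel_interaction` and the `spread` parameter are illustrative assumptions, not taken from the paper's actual code (see the linked repository for the authors' implementation).

```python
import numpy as np

def add_channel_interaction(spectrogram, spread=0.5):
    """Approximate channel interaction by mixing each frequency channel
    with its immediate neighbors (hypothetical sketch).

    spectrogram: 2-D array of shape (n_channels, n_frames)
    spread: assumed weight given to each neighboring channel
    """
    n_channels = spectrogram.shape[0]
    mixed = np.zeros_like(spectrogram, dtype=float)
    for c in range(n_channels):
        acc = spectrogram[c].astype(float).copy()
        weight = 1.0
        if c > 0:                      # mix in the channel below
            acc += spread * spectrogram[c - 1]
            weight += spread
        if c < n_channels - 1:         # mix in the channel above
            acc += spread * spectrogram[c + 1]
            weight += spread
        mixed[c] = acc / weight        # renormalize the combined energy
    return mixed
```

Under this sketch, a larger `spread` smears spectral detail across more of each neighbor, approximating stronger channel interaction; `spread=0` leaves the spectrogram unchanged.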
Journal: PLoS ONE
ISSN: 1932-6203
Volume: 14
Pages: 1 - 13
Publication year: 2019
Keywords: A1 Journal article
BOF-keylabel: yes
BOF-publication weight: 2
CSS-citation score: 1
Authors from: Higher Education
Accessibility:Open