
Publication

Comprehensive many-to-many phoneme-to-viseme mapping and its application for concatenative visual speech synthesis

Journal Contribution - Journal Article

The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well established. Viseme labels are conventionally determined using a many-to-one phoneme-to-viseme mapping. However, because of visual coarticulation effects, an accurate mapping from phonemes to visemes should instead be many-to-many. In this research we found that neither standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. We therefore introduce a novel technique for defining a many-to-many phoneme-to-viseme mapping, based on both tree-based and k-means clustering. We show that these many-to-many viseme labels describe the visual speech information more accurately than both phoneme-based and many-to-one viseme-based speech labels. In addition, we found that the many-to-many visemes improve the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was found, both objectively and subjectively, to be of higher quality when the many-to-many visemes were used to describe the speech database and the synthesis targets.
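The core idea behind the abstract (that clustering phoneme instances by their visual realization yields a many-to-many mapping) can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation: it uses scikit-learn's KMeans on randomly generated stand-in feature vectors, omits the paper's tree-based clustering stage, and the phoneme set, feature dimensionality, and choice of K are illustrative assumptions only. It shows how clustering per-instance (context-dependent) visual features lets one phoneme fall into several viseme classes and one viseme class absorb several phonemes.

# Minimal sketch, assuming a hypothetical database of (phoneme, visual
# feature) pairs; the random features stand in for, e.g., mouth-shape
# parameters extracted from a recorded speech corpus. Because each
# *instance* of a phoneme (in its own coarticulation context) is
# clustered independently, the resulting phoneme-to-viseme mapping is
# many-to-many rather than many-to-one.

from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical corpus: one entry per phoneme instance.
phonemes = ["p", "b", "m", "f", "v", "ae", "iy"]
instances = [
    (p, rng.normal(size=4))      # 4-D mouth-shape features (stand-in)
    for p in phonemes
    for _ in range(50)           # 50 coarticulation contexts per phoneme
]

labels = [p for p, _ in instances]
features = np.array([x for _, x in instances])

# Cluster instances into K viseme classes; K is a free parameter that the
# paper would select via its tree-based stage, not reproduced here.
K = 5
viseme_of_instance = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(features)

# Collect the many-to-many mapping: each phoneme maps to the set of viseme
# classes its instances fell into, and vice versa.
phoneme_to_visemes = defaultdict(set)
viseme_to_phonemes = defaultdict(set)
for ph, vis in zip(labels, viseme_of_instance):
    phoneme_to_visemes[ph].add(int(vis))
    viseme_to_phonemes[int(vis)].add(ph)

for ph in phonemes:
    print(f"phoneme /{ph}/ -> visemes {sorted(phoneme_to_visemes[ph])}")

With real visual features, instances of the same phoneme recorded in different contexts typically land in different clusters, which is exactly the many-to-many behavior the abstract argues is needed for accurate segment selection.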
Journal: Speech Communication
ISSN: 0167-6393
Issue: 7-8
Volume: 55
Pages: 857-876
Publication year: 2013
Keywords: Visual speech synthesis, Viseme classification, Phoneme-to-viseme mapping, Context-dependent visemes, Visemes
  • WoS Id: 000321484100004
  • Scopus Id: 84879068811
CSS-citation score: 1