
Publication

Active Appearance Models for Photorealistic Visual Speech Synthesis

Journal contribution - Journal article / Conference contribution

The perceived quality of a synthetic visual speech signal greatly depends on the smoothness of the presented visual articulators. This paper explains how concatenative visual speech synthesis systems can apply active appearance models to achieve smooth and natural visual output speech. By modeling the visual speech contained in the system's speech database, the synthesis of the shape and of the texture of the talking head can be separated. This allows the system to balance the articulation strength of the visual articulators against the smoothness of the visual signal in order to optimize the synthesis. To further improve the synthesis quality, an automatic database normalization strategy has been designed that removes variations from the database that are not related to speech production. As was verified by a perception experiment, this normalization strategy significantly improves the perceived signal quality.
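The abstract relies on the standard active appearance model idea of describing each video frame by two separate statistical models: a PCA model of the shape (landmark coordinates) and a PCA model of the shape-normalized texture (pixel values). The sketch below illustrates that separation; it is not the paper's implementation, and all variable names, dimensions, and helper functions are illustrative assumptions.

```python
# Minimal sketch of separate PCA models for shape and texture, as used in
# active appearance models. Because the two parameter streams are independent,
# a concatenative synthesizer can, for example, smooth the texture trajectory
# while keeping the articulation (shape) trajectory sharp.
import numpy as np

def fit_pca_model(samples, variance_kept=0.98):
    """Return the mean vector and the PCA basis keeping the requested variance."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered data; rows of vt are the principal modes.
    _, sing_vals, vt = np.linalg.svd(centered, full_matrices=False)
    var = sing_vals ** 2
    k = np.searchsorted(np.cumsum(var) / var.sum(), variance_kept) + 1
    return mean, vt[:k]

def project(sample, mean, basis):
    """Model parameters for one sample: b = P (x - mean)."""
    return basis @ (sample - mean)

def reconstruct(b, mean, basis):
    """Model instance from parameters: x = mean + P^T b."""
    return mean + basis.T @ b

# Toy data standing in for a speech database: 500 frames with
# 68 two-dimensional landmarks (shape) and 4096 shape-free pixels (texture).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(500, 68 * 2))
textures = rng.normal(size=(500, 4096))

shape_mean, shape_basis = fit_pca_model(shapes)
tex_mean, tex_basis = fit_pca_model(textures)

# Shape and texture of one frame are encoded and decoded independently.
b_shape = project(shapes[0], shape_mean, shape_basis)
b_tex = project(textures[0], tex_mean, tex_basis)
frame_shape = reconstruct(b_shape, shape_mean, shape_basis)
frame_texture = reconstruct(b_tex, tex_mean, tex_basis)
```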
Journal: Proceedings of Interspeech
ISSN: 1990-9772
Issue: 2010
Pages: 1113-1116
Year of publication: 2010
Keywords: audiovisual speech synthesis, AAM modeling
  • Scopus ID: 79959829115