
Publication

Joint training of non-negative Tucker decomposition and discrete density hidden Markov models

Journal contribution - Journal article

Non-negative Tucker decomposition (NTD) is applied to unsupervised training of discrete density HMMs for the discovery of sequential patterns in data, for segmenting sequential data into patterns, and for recognition of the discovered patterns in unseen data. Structure constraints are imposed on the NTD such that it shares its parameters with the HMM. Two training schemes are proposed: one uses NTD as a regularizer for the Baum-Welch (BW) training of the HMM, the other alternates between initializing the NTD with the BW output and vice versa. On the task of unsupervised spoken pattern discovery from the TIDIGITS database, both training schemes are observed to improve over BW training in terms of pattern purity, accuracy of the segmentation boundaries, and accuracy of speech recognition. Furthermore, we experimentally observe that the alternating training of NTD and BW outperforms the NTD-regularized BW, BW training, and BW training with simulated annealing. © 2012 Elsevier Ltd. All rights reserved.
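To illustrate the NTD building block the abstract refers to, below is a minimal sketch of a non-negative Tucker decomposition for a 3-way tensor, using the common multiplicative-update scheme. This is a generic NTD fit, not the structure-constrained, HMM-coupled variant proposed in the paper; all function names and the choice of updates are illustrative assumptions.

```python
import numpy as np

def tucker_to_tensor(core, A, B, C):
    # Reconstruct X_hat[i,j,k] = sum_{a,b,c} core[a,b,c] A[i,a] B[j,b] C[k,c]
    return np.einsum('abc,ia,jb,kc->ijk', core, A, B, C)

def ntd(X, ranks, n_iter=300, eps=1e-9, seed=0):
    """Fit X ~ core x1 A x2 B x3 C with all parts non-negative,
    via multiplicative updates (a standard NTD algorithm; sketch only)."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.random((s, r)) + 0.1 for s, r in zip(X.shape, ranks))
    core = rng.random(ranks) + 0.1
    for _ in range(n_iter):
        # Update A: mode-1 unfolding X_(1) ~ A @ W, W = unfolding of core x2 B x3 C
        W = np.einsum('abc,jb,kc->ajk', core, B, C).reshape(ranks[0], -1)
        X1 = X.reshape(X.shape[0], -1)
        A *= (X1 @ W.T) / (A @ (W @ W.T) + eps)
        # Update B (mode-2)
        W = np.einsum('abc,ia,kc->bik', core, A, C).reshape(ranks[1], -1)
        X2 = np.moveaxis(X, 1, 0).reshape(X.shape[1], -1)
        B *= (X2 @ W.T) / (B @ (W @ W.T) + eps)
        # Update C (mode-3)
        W = np.einsum('abc,ia,jb->cij', core, A, B).reshape(ranks[2], -1)
        X3 = np.moveaxis(X, 2, 0).reshape(X.shape[2], -1)
        C *= (X3 @ W.T) / (C @ (W @ W.T) + eps)
        # Update core: ratio of X and the current model, both projected by the factors
        num = np.einsum('ijk,ia,jb,kc->abc', X, A, B, C)
        den = np.einsum('abc,ad,be,cf->def', core, A.T @ A, B.T @ B, C.T @ C)
        core *= num / (den + eps)
    return core, A, B, C
```

Because every update multiplies by a ratio of non-negative terms, non-negativity of the core and factors is preserved throughout; in the paper, additional structure constraints tie these quantities to the HMM parameters.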
Journal: Computer Speech and Language
ISSN: 0885-2308
Issue: 4
Volume: 27
Pages: 969 - 988
Year of publication: 2013
BOF key label: yes
IOF key label: yes
BOF publication weight: 1
CSS citation score: 1
Authors from: Higher Education
Accessibility: Open