Joint acoustic and language modeling for speech recognition

Jen-Tzung Chien*, Chuang-Hua Chueh

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

Abstract

In the traditional formulation of speech recognition, the acoustic and linguistic information sources are assumed to be independent of each other: the parameters of the hidden Markov model (HMM) and the n-gram are estimated separately for maximum a posteriori classification. However, speech features and lexical words are inherently correlated in natural language, and estimating the two models in isolation leaves this correlation unexploited. This paper presents joint acoustic and linguistic modeling for speech recognition, in which the acoustic evidence is used in estimating the linguistic model parameters, and vice versa, according to the maximum entropy (ME) principle. Discriminative ME (DME) models are built by exploiting features from competing sentences. Moreover, a mutual ME (MME) model is constructed for the sentence posterior probability, which is maximized to estimate the model parameters while characterizing the dependence between acoustic and linguistic features. An N-best Viterbi approximation is presented for implementing the DME and MME models. Additionally, the new models are combined with high-order feature statistics and word regularities. In the experiments, the proposed methods increase the sentence posterior probability or the model separation. Recognition errors are significantly reduced relative to separately estimated HMM and n-gram models: from 32.2% to 27.4% on the MATBN corpus and from 5.4% to 4.8% on the WSJ corpus (5K condition).
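For orientation, maximum entropy models of this kind generally take a log-linear form for the sentence posterior. The sketch below is the standard ME formulation under that assumption, not the paper's exact parameterization: with feature functions f_i(X, W) over acoustic observations X and word sequence W (e.g., HMM log-likelihoods, n-gram log-probabilities, and joint acoustic-linguistic features) and weights \lambda_i,

    p(W \mid X) = \frac{\exp\bigl(\sum_i \lambda_i f_i(X, W)\bigr)}{\sum_{W' \in \mathcal{N}(X)} \exp\bigl(\sum_i \lambda_i f_i(X, W')\bigr)}

where the normalizer is summed over an N-best hypothesis list \mathcal{N}(X) rather than over all word sequences, which is the role played by the N-best Viterbi approximation mentioned above. A minimal Python sketch of this rescoring step follows; the feature names, values, and weights are hypothetical placeholders, not the paper's actual features or trained parameters.

    import math

    def rescore_nbest(nbest, weights):
        """Log-linear (maximum entropy) rescoring of an N-best list.

        nbest   : list of dicts mapping feature name -> feature value f_i(X, W)
        weights : dict mapping feature name -> weight lambda_i
        Returns the posterior of each hypothesis, with the normalizer
        approximated over the N-best list itself.
        """
        # Unnormalized log-scores: sum_i lambda_i * f_i(X, W)
        scores = [sum(weights[k] * feats[k] for k in weights) for feats in nbest]
        # Softmax over the N-best list, shifted by the max for numerical stability
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    # Hypothetical 3-best list for one utterance (illustrative values only):
    # "acoustic" stands in for an HMM log-likelihood, "lm" for an n-gram log-probability.
    nbest = [
        {"acoustic": -120.5, "lm": -35.2},
        {"acoustic": -121.0, "lm": -33.8},
        {"acoustic": -123.4, "lm": -34.1},
    ]
    weights = {"acoustic": 1.0, "lm": 0.9}
    posteriors = rescore_nbest(nbest, weights)
    best = max(range(len(nbest)), key=lambda i: posteriors[i])

Under this view, training amounts to choosing the weights that raise the posterior of the reference transcription against its N-best competitors, which is the discriminative objective the abstract describes.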

Original language: English
Pages (from-to): 223-235
Number of pages: 13
Journal: Speech Communication
Volume: 52
Issue number: 3
DOIs
State: Published - March 2010

Keywords

  • Conditional random field
  • Discriminative training
  • Hidden Markov model
  • Maximum entropy
  • n-Gram
  • Speech recognition
