Adaptive Language Models for Spoken Dialogue Systems

In this paper, we investigate both generative and statistical approaches to language modeling for spoken dialogue systems. Semantic class-based finite state and n-gram grammars are used to improve coverage and modeling accuracy when little training data is available. We have implemented dialogue-state-specific language model adaptation to reduce perplexity and improve the efficiency of grammars for spoken dialogue systems. A novel algorithm is proposed for combining state-independent n-gram and state-dependent finite state grammars using acoustic confidence scores. Using this combination strategy, a 12% relative word error rate reduction is achieved for certain dialogue states within a travel reservation task. Finally, semantic class multigrams are proposed and briefly evaluated for language modeling in dialogue systems.