A multilingual Automatic Speech Recognition (ASR) engine embedded on Personal Digital Assistant (PDA)

Hong Wen Sie*, Dau Cheng Lyu, Zhong Ing Liou, Ren Yuan Lyu, Yuang Chin Chiang

*Corresponding author for this work

Research output: Contribution to conference › Conference Paper › peer-review

1 Scopus citation

Abstract

In this paper, we describe a multilingual ASR engine embedded on a PDA. The engine supports multiple languages, including Mandarin, Taiwanese, and English, simultaneously, based on a unified three-layer framework and a one-stage search strategy. The framework consists of a unified set of acoustic models shared by all the considered languages, a multi-pronunciation lexicon, and a search network whose nodes represent Chinese characters and English syllables together with their multiple pronunciations. Under this architecture, the system not only reduces memory usage and computational complexity but also handles the case of a single character having multiple pronunciations. Because the computing resources of a PDA are quite limited compared with a PC, much work has been done to alleviate these limitations. Experimental results show that the system performs well, achieving a recognition rate of about 90% on a voice command task with a limited vocabulary.
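The following is a minimal sketch, not taken from the paper, of how the search-network idea described in the abstract could be represented: nodes for Chinese characters or English syllables, each carrying several pronunciations that all refer to one shared acoustic-unit inventory. All names, fields, and phone labels here are hypothetical illustrations.

```python
# Hypothetical sketch of the unified three-layer idea: shared acoustic units,
# a multi-pronunciation lexicon, and a search network whose nodes are
# Chinese characters or English syllables. Names are illustrative only.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Pronunciation:
    language: str            # e.g. "Mandarin", "Taiwanese", "English"
    phone_units: List[str]   # labels into the unified acoustic-model set


@dataclass
class SearchNode:
    symbol: str                                   # a character or English syllable
    pronunciations: List[Pronunciation] = field(default_factory=list)
    successors: List["SearchNode"] = field(default_factory=list)


# One lexicon entry per character/syllable; a character with several readings
# simply carries several Pronunciation records, so a one-stage search can
# score all of them against the same shared acoustic models.
lexicon: Dict[str, SearchNode] = {}


def add_entry(symbol: str, language: str, phones: List[str]) -> SearchNode:
    node = lexicon.setdefault(symbol, SearchNode(symbol))
    node.pronunciations.append(Pronunciation(language, phones))
    return node


# Example: the same character pronounced in Mandarin and Taiwanese,
# plus an English syllable, all sharing one acoustic-unit inventory.
add_entry("你", "Mandarin", ["n", "i3"])
add_entry("你", "Taiwanese", ["l", "i2"])
add_entry("play", "English", ["p", "l", "ey"])
```

In such a layout, pooling all pronunciations under one node and one acoustic-model set is what would keep memory and search cost low on a resource-limited device, which is the motivation stated in the abstract.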

Original language: English
Pages: 174-177
Number of pages: 4
State: Published - 2005
Event: 9th IEEE International Workshop on Cellular Neural Networks and their Applications, CNNA - Hsinchu, Taiwan
Duration: 28 05 2005 - 30 05 2005

Conference

Conference: 9th IEEE International Workshop on Cellular Neural Networks and their Applications, CNNA
Country/Territory: Taiwan
City: Hsinchu
Period: 28/05/05 - 30/05/05
