Combined hand gesture - Speech model for human action recognition

Sheng Tzong Cheng, Chih Wei Hsu*, Jian Pan Li

*Corresponding author for this work

Research output: Contribution to journal › Journal Article › peer-review

11 Scopus citations

Abstract

This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. In addition, the corresponding relationship between state sequences in the hand gesture and speech models is exploited by integrating speech recognition into a multimodal model, thereby improving the accuracy of human behavior recognition. Experimental results show that the proposed method effectively improves human behavior recognition accuracy and demonstrates the feasibility of practical system applications, with the multimodal gesture-speech model achieving higher accuracy than either single-modality model alone.
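The abstract describes fusing a gesture model and a speech model to improve recognition accuracy. As a minimal sketch of one common way to do this, the snippet below performs weighted late fusion of per-class log-likelihoods from two modality-specific models; the function name, the mixing weight `alpha`, and the example scores are hypothetical illustrations, not the paper's actual fusion rule.

```python
import math

def fuse_scores(gesture_scores, speech_scores, alpha=0.5):
    """Late-fusion sketch (hypothetical): combine per-class log-likelihoods
    from a gesture model and a speech model with mixing weight alpha,
    then return the highest-scoring class label."""
    fused = {}
    for label in gesture_scores:
        fused[label] = alpha * gesture_scores[label] + (1 - alpha) * speech_scores[label]
    return max(fused, key=fused.get)

# Example: the gesture model slightly favors "point", but the speech
# model strongly favors "wave"; fusion resolves the ambiguity.
gesture = {"wave": math.log(0.4), "point": math.log(0.6)}
speech = {"wave": math.log(0.9), "point": math.log(0.1)}
print(fuse_scores(gesture, speech, alpha=0.5))  # → wave
```

In practice the two modality models would each be sequence models (e.g., HMM-style state sequences, as the abstract suggests), and `alpha` would be tuned on validation data.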

Original language: English
Pages (from-to): 17098-17129
Number of pages: 32
Journal: Sensors
Volume: 13
Issue number: 12
DOIs
State: Published - 12 Dec 2013
Externally published: Yes

Keywords

  • Hand gesture detection
  • Hand gesture recognition
  • Human behavior
  • Speech recognition

