Humanoid Pose Estimation through Synergistic Integration of Computer Vision and Deep Learning Techniques*

Chaithra Lokasara Mahadevaswamy*, Jacky Baltes, Hsien Tsung Chang

*Corresponding author of this work

Research output: Book/report contribution › Conference contribution › Peer-reviewed

Abstract

This study explores the performance of Convolutional Neural Networks (CNNs) for humanoid robot localization in dynamic environments. Using a front-mounted camera system, initial experiments show CNNs achieving 72% accuracy in position estimation and 92% accuracy in orientation estimation on an 8000-image dataset. These results underscore the effectiveness of CNNs in addressing the challenge of precise robot localization. The study also introduces the YOLO (You Only Look Once) object detection algorithm to further enhance performance. Beyond robotics, this research extends to applications in smartphone navigation, indoor GPS systems, and drone tracking. The paper details the methodologies employed and highlights the transformative potential of integrating CNNs into localization tasks.
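The abstract describes a two-headed prediction task: regressing position and classifying orientation from camera images. A minimal NumPy sketch of that pattern is shown below; the single 3×3 filter, the scalar pooled feature, the 2-D position head, and the 8-bin orientation discretization are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image x with kernel w."""
    kh, kw = w.shape
    h, ww = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, ww))
    for i in range(h):
        for j in range(ww):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def localize(image, kernel, w_pos, w_ori):
    """Toy CNN with two heads: conv -> ReLU -> global average pool,
    then a position-regression head and an orientation-classification head."""
    feat = np.maximum(conv2d(image, kernel), 0.0)  # conv + ReLU
    pooled = feat.mean()                           # global average pooling -> scalar feature
    pos = pooled * w_pos                           # (x, y) position regression
    ori_logits = pooled * w_ori                    # logits over orientation bins
    return pos, int(np.argmax(ori_logits))         # predicted position, orientation class

rng = np.random.default_rng(0)
img = rng.random((16, 16))                         # stand-in for a camera frame
pos, ori = localize(img, rng.random((3, 3)), rng.random(2), rng.random(8))
```

A trained network would learn the kernel and head weights from labeled frames; here they are random, so only the shapes of the outputs are meaningful.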

Original language: English
Title of host publication: ICARM 2024 - 2024 9th IEEE International Conference on Advanced Robotics and Mechatronics
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 771-776
Number of pages: 6
ISBN (electronic): 9798350385724
ISBN (print): 9798350385724
DOIs
Publication status: Published - 2024
Event: 9th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2024 - Tokyo, Japan
Duration: 08 07 2024 → 10 07 2024

Publication series

Name: ICARM 2024 - 2024 9th IEEE International Conference on Advanced Robotics and Mechatronics

Conference

Conference: 9th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2024
Country/Territory: Japan
City: Tokyo
Period: 08/07/24 → 10/07/24

Bibliographical note

Publisher Copyright:
© 2024 IEEE.
