A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions
Cui, Chengkun1,2; Bian, Gui-Bin1; Hou, Zeng-Guang1,2,3; Zhao, Jun4; Zhou, Hao4
Source Publication: IEEE Transactions on Biomedical Circuits and Systems
Publication Date: 2017-08
Volume: 11  Issue: 4  Pages: 889–899
Abstract: In this study, a multimodal fusion framework based on three different modal biosignals is developed to recognize human intentions related to lower limb multi-joint motions that commonly appear in daily life. Electroencephalogram (EEG), electromyogram (EMG) and mechanomyogram (MMG) signals were simultaneously recorded from twelve subjects while they performed nine lower limb multi-joint motions. These multimodal data serve as the inputs of the fusion framework for identifying the different motion intentions. Twelve fusion techniques are evaluated in this framework, and a large number of comparative experiments are carried out. The results show that a support vector machine-based three-modal fusion scheme can achieve average accuracies of 98.61%, 97.78% and 96.85%, respectively, under three different data division forms. Furthermore, statistical tests reveal that this fusion scheme yields a significant accuracy improvement over two-modal fusion and single-modality cases. These promising results indicate the potential of the multimodal fusion framework to facilitate the future development of human-robot interaction for lower limb rehabilitation.
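
As a rough illustration of the kind of scheme the abstract describes, the hypothetical Python sketch below concatenates per-trial EEG, EMG and MMG feature vectors (feature-level fusion) and classifies them with a support vector machine. The feature dimensions, kernel settings, trial counts and simulated data are illustrative assumptions only and are not taken from the paper.

# Minimal sketch (not the authors' code): SVM-based three-modal fusion by
# concatenating per-trial EEG, EMG and MMG feature vectors. All dimensions,
# parameters and the random data below are assumptions for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_classes = 540, 9            # e.g. 9 lower limb multi-joint motions
eeg = rng.normal(size=(n_trials, 32))   # per-trial EEG feature vectors (assumed dimension)
emg = rng.normal(size=(n_trials, 16))   # per-trial EMG feature vectors (assumed dimension)
mmg = rng.normal(size=(n_trials, 16))   # per-trial MMG feature vectors (assumed dimension)
labels = rng.integers(0, n_classes, size=n_trials)

# Feature-level fusion: stack the three modalities into one vector per trial.
fused = np.hstack([eeg, emg, mmg])

# Standardize fused features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, fused, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")

With real, discriminative features this pipeline would report the fusion accuracy per data split; with the random placeholder data above it only demonstrates the structure of the approach, not the results reported in the paper.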

Keywords: Electroencephalogram (EEG); Electromyogram (EMG); Human-Robot Interaction; Mechanomyogram (MMG); Motion Intention Recognition; Multimodal Fusion
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/20974
Collection: State Key Laboratory of Management and Control for Complex Systems / Advanced Robotics
Corresponding Author: Hou, Zeng-Guang
Affiliations:
1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. The CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China
4. Beijing Bo'ai Hospital, China Rehabilitation Research Center, Beijing 100068, China
Recommended Citation
GB/T 7714: Cui, Chengkun, Bian, Gui-Bin, Hou, Zeng-Guang, et al. A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions[J]. IEEE Transactions on Biomedical Circuits and Systems, 2017, 11(4): 889–899.
APA: Cui, Chengkun, Bian, Gui-Bin, Hou, Zeng-Guang, Zhao, Jun, & Zhou, Hao. (2017). A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions. IEEE Transactions on Biomedical Circuits and Systems, 11(4), 889–899.
MLA: Cui, Chengkun, et al. "A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions." IEEE Transactions on Biomedical Circuits and Systems 11.4 (2017): 889–899.
Files in This Item:
File: A Multimodal Framework Based on Integration of Cortical and Muscular Activities for Decoding Human Intentions About Lower Limb Motions.pdf (766 KB), Adobe PDF
DocType: Journal Article
Version: Author Accepted Manuscript
Access: Open Access, CC BY-NC-SA license
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.