A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs
Wu, Zifeng1; Huang, Yongzhen2; Wang, Liang2; Wang, Xiaogang3; Tan, Tieniu2
Journal: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Publication Date: 2017-02-01
Volume: 39; Issue: 2; Pages: 209-226
Article Type: Article
Abstract: This paper studies an approach to gait-based human identification via similarity learning by deep convolutional neural networks (CNNs). With a fairly small set of labeled multi-view human walking videos, we can train deep networks to recognize the most discriminative changes of gait patterns that suggest a change of human identity. To the best of our knowledge, this is the first work based on deep CNNs for gait recognition in the literature. Here, we provide an extensive empirical evaluation across various scenarios, namely cross-view and cross-walking-condition, with different preprocessing approaches and network architectures. The method is first evaluated on the challenging CASIA-B dataset in terms of cross-view gait recognition. Experimental results show that it outperforms the previous state-of-the-art methods by a significant margin. In particular, our method shows advantages when the cross-view angle is large, i.e., no less than 36 degrees. The average recognition rate reaches 94 percent, far better than the previous best result (less than 65 percent). The method is further evaluated on the OU-ISIR gait dataset to test its generalization ability to larger data. OU-ISIR is currently the largest dataset available in the literature for gait recognition, with 4,007 subjects. On this dataset, the average accuracy of our method under identical-view conditions is above 98 percent, and the accuracy for cross-view scenarios is above 91 percent. Finally, the method also performs the best on the USF gait dataset, whose gait sequences are imaged in a real outdoor scene. These results show the great potential of this method for practical applications.
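The abstract describes similarity learning over pairs of gait inputs, typically gait energy images (GEIs). As a rough illustration only, the following is a minimal PyTorch sketch of a siamese-style network that scores whether two GEIs belong to the same subject; the input size (1x126x126), layer widths, and difference-based feature fusion are assumptions for demonstration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GaitSimilarityNet(nn.Module):
    """Sketch of a siamese-style CNN for pairwise gait verification.

    Illustrative only: the convolutional widths, kernel sizes, and
    126x126 GEI input resolution are assumptions, not the
    configuration reported in the paper.
    """
    def __init__(self):
        super().__init__()
        # Shared feature extractor applied to both GEIs.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 256, kernel_size=7), nn.ReLU(),
        )
        # Two logits: [different subject, same subject].
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(2))

    def forward(self, gei_a, gei_b):
        # Extract features with shared weights, then fuse by
        # element-wise difference before classification.
        fa = self.features(gei_a)
        fb = self.features(gei_b)
        return self.classifier(fa - fb)

if __name__ == "__main__":
    net = GaitSimilarityNet()
    probe = torch.rand(4, 1, 126, 126)    # batch of probe GEIs
    gallery = torch.rand(4, 1, 126, 126)  # batch of gallery GEIs
    logits = net(probe, gallery)
    print(logits.shape)  # torch.Size([4, 2])
```

Trained with cross-entropy on same/different pairs, such a network can rank gallery sequences at test time by their "same subject" score; element-wise difference fusion is one simple way to let the classifier focus on feature changes between the two inputs, though the paper compares multiple architectures, so this should not be read as its definitive design.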
Keywords: Deep Learning; CNN; Human Identification; Gait; Cross-View
WOS Headings: Science & Technology; Technology
DOI: 10.1109/TPAMI.2016.2545669
WOS Keywords: CONVOLUTIONAL NETWORKS; RECOGNITION; PERFORMANCE; BIOMETRICS; PROJECTION; IMAGE
Indexed By: SCI
Language: English
Funding: National Basic Research Program of China (2012CB316300); National Natural Science Foundation of China (61525306, 61420106015)
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000395553400001
Citation Statistics
Times Cited (WOS): 360
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/14376
Collection: Center for Research on Intelligent Perception and Computing
Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Ctr Res Intelligent Percept & Comp, Beijing 100190, Peoples R China
3. Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong, Hong Kong, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714: Wu, Zifeng, Huang, Yongzhen, Wang, Liang, et al. A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39(2): 209-226.
APA: Wu, Zifeng, Huang, Yongzhen, Wang, Liang, Wang, Xiaogang, & Tan, Tieniu. (2017). A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 39(2), 209-226.
MLA: Wu, Zifeng, et al. "A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 39.2 (2017): 209-226.
Files in This Item:
File: wuzifeng-A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs.pdf (2447 KB); Document Type: Journal Article; Version: Author's Accepted Manuscript; Access: Open Access; License: CC BY-NC-SA