Frame-GAN: Increasing the frame rate of gait videos with generative adversarial networks
Xue, Wei1,2; Ai, Hong1; Sun, Tianyu2; Song, Chunfeng2,3; Huang, Yan2,3; Wang, Liang2,3
Journal: NEUROCOMPUTING
ISSN: 0925-2312
Publication Date: 2020-03-07
Volume: 380, Pages: 95-104
Corresponding Author: Ai, Hong (aihong@hrbust.edu.cn)
Abstract: Most existing person identification methods, with the exception of gait recognition, require the cooperation of the subjects. Aiming at detecting the pattern of human walking movement, gait recognition takes advantage of time-serial data and can identify a person at a distance. The time-serial data, usually presented in video form, is limited in frame rate, which intrinsically affects the performance of recognition models. In order to increase the frame rate of gait videos, we propose a new kind of generative adversarial network (GAN), named Frame-GAN, to reduce the gap between adjacent frames. Inspired by recent advances in metric learning, we also propose a new, effective loss function named Margin Ratio Loss (MRL) to boost the recognition model. We evaluate the proposed method on the challenging CASIA-B and OU-ISIR gait databases. Extensive experimental results show that the proposed Frame-GAN and MRL are effective. (C) 2019 Elsevier B.V. All rights reserved.
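This record reproduces only the abstract, so neither the Frame-GAN architecture nor the exact form of the Margin Ratio Loss is available here. As a rough, hypothetical sketch of the two ingredients the abstract names — a GAN that synthesizes an intermediate frame between two adjacent gait frames, and a margin-based metric-learning loss — something like the following PyTorch code could serve. Every class, function, and parameter name below (FrameGenerator, FrameDiscriminator, margin_embedding_loss, margin) is an assumption for illustration, not taken from the paper.

```python
# Hypothetical sketch only -- NOT the authors' code. It illustrates the two
# ideas the abstract names: (1) a GAN generator that hallucinates an
# in-between frame from two adjacent gait frames, and (2) a generic
# margin-based metric-learning loss standing in for the paper's Margin
# Ratio Loss, whose exact formula this record does not include.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameGenerator(nn.Module):
    """Maps two adjacent silhouette frames to one synthesized middle frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # silhouettes in [0, 1]
        )

    def forward(self, frame_a, frame_b):
        # Stack the two adjacent frames along the channel axis.
        return self.net(torch.cat([frame_a, frame_b], dim=1))

class FrameDiscriminator(nn.Module):
    """Scores whether a single frame looks like a real gait frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),  # real/fake logit
        )

    def forward(self, frame):
        return self.net(frame)

def margin_embedding_loss(anchor, positive, negative, margin=0.2):
    """Triplet-style margin loss on embeddings (a stand-in, not the MRL)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# One adversarial step on toy 64x64 single-channel "silhouettes".
G, D = FrameGenerator(), FrameDiscriminator()
frame_a, frame_b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
real_mid = torch.rand(4, 1, 64, 64)          # ground-truth middle frames
fake_mid = G(frame_a, frame_b)               # synthesized middle frames

bce = nn.BCEWithLogitsLoss()
d_loss = bce(D(real_mid), torch.ones(4, 1)) + bce(D(fake_mid.detach()), torch.zeros(4, 1))
g_loss = bce(D(fake_mid), torch.ones(4, 1))

# The metric-learning term would act on recognition embeddings; random
# vectors are used here purely to show the call.
anchor, pos, neg = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128)
metric_loss = margin_embedding_loss(anchor, pos, neg)
```

In the paper's setting, the synthesized in-between frames would densify the gait sequence before recognition, and the margin-based term would shape the recognition embedding; the triplet form above is only a generic substitute for the paper's Margin Ratio Loss.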
Keywords: Gait recognition; Generative adversarial networks; Metric learning; Deep learning
DOI: 10.1016/j.neucom.2019.11.015
WOS Keywords: RECOGNITION; IMAGE
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2016YFB1001000]; National Natural Science Foundation of China [61525306, 61633021, 61721004, 61420106015, 61806194]; Capital Science and Technology Leading Talent Training Project [Z181100006318030]; Beijing Science and Technology Project [Z181100008918010]; NVIDIA; NVIDIA DGX-1 AI Supercomputer; CAS-AIR
Funding Organization: National Key Research and Development Program of China; National Natural Science Foundation of China; Capital Science and Technology Leading Talent Training Project; Beijing Science and Technology Project; NVIDIA; NVIDIA DGX-1 AI Supercomputer; CAS-AIR
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000507986500010
Publisher: ELSEVIER
Sub-direction Classification: Image and Video Processing and Analysis
Citation Statistics: Cited 9 times [WOS]
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/29542
Collection: Pattern Recognition Laboratory
Affiliations: 1. Harbin Univ Sci & Technol, Sch Automat, Harbin 150001, Peoples R China
2.Chinese Acad Sci CASIA, CRIPAC, Inst Automat, Natl Lab Pattern Recognit NLPR, Beijing 100190, Peoples R China
3.UCAS, Beijing 100190, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition (NLPR)
Recommended Citation:
GB/T 7714: Xue, Wei, Ai, Hong, Sun, Tianyu, et al. Frame-GAN: Increasing the frame rate of gait videos with generative adversarial networks[J]. NEUROCOMPUTING, 2020, 380: 95-104.
APA: Xue, Wei, Ai, Hong, Sun, Tianyu, Song, Chunfeng, Huang, Yan, & Wang, Liang. (2020). Frame-GAN: Increasing the frame rate of gait videos with generative adversarial networks. NEUROCOMPUTING, 380, 95-104.
MLA: Xue, Wei, et al. "Frame-GAN: Increasing the frame rate of gait videos with generative adversarial networks". NEUROCOMPUTING 380 (2020): 95-104.
Files in This Item: There are no files associated with this item.