A multi-task deep network for person re-identification
Weihua Chen¹; Xiaotang Chen¹; Jianguo Zhang²; Kaiqi Huang¹
2017
Conference: American Association for AI National Conference (AAAI)
Conference date: 2017-2-8
Conference location: San Francisco, USA
Abstract: Person re-identification (ReID) focuses on identifying people across different scenes in video surveillance, and is usually formulated as either a binary classification task or a ranking task in current person ReID approaches. In this paper, we take both tasks into account and propose a multi-task deep network (MTDnet) that exploits their respective advantages and jointly optimizes the two tasks simultaneously for person ReID. To the best of our knowledge, we are the first to integrate both tasks in one network to solve person ReID. We show that our proposed architecture significantly boosts the performance. Furthermore, deep architectures in general require a sufficient amount of data for training, which is usually not available in person ReID. To cope with this situation, we further extend MTDnet and propose a cross-domain architecture that is capable of using an auxiliary set to assist training on small target sets. In the experiments, our approach outperforms most existing person ReID algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS and PRID2011, which clearly demonstrates the effectiveness of the proposed approach.
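The joint objective described in the abstract — combining a binary "same person?" classification task with a ranking task — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the cross-entropy term, hinge-style ranking term, and the weighting parameter `alpha` are assumptions for exposition.

```python
import numpy as np

def binary_cls_loss(score, same):
    # Cross-entropy on a sigmoid score for the binary task:
    # does this image pair show the same person (same = 1) or not (same = 0)?
    p = 1.0 / (1.0 + np.exp(-score))
    return -(same * np.log(p) + (1 - same) * np.log(1 - p))

def ranking_loss(d_pos, d_neg, margin=1.0):
    # Hinge-style ranking term: the distance to a positive (same-identity)
    # image should be smaller than to a negative one by at least a margin.
    return max(0.0, margin + d_pos - d_neg)

def multi_task_loss(score, same, d_pos, d_neg, alpha=0.5):
    # Weighted sum jointly optimizing both tasks; alpha is a hypothetical
    # trade-off weight, not a value taken from the paper.
    return alpha * binary_cls_loss(score, same) + (1.0 - alpha) * ranking_loss(d_pos, d_neg)
```

Optimizing such a combined objective lets the network's shared layers benefit from both supervision signals at once, which is the intuition behind training the two tasks in one network rather than separately.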
Indexed by: EI
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/19645
Collection: Center for Research on Intelligent Perception and Computing
Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Dundee, UK
Recommended citation (GB/T 7714):
Weihua Chen, Xiaotang Chen, Jianguo Zhang, et al. A multi-task deep network for person re-identification[C], 2017.
Files in this item:
publication.pdf (2855 KB), Conference paper, Open Access, CC BY-NC-SA
 

Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.