Deep Spatial and Temporal Network for Robust Visual Object Tracking
Teng, Zhu1; Xing, Junliang2; Wang, Qiang2; Zhang, Baopeng1; Fan, Jianping3
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN: 1057-7149
Year: 2020
Volume: 29, Pages: 1762-1775
Corresponding author: Xing, Junliang (jlxing@nlpr.ia.ac.cn)
Abstract: There are two key components that can be leveraged for visual tracking: (a) object appearances and (b) object motions. Many existing techniques have recently employed deep learning to enhance visual tracking due to its superior representation power and strong learning ability; most of them exploit object appearances, but few exploit object motions. In this work, a deep spatial and temporal network (DSTN) is developed for visual tracking that explicitly exploits both the object representations from each frame and their dynamics across multiple frames in a video, so that it can seamlessly integrate object appearances with their motions to produce compact object appearances and capture their temporal variations effectively. Our DSTN method, which is deployed in a tracking pipeline in a coarse-to-fine form, can perceive subtle differences in the spatial and temporal variations of the target (the object being tracked), and thus benefits from both off-line training and online fine-tuning. We have conducted experiments on four of the largest tracking benchmarks, including OTB-2013, OTB-2015, VOT2015, and VOT2017, and the results demonstrate that our DSTN method achieves competitive performance compared with state-of-the-art techniques. The source code, trained models, and all experimental results of this work will be made publicly available to facilitate further studies on this problem.
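The abstract only sketches the architecture at a high level. As an illustrative assumption (this is not the authors' DSTN code; the class, dimensions, and box head below are hypothetical), a generic spatial-temporal tracker of the kind described can be written as a per-frame CNN that encodes appearance, an LSTM that models the feature dynamics across frames, and a head that regresses a bounding box per frame:

```python
# Illustrative sketch only -- a generic spatial-temporal tracker head,
# not the paper's DSTN. All names and sizes here are assumptions.
import torch
import torch.nn as nn

class SpatialTemporalTracker(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Spatial branch: a small CNN encoding each frame's appearance.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal branch: an LSTM over the per-frame feature sequence.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Head: regress a bounding box (cx, cy, w, h) for every frame.
        self.head = nn.Linear(hidden_dim, 4)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).flatten(1)  # (b*t, feat_dim)
        seq, _ = self.lstm(feats.view(b, t, -1))                # (b, t, hidden_dim)
        return self.head(seq)                                   # (b, t, 4)

model = SpatialTemporalTracker()
boxes = model(torch.randn(2, 5, 3, 64, 64))
print(boxes.shape)  # torch.Size([2, 5, 4])
```

The key design point the abstract emphasizes is that appearance and motion are learned jointly: the LSTM sees the whole feature sequence, so the box prediction at each frame can depend on how the target has varied over previous frames, not only on the current frame.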
Keywords: Target tracking; Visualization; Biological system modeling; Correlation; Training; Benchmark testing; Visual tracking; deep network; spatial-temporal LSTM
DOI: 10.1109/TIP.2019.2942502
Indexed by: SCI
Language: English
Funding projects: Natural Science Foundation of China [61972027]; Natural Science Foundation of China [61672519]; Natural Science Foundation of China [61872035]; Fundamental Research Funds for the Central Universities of China [2019JBM022]
Funders: Natural Science Foundation of China; Fundamental Research Funds for the Central Universities of China
WOS research areas: Computer Science; Engineering
WOS categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS accession number: WOS:000501324900008
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction classification (of the seven major directions): Image and Video Processing and Analysis
Citation statistics
Times cited (WOS): 13
Document type: Journal article
Item identifier: http://ir.ia.ac.cn/handle/173211/29340
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Intelligent Systems and Engineering
Corresponding author: Xing, Junliang
Author affiliations:
1. Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
3. Univ North Carolina Charlotte, Dept Comp Sci, Charlotte, NC 28223 USA
Corresponding author affiliation: National Laboratory of Pattern Recognition
Recommended citation:
GB/T 7714
Teng Z, Xing J, Wang Q, et al. Deep Spatial and Temporal Network for Robust Visual Object Tracking[J]. IEEE Transactions on Image Processing, 2020, 29: 1762-1775.
APA Teng, Z., Xing, J., Wang, Q., Zhang, B., & Fan, J. (2020). Deep spatial and temporal network for robust visual object tracking. IEEE Transactions on Image Processing, 29, 1762-1775.
MLA Teng, Zhu, et al. "Deep Spatial and Temporal Network for Robust Visual Object Tracking." IEEE Transactions on Image Processing, vol. 29, 2020, pp. 1762-1775.
Files in this item:
No files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.