A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship
Shiyu, Hu1,2; Dailing, Zhang1,2; Meiqi, Wu3; Xiaokun, Feng1,2; Xuchen, Li4; Xin, Zhao1,2; Kaiqi, Huang1,2,5
2023-12
Conference Name: The 37th Conference on Neural Information Processing Systems
Conference Date: 2023-12
Conference Place: New Orleans
Abstract

Tracking an arbitrary moving target in a video sequence is the foundation of high-level tasks such as video understanding. Although existing vision-based trackers demonstrate good tracking capabilities on short video sequences, they often perform poorly in complex environments, as represented by the recently proposed global instance tracking task, which consists of longer videos with more complicated narrative content.
Recently, several works have introduced natural language into object tracking, aiming to overcome the limitations of relying on a single visual modality. However, the videos selected in these works are still short sequences with uncomplicated spatio-temporal and causal relationships, and the provided semantic descriptions are too simple to characterize the video content. To address these issues, we (1) propose a new multi-modal global instance tracking benchmark named MGIT. It consists of 150 long video sequences with a total of 2.03 million frames, aiming to fully represent the complex spatio-temporal and causal relationships coupled in longer narrative content. (2) Each video sequence is annotated at three semantic granularities (i.e., action, activity, and story) to model the progressive process of human cognition. We expect this multi-granular annotation strategy to provide a favorable environment for multi-modal object tracking research and long video understanding. (3) We also conduct comparative experiments on existing multi-modal object tracking benchmarks, which not only explore the impact of different annotation methods but also validate that our annotation method is a feasible solution for coupling human understanding into semantic labels. (4) Finally, we perform detailed experimental analyses on MGIT, and we hope the identified performance bottlenecks of existing algorithms can support further research in multi-modal object tracking.
The proposed benchmark, experimental results, and toolkit will be released gradually on  http://videocube.aitestunion.com/.
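
To illustrate the multi-granular annotation strategy described in the abstract, the following is a minimal, hypothetical sketch of how one benchmark sequence with action-, activity-, and story-level descriptions might be represented. The class and field names are illustrative assumptions for this record page only, not the official MGIT toolkit schema.

```python
# Hypothetical sketch (not the official MGIT toolkit schema): one way to
# represent a sequence annotated at the three semantic granularities
# described in the abstract: action, activity, and story.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SemanticAnnotation:
    granularity: str                  # "action", "activity", or "story"
    frame_range: Tuple[int, int]      # first and last frame the description covers
    description: str                  # natural-language description of the target


@dataclass
class TrackingSequence:
    name: str
    num_frames: int
    boxes: List[Tuple[float, float, float, float]]       # per-frame boxes (x, y, w, h)
    annotations: List[SemanticAnnotation] = field(default_factory=list)


# Toy example: one annotation at each granularity for a three-frame clip.
seq = TrackingSequence(
    name="example_sequence",
    num_frames=3,
    boxes=[(10.0, 20.0, 50.0, 80.0)] * 3,
    annotations=[
        SemanticAnnotation("action", (0, 0), "the person raises an arm"),
        SemanticAnnotation("activity", (0, 2), "the person waves to a friend"),
        SemanticAnnotation("story", (0, 2), "the person greets a friend before leaving the square"),
    ],
)
print(len(seq.annotations), "semantic granularities annotated")
```

Coarser granularities span longer frame ranges in this sketch, mirroring the progressive action-to-story structure the abstract describes.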

Indexed By: EI
Sub direction classification: Object detection, tracking and recognition
Planning direction of the national key laboratory: Intelligence capability evaluation
Paper associated data
Document Type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/54537
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Intelligent Systems and Engineering
Affiliation:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
3. School of Computer Science and Technology, University of Chinese Academy of Sciences
4. School of Computer Science, Beijing University of Posts and Telecommunications
5. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Shiyu Hu, Dailing Zhang, Meiqi Wu, et al. A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship[C], 2023.
Files in This Item:
MGIT.pdf (6215 KB), Conference paper, Open access, CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.