Gao, Junyu; Yang, Xiaoshan; Zhang, Tianzhu; Xu, Changsheng
Abstract: Traditional tracking methods (e.g., the L1 tracker) generally adopt raw pixel values as the feature representation and ignore the deep visual features of image patches. In a fixed real-world video scene, we can usually find a region where targets have a clear appearance and are easy to distinguish. Therefore, in this paper, we select such a region in each video to construct a training set for deep model learning. The proposed deep model is a convolutional neural network with two symmetrical paths that share weights; its goal is to reduce the difference between the features of a target outside the region and inside the region. As a result, the learned network enhances the appearance features of targets and benefits trackers that rely on low-level features, such as the L1 tracker. Finally, we use this pre-trained convolutional network in the L1 tracker to extract features for sparse representation, making tracking robust to challenges such as occlusion and illumination changes. We evaluate the proposed approach on 25 challenging videos against 9 state-of-the-art trackers. The extensive results show that the proposed algorithm's average overlap is 0.11 higher, and its average center location error is 1.0 lower, than those of the second-best tracker.
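The training objective described in the abstract, two weight-sharing paths whose output features for the same target are pushed together, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the single linear-plus-ReLU layer, the feature and patch dimensions, and the synthetic patch vectors are hypothetical stand-ins for the paper's deep CNN.

```python
import numpy as np

def shared_net(W, x):
    """One path of the Siamese network: a single linear layer + ReLU.
    Both paths call this same function, so the weights W are shared."""
    return np.maximum(0.0, W @ x)

def pair_loss(W, x_in, x_out):
    """Squared difference between the features of an in-region patch and
    an out-of-region patch of the same target; training would drive this
    toward zero so that features stay discriminative outside the region."""
    diff = shared_net(W, x_in) - shared_net(W, x_out)
    return float(diff @ diff)

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64)) * 0.1        # shared weights (hypothetical sizes)
x_in = rng.standard_normal(64)                 # patch from the easy-to-distinguish region
x_out = x_in + 0.05 * rng.standard_normal(64)  # same target under harder conditions

loss = pair_loss(W, x_in, x_out)   # small but positive before training
```

Once such a network is trained, either path alone can replace the raw pixel features that the L1 tracker uses for sparse representation.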
Gao, Junyu, Yang, Xiaoshan, Zhang, Tianzhu, et al. A Robust Visual Tracking Method Based on Deep Learning[J]. Chinese Journal of Computers (计算机学报), 2016, 39(7): 1419-1434.
Gao, Junyu, Yang, Xiaoshan, Zhang, Tianzhu, & Xu, Changsheng. (2016). A Robust Visual Tracking Method Based on Deep Learning. Chinese Journal of Computers (计算机学报), 39(7), 1419-1434.
Gao, Junyu, et al. "A Robust Visual Tracking Method Based on Deep Learning." Chinese Journal of Computers (计算机学报) 39.7 (2016): 1419-1434.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.