A parallel vision approach to scene-specific pedestrian detection
Zhang, Wenwen1,2; Wang, Kunfeng2; Liu, Yating2,3; Lu, Yue2,3; Wang, Fei-Yue2
Source Publication: NEUROCOMPUTING
ISSN: 0925-2312
Publication Date: 2020-06-21
Volume: 394  Pages: 114-126
Abstract

In recent years, with the development of computing power and deep learning algorithms, pedestrian detection has made great progress. Nevertheless, once a detection model trained on generic datasets (such as PASCAL VOC and MS COCO) is applied to a specific scene, its precision is limited by the distribution gap between the generic data and the specific scene data. It is difficult to train the model for a specific scene due to the lack of labeled data from that scene. Even if we manage to obtain some labeled data from a specific scene, changing environmental conditions make the pre-trained model perform poorly. In light of these issues, we propose a parallel vision approach to scene-specific pedestrian detection. Given an object detection model, we train it in two sequential stages: (1) the model is pre-trained on augmented-reality data to address the lack of scene-specific training data; (2) the pre-trained model is incrementally optimized with newly synthesized data as the specific scene evolves over time. On publicly available datasets, our approach achieves higher precision than models trained on generic data. To tackle dynamically changing scenes, we further evaluate our approach on webcam data collected from Church Street Market Place, and the results are also encouraging. (C) 2019 Elsevier B.V. All rights reserved.
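The two-stage scheme described above lends itself to a compact training loop. Below is a minimal sketch under stated assumptions, not the authors' implementation: it uses a torchvision Faster R-CNN as a stand-in detector, and `initial_synthetic_samples` and `new_synthetic_batches()` are hypothetical placeholders for the paper's augmented-reality data pipeline.

```python
# A minimal sketch of the paper's two-stage training scheme (not the
# authors' code). Detector: torchvision Faster R-CNN as a stand-in.
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class SyntheticPedestrianDataset(Dataset):
    """Hypothetical wrapper around augmented-reality samples: each item
    is (image tensor [C,H,W], target dict with 'boxes' and 'labels')."""
    def __init__(self, samples):
        self.samples = samples
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]

def collate(batch):
    # Targets vary in box count, so keep images and targets as lists
    # instead of stacking them into a single tensor.
    return tuple(zip(*batch))

def train_epochs(model, loader, optimizer, epochs):
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            # In training mode, torchvision detectors return a loss dict.
            losses = model(list(images), list(targets))
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

# Stage 1: pre-train on augmented-reality data for the target scene.
model = fasterrcnn_resnet50_fpn(num_classes=2)  # background + pedestrian
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
stage1_loader = DataLoader(SyntheticPedestrianDataset(initial_synthetic_samples),
                           batch_size=4, shuffle=True, collate_fn=collate)
train_epochs(model, stage1_loader, optimizer, epochs=10)

# Stage 2: incrementally optimize with newly synthesized data as the
# scene evolves, at a lower learning rate to limit forgetting.
for group in optimizer.param_groups:
    group["lr"] = 1e-4
for new_samples in new_synthetic_batches():  # hypothetical data stream
    loader = DataLoader(SyntheticPedestrianDataset(new_samples),
                        batch_size=4, shuffle=True, collate_fn=collate)
    train_epochs(model, loader, optimizer, epochs=1)
```

The reduced learning rate in stage 2 is one common choice for incremental updates; the paper's actual detector and optimization schedule may differ.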

Keywords: Pedestrian detection; Specific scene; Synthetic data; Video surveillance; Parallel vision
DOI: 10.1016/j.neucom.2019.03.095
WOS Keyword: CAMERA NETWORKS
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China [61533019]; National Natural Science Foundation of China [U1811463]
Funding Organization: National Natural Science Foundation of China
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000531730600012
Publisher: ELSEVIER
Sub-direction Classification: Artificial Intelligence + Transportation
Citation statistics
Cited Times (WOS): 7
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/39437
Collection: State Key Laboratory of Management and Control for Complex Systems, Parallel Intelligence Technology and Systems Team
Corresponding Author: Wang, Kunfeng
Affiliation:
1. Xi An Jiao Tong Univ, Sch Software Engn, Xian 710049, Peoples R China
2. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
3. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Zhang, Wenwen, Wang, Kunfeng, Liu, Yating, et al. A parallel vision approach to scene-specific pedestrian detection[J]. NEUROCOMPUTING, 2020, 394: 114-126.
APA: Zhang, Wenwen, Wang, Kunfeng, Liu, Yating, Lu, Yue, & Wang, Fei-Yue. (2020). A parallel vision approach to scene-specific pedestrian detection. NEUROCOMPUTING, 394, 114-126.
MLA: Zhang, Wenwen, et al. "A parallel vision approach to scene-specific pedestrian detection". NEUROCOMPUTING 394 (2020): 114-126.
Files in This Item:
File Name: 1-s2.0-S0925231219308859-main.pdf (4090KB)  DocType: Journal Article  Version: Author's Accepted Manuscript  Access: Open Access  License: CC BY-NC-SA