POD: Practical Object Detection with Scale-Sensitive Network
Peng, Junran1,2,3; Sun, Ming2; Zhang, Zhaoxiang1,3; Tan, Tieniu1,3; Yan, Junjie2
2019
Conference Name | International Conference on Computer Vision
Conference Date | 2019
Conference Venue | South Korea
Abstract | Training with more data has always been the most stable and effective way of improving performance in the deep learning era. As the largest object detection dataset so far, Open Images brings great opportunities and challenges for object detection in both general and sophisticated scenarios. However, owing to its semi-automatic collecting and labeling pipeline, designed to cope with the huge data scale, the Open Images dataset suffers from label-related problems: objects may explicitly or implicitly have multiple labels, and the label distribution is extremely imbalanced. In this work, we quantitatively analyze these label problems and provide a simple but effective solution. We design a concurrent softmax to handle the multi-label problems in object detection and propose a soft-sampling method with a hybrid training scheduler to deal with the label imbalance. Overall, our method yields a dramatic improvement of 3.34 points, leading to the best single model with 60.90 mAP on the public object detection test set of Open Images. Our ensembling result achieves 67.17 mAP, which is 4.29 points higher than the best result of the Open Images public test 2018.
Keywords | Object detection
Indexed By | EI
Language | English
Sub-direction Classification | Object Detection, Tracking and Recognition
Document Type | Conference Paper
Identifier | http://ir.ia.ac.cn/handle/173211/42201
Collection | Center for Research on Intelligent Perception and Computing
Corresponding Author | Zhang, Zhaoxiang
Affiliations | 1. University of Chinese Academy of Sciences 2. SenseTime Group Limited 3. Center for Research on Intelligent Perception and Computing, CASIA
First Author Affiliation | National Laboratory of Pattern Recognition
Corresponding Author Affiliation | National Laboratory of Pattern Recognition
Recommended Citation (GB/T 7714) | Peng, Junran, Sun, Ming, Zhang, Zhaoxiang, et al. POD: Practical Object Detection with Scale-Sensitive Network[C], 2019.
Files in This Item:
File Name/Size | Document Type | Open Access Type | License
pjr_ICCV_POD.pdf (1045KB) | Conference Paper | Open Access | CC BY-NC-SA