Context-Aware Dynamic Feature Extraction for 3D Object Detection in Point Clouds
Tian, Yonglin1,2; Huang, Lichao3; Yu, Hui4; Wu, Xiangbin5; Li, Xuesong2; Wang, Kunfeng6; Wang, Zilei1; Wang, Fei-Yue2
Journal: IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
ISSN: 1524-9050
Publication date: 2021-07-16
Pages: 13
Corresponding author: Wang, Fei-Yue (feiyue.wang@ia.ac.cn)
Abstract: Varying density of point clouds increases the difficulty of 3D detection. In this paper, we present a context-aware dynamic network (CADNet) to capture the variance of density by considering both point context and semantic context. Point-level contexts are generated from original point clouds to enlarge the effective receptive field. They are extracted around the voxelized pillars based on our extended voxelization method and processed with the context encoder in parallel with the pillar features. With a large perception range, we are able to capture the variance of features for potential objects and generate attentive spatial guidance to help adjust the strengths for different regions. In the region proposal network, considering the limited representation ability of traditional convolution, where the same kernels are shared across different samples and positions, we propose a decomposable dynamic convolutional layer to adapt to the variance of input features by learning from the local semantic context. It adaptively generates position-dependent coefficients for multiple fixed kernels and combines them to convolve with local features. Based on our dynamic convolution, we design a dual-path convolution block to further improve the representation ability. We conduct experiments on the KITTI dataset, and the proposed CADNet achieves superior 3D detection performance, outperforming SECOND and PointPillars by a large margin at a speed of 30 FPS.
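The core idea of the decomposable dynamic convolution described in the abstract (position-dependent mixing coefficients generated from local context, applied to a bank of fixed kernels) can be sketched as follows. This is a minimal single-channel NumPy illustration of the general technique, not the authors' implementation; all names (`dynamic_conv2d`, `coef_weights`) and the choice of deriving coefficients from the raw 3x3 patch via a softmax are simplifying assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv2d(feat, kernels, coef_weights):
    """Position-dependent dynamic convolution (sketch).

    feat:         (H, W) single-channel input feature map
    kernels:      (K, 3, 3) bank of fixed basis kernels
    coef_weights: (K, 9) projection mapping the local 3x3 patch
                  (a stand-in for "local semantic context") to K
                  mixing coefficients
    """
    H, W = feat.shape
    pad = np.pad(feat, 1)            # zero-pad so output keeps (H, W)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            # coefficients depend on the local context at (i, j)
            alpha = softmax(coef_weights @ patch.reshape(9))
            # combine the fixed kernels into one position-specific kernel
            kernel = np.tensordot(alpha, kernels, axes=1)
            out[i, j] = np.sum(kernel * patch)
    return out
```

Because the coefficients sum to one, using K identical basis kernels reduces this layer to an ordinary (static) convolution; the expressiveness comes from mixing distinct kernels differently at each position.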
Keywords: Three-dimensional displays; Feature extraction; Convolution; Proposals; Kernel; Laser radar; Semantics; Point clouds; 3D detection; dynamic network; context features
DOI: 10.1109/TITS.2021.3095719
Indexed by: SCI
Language: English
Funding project(s): Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles (ICRI-IACV); National Natural Science Foundation of China [62076020]
Funder(s): Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles (ICRI-IACV); National Natural Science Foundation of China
WoS research areas: Engineering; Transportation
WoS categories: Engineering, Civil; Engineering, Electrical & Electronic; Transportation Science & Technology
WoS accession number: WOS:000732917100001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Subject classification (seven major directions): AI + Transportation
Citation statistics
Times cited: 11 (WoS)
Document type: Journal article
Item identifier: http://ir.ia.ac.cn/handle/173211/46840
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems, Parallel Intelligence Technology and Systems Team
作者单位1.Univ Sci & Technol China, Dept Automat, Hefei 230027, Anhui, Peoples R China
2.Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
3.Horizon Robot, Beijing 100190, Peoples R China
4.Univ Portsmouth, Sch Creat Technol, Portsmouth PO1 2UP, Hants, England
5.Intel Labs, Beijing 100190, Peoples R China
6.Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
First author's affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author's affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Tian, Yonglin, Huang, Lichao, Yu, Hui, et al. Context-Aware Dynamic Feature Extraction for 3D Object Detection in Point Clouds[J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021: 13.
APA Tian, Yonglin., Huang, Lichao., Yu, Hui., Wu, Xiangbin., Li, Xuesong., ... & Wang, Fei-Yue. (2021). Context-Aware Dynamic Feature Extraction for 3D Object Detection in Point Clouds. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 13.
MLA Tian, Yonglin, et al. "Context-Aware Dynamic Feature Extraction for 3D Object Detection in Point Clouds". IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS (2021): 13.
Files in this item:
No files are associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.