Knowledge Commons of Institute of Automation, CAS
Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey
Xie, Zexiao1; Yu, Xiaoxuan1; Gao, Xiang1,2; Li, Kunqian1; Shen, Shuhan2,3,4
Journal | IEEE Transactions on Neural Networks and Learning Systems
ISSN | 2162-237X
Year | 2022
Pages | 1-21
Abstract | Depth completion aims to recover pixelwise depth from incomplete and noisy depth measurements with or without the guidance of a reference RGB image. This task has attracted considerable research interest due to its importance in various computer vision-based applications, such as scene understanding, autonomous driving, 3-D reconstruction, object detection, pose estimation, trajectory prediction, and so on. As the system input, an incomplete depth map is usually generated by projecting the 3-D points collected by ranging sensors, such as LiDAR in outdoor environments, or obtained directly from RGB-D cameras in indoor areas. However, even if a high-end LiDAR is employed, the obtained depth maps are still very sparse and noisy, especially in the regions near the object boundaries, which makes the depth completion task a challenging problem. To address this issue, a few years ago, conventional image processing-based techniques were employed to fill the holes and remove the noise from the relatively dense depth maps obtained by RGB-D cameras, while deep learning-based methods have recently become increasingly popular and inspiring results have been achieved, especially for the challenging situation of LiDAR-image-based depth completion. This article systematically reviews and summarizes the works related to the topic of depth completion in terms of input modalities, data fusion strategies, loss functions, and experimental settings, especially for the key techniques proposed in deep learning-based multiple input methods. On this basis, we conclude by presenting the current status of depth completion and discussing several prospects for its future research directions.
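The abstract notes that the incomplete input depth map is usually generated by projecting 3-D points from a ranging sensor onto the image plane. The following is a minimal illustrative sketch of that projection step, not code from the surveyed paper; the function name, the pinhole intrinsic matrix `K`, and all values are hypothetical, and points are assumed to already be expressed in the camera frame.

```python
import numpy as np

def project_to_sparse_depth(points, K, height, width):
    """Project camera-frame 3-D points into a sparse depth map (zeros = no data)."""
    depth = np.zeros((height, width), dtype=np.float32)
    pts = points[points[:, 2] > 0]          # keep only points in front of the camera
    uv = (K @ pts.T).T                      # homogeneous pixel coordinates [u*z, v*z, z]
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    # When several points land on the same pixel, keep the nearest one.
    for ui, vi, zi in zip(u[inside], v[inside], pts[inside, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth

# Toy example: one point 5 m ahead of a 100x100 camera with focal length 50 px.
K = np.array([[50.0, 0.0, 50.0],
              [0.0, 50.0, 50.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.0, 0.0, 5.0]])        # projects onto the principal point
sparse = project_to_sparse_depth(points, K, 100, 100)
```

Even with dense LiDAR sweeps, most pixels of `sparse` remain zero, which is exactly the sparsity that depth completion methods are designed to fill in.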
Keywords | Data fusion ; Deep learning ; Depth completion ; Loss function ; RGB-D and LiDAR data
DOI | 10.1109/TNNLS.2022.3201534 |
Keywords (WoS) | SINGLE IMAGE ; LIDAR DATA ; NETWORK ; FUSION ; FEATURES ; MODEL
Indexed by | SCI
Language | English
Funding projects | National Science Foundation of China [62003319] ; National Science Foundation of China [42076192] ; National Science Foundation of China [62076026] ; Shandong Provincial Natural Science Foundation [ZR2020QF075]
Funders | National Science Foundation of China ; Shandong Provincial Natural Science Foundation
WoS research areas | Computer Science ; Engineering
WoS subject categories | Computer Science, Artificial Intelligence ; Computer Science, Hardware & Architecture ; Computer Science, Theory & Methods ; Engineering, Electrical & Electronic
WoS record number | WOS:000852238700001
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Representative paper | Yes
Sub-direction classification (seven major research directions) | 3-D Vision
State Key Laboratory planned research direction | Multidimensional environment perception
Paper-associated dataset requiring deposit | No
Document type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/50114
Collections | CAS Engineering Laboratory for Intelligent Industrial Vision Equipment, Precision Sensing and Control ; State Key Laboratory of Multimodal Artificial Intelligence Systems, Robot Vision ; CAS Engineering Laboratory for Intelligent Industrial Vision Equipment
Corresponding authors | Gao, Xiang; Shen, Shuhan
Affiliations | 1. Ocean Univ China, Coll Engn, Qingdao 266100, Peoples R China ; 2. Chinese Acad Sci, Inst Automat CASIA, SenseTime Res Grp, Beijing 100190, Peoples R China ; 3. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China ; 4. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Corresponding author affiliations | Institute of Automation, Chinese Academy of Sciences ; National Laboratory of Pattern Recognition
Recommended citation (GB/T 7714) | Xie, Zexiao, Yu, Xiaoxuan, Gao, Xiang, et al. Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022: 1-21.
APA | Xie, Zexiao, Yu, Xiaoxuan, Gao, Xiang, Li, Kunqian, & Shen, Shuhan. (2022). Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey. IEEE Transactions on Neural Networks and Learning Systems, 1-21.
MLA | Xie, Zexiao, et al. "Recent Advances in Conventional and Deep Learning-Based Depth Completion: A Survey." IEEE Transactions on Neural Networks and Learning Systems (2022): 1-21.
Files in this item | No files are associated with this item.
Unless otherwise stated, all content in this repository is protected by copyright, with all rights reserved.