Knowledge Commons of Institute of Automation, CAS
IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation
Authors | Feng, Cheng1; Chen, Zhen1,2; Zhang, Congxuan2,3; Hu, Weiming4
Journal | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
ISSN | 1051-8215
Publication Year | 2024
Volume | 34
Issue | 1
Pages | 329-341
Corresponding Authors | Chen, Zhen (dr_chenzhen@163.com); Zhang, Congxuan (zcxdsg@163.com)
Abstract | Self-supervised monocular depth estimation has been a challenging task in computer vision for a long time, and it relies on only monocular or stereo video for its supervision. To address the challenge, we propose a novel multi-frame monocular depth estimation method called IterDepth, which is based on an iterative residual refinement network. IterDepth extracts depth features from consecutive frames and computes a 3D cost volume measuring the difference between current and previous features transformed by PoseCNN (pose estimation convolutional neural network). We reformulate depth prediction as a residual learning problem, revamping the dominating depth regression to enable high-accuracy multi-frame monocular depth estimation. Specifically, we design a gated recurrent depth fusion unit that seamlessly blends depth features from the cost volume, image features, and the depth prediction. The unit updates the hidden states and refines the depth map through iterative refinement, achieving more accurate predictions than existing methods. Our experiments on the KITTI dataset demonstrate that IterDepth is 7× faster in terms of FPS (frames per second) than the recent state-of-the-art DepthFormer model with competitive performance. We also test IterDepth on the Cityscapes dataset to showcase its generalization capability in other real-world environments. Moreover, IterDepth can balance accuracy and computational efficiency by adjusting the number of refinement iterations and performs competitively with other CNN-based monocular depth estimation approaches. Source code is available at https://github.com/PCwenyue/IterDepth-TCSVT.
Keywords | Estimation; Iterative methods; Cameras; Task analysis; Feature extraction; Decoding; Training; Monocular depth estimation; iterative refinement; self-supervised learning; deep learning
DOI | 10.1109/TCSVT.2023.3284479 |
Indexed In | SCI
Language | English
Funding Project | National Key Research and Development Program of China
Funder | National Key Research and Development Program of China
WOS Research Area | Engineering
WOS Category | Engineering, Electrical & Electronic
WOS Accession Number | WOS:001138814400027
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Document Type | Journal Article
Item Identifier | http://ir.ia.ac.cn/handle/173211/55565
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems
Affiliations | 1. Beihang Univ, Sch Instrumentat & Optoelect Engn, Beijing 100191, Peoples R China; 2. Nanchang Hangkong Univ, Key Lab Nondestruct Testing, Minist Educ, Nanchang 330063, Peoples R China; 3. Nanchang Hangkong Univ, Sch Measuring & Opt Engn, Minist Educ, Nanchang 330063, Peoples R China; 4. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China; 5. Nanchang Hangkong Univ, Sch Measuring & Opt Engn, Nanchang 330063, Peoples R China
Recommended Citation (GB/T 7714) | Feng, Cheng, Chen, Zhen, Zhang, Congxuan, et al. IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(1): 329-341.
APA | Feng, Cheng, Chen, Zhen, Zhang, Congxuan, Hu, Weiming, Li, Bing, & Lu, Feng. (2024). IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 34(1), 329-341.
MLA | Feng, Cheng, et al. "IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.1 (2024): 329-341.
Files in This Item | No files are associated with this item.
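The refinement loop described in the abstract, in which a gated recurrent fusion unit blends cost-volume features, image features, and the current depth prediction, then emits a residual correction rather than regressing depth directly, can be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the paper's unit is convolutional and operates on full depth maps, while here a dense GRU acts on a single flattened feature vector, and all names (`gru_step`, `refine_depth`, the random weights) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 16  # hidden-state size (per pixel; flattened to one vector for brevity)
F = 24  # fused input size: cost-volume feats + image feats + current depth

# Hypothetical small random weights standing in for learned parameters.
Wz, Wr, Wh = (rng.standard_normal((H, H + F)) * 0.1 for _ in range(3))
Wd = rng.standard_normal(H) * 0.1  # maps hidden state to a residual depth update

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    """One gated recurrent fusion step: update gate z, reset gate r, candidate state."""
    hx = np.concatenate([h, x])
    z = sigmoid(Wz @ hx)
    r = sigmoid(Wr @ hx)
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, x]))
    return (1 - z) * h + z * h_tilde

def refine_depth(depth0, cost_feat, img_feat, iters=3):
    """Iteratively predict residual corrections to an initial depth estimate."""
    h = np.zeros(H)
    depth = depth0
    for _ in range(iters):
        # Fuse cost-volume features, image features, and the current prediction.
        x = np.concatenate([cost_feat, img_feat, [depth]])
        h = gru_step(h, x)
        depth = depth + float(Wd @ h)  # residual update, not direct regression
    return depth

depth = refine_depth(10.0, rng.standard_normal(12), rng.standard_normal(11))
```

Varying `iters` mirrors the accuracy/efficiency trade-off the abstract mentions: each extra iteration reuses the same weights to apply a further residual correction.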
Except where otherwise noted, all content in this system is protected by copyright, with all rights reserved.