|Place of Conferral||Institute of Automation, Chinese Academy of Sciences|
|Keyword||Image-based 3D modeling; fusion of aerial and ground images; fusion of image and laser data|
Improving scene completeness and computational efficiency has long been a key challenge and desired objective in large-scale 3D scene reconstruction, owing to various adverse factors such as camera trajectory and viewing-angle changes, environmental illumination conditions and occlusions, and scene structure complexity and texture variation. This thesis focuses on how to use multi-source data, in particular aerial images, ground images, and laser point clouds, to enhance scene completeness and speed up the reconstruction process. The main results and contributions of the thesis are four-fold:
1. In large-scale architectural scene 3D reconstruction, models generated from ground images are usually incomplete, while models generated from aerial images lack fine detail on building facades. To tackle this problem, a dense-point-cloud-based method for registering aerial and ground point clouds is proposed for complete modeling, which proceeds in a coarse-to-fine manner. To improve registration accuracy and efficiency, the method synthesizes aerial-view images by projecting the dense ground point cloud. During registration, it further improves image selection, synthesis, and matching so as to generate evenly distributed synthetic images with low noise and to obtain more inlier point matches. Experimental results demonstrate that the proposed method achieves accurate and efficient registration between aerial and ground models.
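The coarse stage of such aerial-ground registration typically aligns the two reconstructions by a 7-DoF similarity transform (scale, rotation, translation) estimated from 3D point matches. As an illustration only, and not the thesis's exact algorithm, the sketch below recovers such a transform from matched point pairs with the classic closed-form Umeyama solution; all function and variable names are hypothetical.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Estimate s, R, t such that dst ~= s * R @ src_i + t,
    given matched 3D points src, dst of shape (N, 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets.
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det(R) = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)      # variance of the source set
    s = np.trace(np.diag(D) @ S) / var_s    # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In a full coarse-to-fine pipeline this closed-form estimate would serve as the coarse alignment, typically inside a RANSAC loop over putative matches, before a finer refinement step.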
2. The dense-point-cloud-projection-based registration method has several drawbacks: (1) relatively low efficiency; (2) synthetic images with high noise levels and inevitable missing pixels; and (3) a similarity transformation cannot model the scene drifting that occurs in image-based modeling. To address these issues, a sparse-point-cloud-based method for merging aerial and ground point clouds is proposed. In this method, aerial-view images are synthesized from the homographies induced by a sparse mesh, and the aerial and ground point clouds are merged via bundle adjustment, which largely alleviates the scene drifting problem. In addition, point match outliers between aerial and synthetic images are filtered via geometric consistency checking and geometric model verification. Experimental results demonstrate that the proposed method outperforms competing methods in point cloud merging accuracy and efficiency.
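For the mesh-induced view synthesis described above, each roughly planar facet induces a homography between a source view and the synthesized view. As a hedged illustration of that standard two-view relation rather than the thesis's implementation, the sketch below builds the plane-induced homography H = K2 (R + t n^T / d) K1^{-1} for the plane n . X = d expressed in camera-1 coordinates; all symbols are generic assumptions.

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography mapping homogeneous pixels in view 1 to view 2
    for the plane n . X = d (camera-1 frame).
    Camera-2 pose relative to camera 1: X2 = R @ X1 + t."""
    return K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
```

Warping one facet of the sparse mesh with its own homography, for every visible facet, yields the synthesized image; pixels are then matched between real and synthesized views as the text describes.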
3. Models reconstructed from images are usually not accurate enough due to various factors, while models generated from laser scans come at high cost and low flexibility. In this work, an accurate and complete architectural scene modeling method that merges images and laser scans is proposed. The method first captures and models the scene using images. Based on the image-based model, laser scanning locations are planned automatically, taking into account the structural complexity and textural richness of the scene as well as the distribution of the scanning locations. Synthetic images are then generated by projecting the laser points and are matched against the captured images. Based on these cross-domain point matches, the images and laser scans are merged in a coarse-to-fine scheme. Experimental results show that the proposed method achieves accurate merging of images and laser scans.
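Synthesizing an image from a laser scan, as described above, amounts to pinhole projection of the (colored or intensity-attributed) point cloud with depth buffering so that nearer points occlude farther ones. A minimal sketch under generic assumptions follows; it is not the thesis's renderer, and all names are hypothetical.

```python
import numpy as np

def project_points(points, colors, K, R, t, h, w):
    """Render a point cloud into an (h, w) image by pinhole
    projection with a z-buffer. points: (N, 3) world coordinates;
    colors: (N,) per-point intensities; world-to-camera: X_c = R X + t."""
    cam = points @ R.T + t
    keep = cam[:, 2] > 0                      # drop points behind the camera
    cam, col = cam[keep], colors[keep]
    uv = cam @ K.T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z, col = u[inside], v[inside], cam[inside, 2], col[inside]
    img = np.zeros((h, w))
    depth = np.full((h, w), np.inf)
    # Painter's algorithm: draw far-to-near so nearer points overwrite.
    for i in np.argsort(-z):
        img[v[i], u[i]] = col[i]
        depth[v[i], u[i]] = z[i]
    return img, depth
```

A real pipeline would additionally splat each point over several pixels and fill holes; this sketch only shows the projection and occlusion-handling step.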
4. Indoor scenes usually have complicated structures but scarce texture, so it is hard to produce complete and accurate reconstructions with image-based modeling methods alone. This thesis proposes a complete indoor scene modeling method using a mini drone and a ground robot. Aerial images captured by the mini drone are used to construct a global map, which is used to plan the robot's moving path and serves as a global reference for robot localization. To localize the robot globally, the method synthesizes ground-view images based on graph cuts, which are then matched with the images captured by the robot on the ground. Finally, accurate and complete indoor scene models are obtained by merging the aerial and ground images. Experimental results demonstrate that the proposed method accurately localizes the ground robot and completely models the indoor scene.
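Once the global map built from aerial images is discretized into free and occupied cells, planning the robot's moving path reduces to grid search. As a hedged illustration (the abstract does not specify which planner is used), the sketch below finds a shortest 4-connected path on an occupancy grid with breadth-first search.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free, 1 = occupied) via BFS; cells are (row, col) tuples.
    Returns the path start..goal, or None if unreachable."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}                      # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct by back-tracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

BFS is shown for simplicity; on large maps a planner such as A* with a distance heuristic would be the usual drop-in replacement.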
|Gao Xiang. Research on Large-Scale Scene 3D Reconstruction Methods Fusing Multi-Source Data [D]. Institute of Automation, Chinese Academy of Sciences, 2019.|
|Similar articles in Google Scholar|
|Similar articles in Baidu academic|
|Similar articles in Bing Scholar|