3D modeling of indoor and outdoor urban scenes is an important part of 3D computer vision. Dense point clouds of such scenes can usually be obtained by LiDAR scanning or visual reconstruction. However, due to factors such as device accuracy, illumination, and occlusion, the resulting point clouds often contain outliers, noise, and missing regions. In addition, a point cloud is a redundant representation that lacks the geometric topology between points, which hinders rendering and editing. To address these problems, this paper focuses on 3D structural modeling of indoor and outdoor urban scenes. By embedding various kinds of high-level semantic information into a 3D reconstruction process with clear geometric meaning, it progressively abstracts dense point clouds into dense meshes and then into 3D vectorized models. The main contributions are as follows:
(1) Distributed surface reconstruction from large-scale point clouds. Traditional point cloud meshing algorithms usually use the geometric or visibility information of the point cloud to define an optimization function that is solved globally to obtain a dense mesh model. However, this global solution makes it difficult to scale to large scenes. This paper therefore proposes an effective partition-then-reconstruct-and-fuse strategy to overcome the resource bottleneck of large-scale point cloud reconstruction. The method has two technical innovations: first, an iterative adaptive partition strategy divides the scene into multiple chunks with overlapping boundaries, ensuring stable reconstruction at the chunk boundaries; second, a hole-filling strategy based on patch centrality yields a surface model that is as complete as possible. Experiments on multiple large-scale scene datasets demonstrate the effectiveness of the method.
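As an illustration of the partitioning idea only, the sketch below shows one way an iterative adaptive split with overlapping chunk boundaries could be written in Python; the point budget, overlap margin, and median-based split rule are assumptions made for this example rather than the exact algorithm of the paper, and the hole-filling step is not shown.

    import numpy as np

    def adaptive_partition(points, max_points=200_000, overlap=2.0):
        # Recursively split along the longest axis at the median coordinate
        # until every chunk holds at most `max_points` points. Each side keeps
        # an `overlap` margin past the split plane so that neighbouring chunks
        # can be reconstructed independently and fused at stable boundaries.
        chunks = []

        def split(idx):
            if idx.size <= max_points:
                chunks.append(idx)
                return
            pts = points[idx]
            axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))
            mid = np.median(pts[:, axis])
            left = idx[pts[:, axis] <= mid + overlap]
            right = idx[pts[:, axis] > mid - overlap]
            if left.size == idx.size or right.size == idx.size:
                chunks.append(idx)  # overlap no longer shrinks the chunk
                return
            split(left)
            split(right)

        split(np.arange(len(points)))
        return chunks

    # Usage: each chunk of indices can be meshed independently (e.g. by a
    # Delaunay-based method) in parallel, then the overlapping parts are fused.
    pts = np.random.rand(1_000_000, 3) * 100.0
    chunk_indices = adaptive_partition(pts)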
(2) LOD2 vectorized modeling of urban buildings. Urban buildings usually have strong geometric regularity, so representing them with dense triangle meshes is not only highly redundant but also fails to express their structure. Starting from dense textured meshes, this paper therefore designs a fully automated LOD2 vectorized modeling system for urban buildings based on scene semantic segmentation and on roof outline extraction and optimization from multi-source images. The system removes the Manhattan-world assumption that limits traditional methods and makes full use of both 2D image features and the geometric structure of 3D models. Experiments show that the method can reconstruct city-scale scenes efficiently, in parallel, and with high quality.
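The final geometric step of such a pipeline can be pictured as extruding a simplified roof outline into a watertight block model. The sketch below is a minimal illustration under the assumption that the outline and the base and roof heights have already been extracted and regularized; it is not the system's actual implementation, and the fan triangulation assumes a convex outline.

    import numpy as np

    def extrude_footprint(outline_xy, base_z, roof_z):
        # `outline_xy` is an (N, 2) polygon of the regularized roof outline.
        # Returns vertices and triangle faces of a flat-roof prism.
        n = len(outline_xy)
        bottom = np.column_stack([outline_xy, np.full(n, float(base_z))])
        top = np.column_stack([outline_xy, np.full(n, float(roof_z))])
        verts = np.vstack([bottom, top])

        faces = []
        for i in range(n):                    # two triangles per wall quad
            j = (i + 1) % n
            faces += [(i, j, n + j), (i, n + j, n + i)]
        for i in range(1, n - 1):             # fan-triangulate roof and floor
            faces += [(n, n + i, n + i + 1), (0, i + 1, i)]
        return verts, np.asarray(faces)

    # Usage: a rectangular footprint extruded into a simple block model.
    outline = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 12.0], [0.0, 12.0]])
    v, f = extrude_footprint(outline, base_z=0.0, roof_z=9.0)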
(3) LOD2 vectorized modeling of indoor scenes. Compared with urban buildings, indoor scenes have more complex structures and contain more cluttered obstacles, which makes indoor vectorized modeling more challenging. To this end, this paper builds a fully automatic indoor LOD2 vectorized modeling system. Its core idea is to decompose the 3D modeling problem into multiple 2D subproblems, each formulated as a mathematical problem that can be optimized globally; combining the subproblem solutions yields the final indoor vectorized model. The method handles not only small residential scenes but also large scenes of varying complexity and characteristics, such as underground garages and supermarkets.
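One concrete instance of reducing the 3D problem to 2D is projecting the indoor point cloud onto the floor plane and collecting per-cell wall evidence that later 2D optimizations can consume. The sketch below assumes a gravity-aligned cloud and uses an arbitrary cell size and height-span threshold; the global optimization formulated in the paper is not reproduced here.

    import numpy as np

    def wall_evidence_map(points, cell=0.05, min_height_span=1.5):
        # Rasterize XY positions into a grid and mark cells whose points span
        # a large vertical range, which is typical of wall surfaces.
        xy = points[:, :2]
        origin = xy.min(axis=0)
        ij = np.floor((xy - origin) / cell).astype(int)
        shape = tuple(ij.max(axis=0) + 1)

        zmin = np.full(shape, np.inf)
        zmax = np.full(shape, -np.inf)
        np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
        np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), points[:, 2])

        span = np.where(np.isfinite(zmin), zmax - zmin, 0.0)
        return span > min_height_span   # boolean map of likely wall cells

    # Usage: the binary map can feed 2D line fitting / floorplan optimization.
    pts = np.random.rand(50_000, 3) * np.array([10.0, 8.0, 3.0])
    walls = wall_evidence_map(pts)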
(4) Indoor floorplan reconstruction unifying 2D semantics and 3D geometry. The above solutions mainly use high-level image semantics to segment the 3D data during preprocessing and rely only on point cloud geometry afterwards, a strategy that struggles with severely incomplete or noisy point clouds. This paper therefore fuses plane instances inferred from images into the geometric-optimization-based modeling pipeline to strengthen attention to small planes and planes with sparse supporting points. Experiments show that this deep fusion of 2D semantics and 3D geometry improves the robustness of the modeling system across diverse scenes.
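As a toy illustration of the fusion idea, the sketch below scores candidate planes by combining geometric inlier support with the confidence of an image-based plane instance; the weights, the saturation constant, and the acceptance threshold are arbitrary choices for this example, not the paper's formulation.

    def fused_plane_selection(candidates, w_geom=0.5, w_sem=0.5,
                              inlier_cap=1_000, threshold=0.3):
        # Each candidate is (num_inliers, sem_confidence), where the semantic
        # confidence in [0, 1] comes from a 2D plane-instance mask projected
        # onto the candidate. Planes with sparse supporting points can still
        # be kept when the 2D evidence is strong.
        kept = []
        for k, (num_inliers, sem_conf) in enumerate(candidates):
            geom = min(num_inliers / inlier_cap, 1.0)  # saturated point support
            score = w_geom * geom + w_sem * sem_conf
            if score >= threshold:
                kept.append(k)
        return kept

    # Usage: the second plane has few inliers but a confident 2D instance.
    candidates = [(5_000, 0.2), (40, 0.9), (10, 0.05)]
    print(fused_plane_selection(candidates))   # -> [0, 1]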