CASIA OpenIR > Graduates > Doctoral Dissertations
Title: 基于图像直线信息的结构化场景三维建模
Alternative Title: Structured 3D Scene Modeling via Image Line Information
Author: Fu Kangping (傅康平)
Degree Type: Doctor of Engineering
Supervisor: Hu Zhanyi (胡占义)
Date: 2015-05-23
Degree Grantor: University of Chinese Academy of Sciences
Place of Conferral: Institute of Automation, Chinese Academy of Sciences
Major: Pattern Recognition and Intelligent Systems
Keywords: Multiple View Line Matching, 3D Modeling, 3D Line Modeling, 3D Planar Modeling, Energy Optimization
Abstract: In image-based 3D reconstruction, the 3D point cloud is the most basic model. For structured scenes rich in lines and planes, however, 3D line models and 3D planar models express the structural information more effectively. This dissertation studies 3D line modeling and 3D planar modeling of structured scenes and proposes a 3D modeling approach based on line information. Built on existing 3D point-cloud reconstruction algorithms, the approach models structured scenes efficiently. The main contributions are as follows:

1. A multi-view line matching method based on multi-view stereo (MVS). Starting from the 3D point cloud produced by MVS, the method establishes correspondences between 3D points and image lines and derives from them a similarity measure between image lines. A similarity matrix over the image lines is then built from this measure, and spectral graph analysis is applied to cluster the lines, which effectively improves the reliability and efficiency of multi-view line matching.

2. A line-match propagation and 3D line modeling method for structured scenes. Using the multi-view line matches already obtained, the method propagates matches to line structures in regions where the point cloud is missing. The propagation exploits the abundance of planar structures in such scenes: it searches the neighborhoods of matched lines for potentially coplanar lines, and applies epipolar constraints, vanishing-point constraints, and related cues to filter and match them. On this basis, the 3D point set corresponding to each spatial line is computed from the line matches and the 3D point cloud, then filtered and fitted, further improving the reliability and accuracy of the spatial lines.

3. A principal-plane fitting method for 3D point clouds of structured scenes. Starting from the scene's 3D point cloud and 3D line model, the method uses line information to obtain reliable candidate planes and inter-plane constraints. Within an energy-optimization framework, suitable energy terms are constructed and the principal planes of the point cloud are recovered by optimization. The line information improves the efficiency of plane fitting and overcomes the low accuracy of traditional point-based plane fitting at plane boundaries and on small detail planes.

4. A 3D planar modeling method for structured scenes based on point-cloud segmentation. Building on the extracted principal planes, the method infers and completes the missing regions of the original point cloud. First, a point-cloud segmentation partitions the 3D points of each principal plane into regions and generates a set of candidate planes. Then, incorporating image information, the depth of each region is inferred by energy optimization, yielding more accurate planar structures. Finally, image segmentation determines the boundaries of each region, producing a complete 3D planar model.
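The clustering step of contribution 1 can be illustrated with a toy example. The sketch below (NumPy; the similarity values are hypothetical and not taken from the thesis) shows the core of spectral graph analysis on a line-similarity matrix: the sign of the normalized Laplacian's second-smallest eigenvector (the Fiedler vector) bipartitions the image lines into match groups.

```python
import numpy as np

# Hypothetical similarity matrix over 4 image lines: lines 0 and 1
# observe the same spatial line, as do lines 2 and 3; cross terms
# are weak accidental similarities.
W = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.8],
    [0.0, 0.1, 0.8, 0.0],
])

# Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

# Eigendecomposition (eigh returns ascending eigenvalues); the
# second-smallest eigenvector encodes the cheapest normalized cut.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)  # sign gives the bipartition
```

For more than two match groups, one would instead embed each line into the first k eigenvectors and run k-means on the rows, which is the standard multi-cluster extension of this idea.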
Other Abstract: 3D points are the most basic model in image-based modeling; for structured scenes where lines and planes are abundant, however, 3D line models and 3D planar models express the scene structure more effectively. This work studies 3D line and planar modeling and proposes several efficient 3D modeling methods built on reconstructed 3D points. The main contributions include:

1. An MVS-based multi-view line matching method. The method first establishes correspondences between image lines and the 3D points reconstructed by MVS algorithms, and from these correspondences introduces a similarity measure for image lines. Under this measure, image lines are clustered by spectral graph analysis to obtain the final multi-view line matches. With the information provided by MVS points, the matching process is efficient and the results are highly reliable.

2. A line-match propagation and 3D line modeling method. The propagation obtains additional coplanar line matches within the neighborhoods of existing matches, using epipolar constraints, vanishing points, and related cues to filter out possible outliers. Finally, a 3D point set is computed from each image line match and filtered to increase its reliability; the 3D line model is then built on the filtered point sets.

3. A principal-plane fitting method for 3D point clouds of structured scenes. The method fits 3D planes from the 3D point model and the 3D line model: the line model yields reliable candidate planes as well as pairwise constraints between them, and the planes are then fitted to the 3D points under an energy-optimization scheme. The line information improves the efficiency of plane fitting, and, compared with conventional point-based methods, the constraints provided by lines significantly improve accuracy in small regions and at plane junctions.

4. A 3D planar modeling method. Based on the extracted planes, the method further infers and completes planar structures that are largely missing from the 3D point cloud. First, each extracted 3D plane is segmented into smaller regions. By incorporating image consistency, the 3D position of each region is then inferred via energy optimization to obtain more accurate planar structures. Finally, the boundaries of the generated planes are determined from image segmentation, yielding the complete 3D planar model.
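The final step of contribution 2, fitting a spatial line to a 3D point set and filtering it for reliability, can be sketched in a simplified form. The data, the perpendicular-distance threshold, and the single filter pass below are all illustrative assumptions; the thesis's actual filtering is more involved. The first right singular vector of the centered points gives the least-squares line direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noisy 3D points sampled along a spatial line.
v_true = np.array([1.0, 2.0, 0.5])
t = rng.uniform(0.0, 10.0, 50)
pts = np.outer(t, v_true) + np.array([3.0, 0.0, 1.0])
pts += rng.normal(scale=0.01, size=pts.shape)

# A few off-line points, standing in for mismatched reconstructions.
bad = np.outer([2.0, 5.0, 8.0], v_true) + np.array([3.0, 0.0, 3.0])
pts = np.vstack([pts, bad])

# Fit: the first right singular vector of the centered points is the
# least-squares direction of the spatial line.
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
direction = vt[0]

# Filter: drop points whose perpendicular distance to the fitted line
# exceeds a threshold, then refit on the surviving inliers.
resid = np.linalg.norm(np.cross(pts - centroid, direction), axis=1)
inliers = pts[resid < 0.5]
centroid = inliers.mean(axis=0)
_, _, vt = np.linalg.svd(inliers - centroid)
direction = vt[0]
```

The refit on inliers is the point of the filter stage: a single contaminated least-squares fit is biased toward the outliers, while the filtered refit recovers the line direction to high accuracy.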
Other Identifier: 201118014628037
Language: Chinese
Document Type: Doctoral Dissertation
Identifier: http://ir.ia.ac.cn/handle/173211/6677
Collection: Graduates / Doctoral Dissertations
Recommended Citation (GB/T 7714):
傅康平. 基于图像直线信息的结构化场景三维建模[D]. 中国科学院自动化研究所. 中国科学院大学,2015.
Files in This Item:
File Name/Size: CASIA_20111801462803 (5299 KB); Access: currently restricted; License: CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.