Title: 基于GPU的非结构室内场景实时重建
Title (English): Real-time 3D Reconstruction for Unstructured Interior Scene Based on GPU
Author: 荆锐
Degree: Master of Engineering
Supervisor: 台宪青
Date: 2012-05-24
Degree-Granting Institution: Graduate University of Chinese Academy of Sciences
Degree Conferred at: Institute of Automation, Chinese Academy of Sciences
Major: Control Theory and Control Engineering
Keywords: real-time 3D reconstruction; GPU architecture; ICP algorithm; point cloud fusion; marching cubes (MC) algorithm; graphics processing unit
Abstract: Simultaneous Localization and Mapping (SLAM) is a central problem in robot navigation, and real-time localization combined with 3D map building (3D-SLAM) is currently an active research direction at home and abroad. With the rapid development of 3D range scanners, acquiring dense 3D point clouds in real time is no longer difficult, so efficient and accurate 3D map building with robot self-localization is both feasible and highly significant.

This thesis converts depth images captured at 30 fps by a depth sensor into dense 3D point clouds and, exploiting the massive parallelism of the GPU, builds models of unstructured indoor scenes quickly and accurately through three steps: point cloud registration, fusion and simplification, and meshing. Multiple sets of experiments verify the effectiveness and reliability of the method. The main contributions are:

1. Registration of point clouds expressed in different camera coordinate frames. To achieve real-time performance, the traditional Iterative Closest Point (ICP) algorithm is improved in three ways: (1) projective data association is used to obtain matched point pairs; (2) a point-to-tangent-plane error metric is adopted, and the error function is minimized after linearization; (3) exploiting the architecture of the GPU (Graphics Processing Unit), the large volume of point data is processed in parallel across many GPU threads. The camera pose is tracked in real time and point clouds from different camera frames are unified into the world coordinate frame in about 10 ms, roughly 11% of the single-threaded CPU time.

2. To cope with a defect specific to the Kinect depth sensor used here — single depth frames contain many "holes" — a bounding-box algorithm fuses and simplifies the point clouds. Voxel attributes inside the bounding box are described by a signed distance function, so that within a given space the model surface is represented by discrete samples at a fixed resolution. Points from each new depth frame are dynamically merged into the existing 3D model, filling in undetected regions and holes; the time consumed depends on the resolution and bounding-box size and is generally under 50 ms.

3. A GPU-based marching cubes algorithm builds the triangle mesh by extracting the zero-level isosurface of the signed distance field as the model surface. The steps are: (1) build the edge index table and the triangle index table, and use the fast table lookup of GPU texture memory to obtain all edges intersected by the isosurface and the triangle topology; (2) compute the intersection vertices of the isosurface with cube edges and draw the triangle mesh. Running time depends on the number of vertices produced by the fusion step and stays under 1 s.
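The two ICP accelerations in point 1 can be illustrated concretely. The sketch below shows projective data association (pairing a model point with the depth vertex at the pixel it projects to, instead of a nearest-neighbour search) and the point-to-tangent-plane residual. All names and the intrinsic values are hypothetical illustrations, shown single-threaded in Python; the thesis runs these steps in parallel on the GPU.

```python
# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- illustrative values,
# not the calibration used in the thesis.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def project(p):
    """Project a 3D camera-space point onto the image plane."""
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)

def projective_association(model_points, depth_vertices, width, height):
    """Projective data association: each model point is projected into the
    current depth image and paired with the vertex stored at that pixel,
    avoiding an expensive nearest-neighbour search."""
    pairs = []
    for p in model_points:
        u, v = project(p)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height:
            q = depth_vertices[vi][ui]
            if q is not None:
                pairs.append((p, q))
    return pairs

def point_to_plane_residual(p, q, n):
    """Point-to-tangent-plane error: signed distance of p from the plane
    through q with unit normal n, i.e. (p - q) . n.  Summing the squares of
    these residuals gives the error function that is linearized and
    minimized in each ICP iteration."""
    return sum((pi - qi) * ni for pi, qi, ni in zip(p, q, n))
```

Because every model point is associated and its residual evaluated independently, each point maps naturally onto one GPU thread, which is what makes the parallelization in step (3) straightforward.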
Abstract (English): Simultaneous Localization and 3D Mapping (3D SLAM) is widely used in robotics, but real-time 3D SLAM remains extremely challenging. With the rapid development of depth cameras, dense 3D point clouds can now be acquired in real time via the camera's perspective transformation, so a simple, precise, and efficient method for reconstructing the full map while tracking the camera is both valuable and feasible to study. This dissertation gives a detailed analysis of GPU-based real-time 3D reconstruction from the dense point clouds of a real-time depth camera: models of real-world unstructured indoor scenes are built, and the camera pose is tracked continuously, rapidly, and precisely. The main research includes:

1. Real-time registration of point clouds from adjacent depth frames. A fast dense ICP algorithm tracks the 3D camera pose and continuously projects each new point cloud into the world coordinate frame. Relative to standard ICP, the method is accelerated in three ways: (1) point correspondences are decided by projective data association; (2) a linear system with a point-to-tangent-plane error metric is minimized; (3) most of the ICP pipeline runs in parallel on a GPU (Graphics Processing Unit). One registration takes less than 11 ms, about 11 percent of the CPU-based ICP time.

2. The Kinect depth sensor has a defect: each single depth frame contains numerous "holes". To address this, a bounding-box algorithm integrates and simplifies multiple point clouds, using a signed distance function as the voxel attribute of the bounding box. Within a bounded space, the model is described by discrete samples at a fixed resolution. The algorithm dynamically adds points from each new depth frame into the existing 3D model, updating undetected regions and holes, in less than 50 ms.

3. Triangle-mesh construction using a GPU-based marching cubes algorithm: the isosurface at zero signed distance is extracted as the model surface. The steps are: (1) build the edge index table and the triangle index table and, exploiting fast table lookup in GPU texture memory, obtain the edge and triangle topology of cells intersected by the isosurface; (2) compute the intersection vertex between the isosurface and each cube edge, taking the gradient as the vertex normal, then draw the triangle mesh. This part takes about 1 s.
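The signed-distance fusion of point 2 keeps, per voxel, a signed distance to the surface plus an accumulation weight, and blends each new frame's measurement into it. The sketch below assumes the standard weighted-running-average update for signed-distance fusion; the class and parameter names are hypothetical, and the thesis's exact update rule and weighting may differ.

```python
class SDFVoxel:
    """One grid cell of the bounding box: a signed distance to the surface
    plus an accumulation weight."""

    def __init__(self):
        self.sdf = 0.0
        self.weight = 0.0

    def fuse(self, measured_sdf, measured_weight=1.0):
        """Blend a new signed-distance measurement into the voxel with a
        weighted running average.  Voxels that a frame fails to observe
        (the Kinect's "holes") keep their old value and are filled in as
        later frames see the region."""
        total = self.weight + measured_weight
        self.sdf = (self.sdf * self.weight + measured_sdf * measured_weight) / total
        self.weight = total
```

Because each voxel updates independently, the fusion of a whole depth frame again maps onto one GPU thread per voxel, which is consistent with the sub-50 ms timing reported above.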
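Two core pieces of the marching cubes step in point 3 can be sketched without the full 256-entry tables: computing the 8-bit cube index that keys the edge/triangle index tables of step (1), and the edge interpolation of step (2) that places a vertex where the signed distance crosses zero. Function names are illustrative; the thesis performs the table lookups in GPU texture memory.

```python
def cube_index(corner_sdf):
    """Classify a cube by the signs of the signed distance at its 8 corners.
    The resulting 8-bit index (0..255) is what the edge index table and
    triangle index table are keyed on."""
    index = 0
    for bit, d in enumerate(corner_sdf):
        if d < 0.0:          # corner lies inside the surface
            index |= 1 << bit
    return index

def edge_vertex(p1, p2, d1, d2):
    """Linearly interpolate the point on the cube edge (p1, p2) where the
    signed distance crosses zero, i.e. the isosurface intersection vertex.
    Requires d1 and d2 to have opposite signs."""
    t = d1 / (d1 - d2)      # fraction of the way from p1 toward p2
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

Cubes whose index is 0 or 255 lie entirely outside or inside the surface and produce no triangles, so only the cells actually intersected by the isosurface contribute work, which is why the 1 s timing scales with the number of vertices produced by fusion.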
Call Number: XWLW1767
Other Identifier: 200928014628007
Language: Chinese
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/7633
Collection: Graduates — Master's Theses
Recommended Citation (GB/T 7714):
荆锐. 基于GPU的非结构室内场景实时重建[D]. 中国科学院自动化研究所. 中国科学院研究生院, 2012.
Files in This Item:
File Name/Size: CASIA_20092801462800 (3658 KB)
Access: Not yet open (request full text)
License: CC BY-NC-SA

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.