Image-LiDAR Joint Mapping and Joint Calibration for Unmanned Vehicles
Wang Baoyu (王宝宇)
2021-05
Pages: 84
Degree type: Master
Chinese Abstract

  Fully automatic, low-cost construction of accurate and complete 3D maps of large-scale scenes is a common requirement of autonomous driving, augmented reality, service robotics, and other fields. In autonomous driving, 3D map construction typically relies simultaneously on multi-beam LiDAR (Light Detection and Ranging), multi-orientation cameras, high-precision integrated inertial navigation, and other equipment, and faces bottlenecks such as high equipment cost, difficulty in precisely synchronizing different devices, and complex calibration of the relative poses between devices. To address these problems, this thesis carries out research in three areas: joint global sparse reconstruction from images and LiDAR, iterative image-LiDAR pose calibration based on point cloud registration, and joint image-LiDAR mapping and pose calibration based on point cloud feature matching. On this basis it builds a technical scheme for image-LiDAR joint mapping and joint calibration of unmanned vehicles founded on image-based 3D reconstruction, which effectively reduces the dependence of 3D map construction on high-precision inertial navigation equipment and fully automates image-LiDAR joint calibration and joint mapping. The technical system built in this thesis has been experimentally validated in real unmanned-vehicle scenarios, verifying the effectiveness of the proposed algorithms and providing a practical route to deploying image-LiDAR joint calibration and joint mapping on unmanned vehicles. The main work is as follows:

1) To reduce the dependence of existing unmanned-vehicle 3D map construction on high-precision integrated inertial navigation equipment, this thesis proposes a sparse 3D reconstruction method that fuses camera and LiDAR. The method first transforms the LiDAR point cloud at each acquisition position into the image coordinate frame and projects it into a sparse depth map; the sparse depth map is then densified by a depth-completion strategy to obtain coarse per-pixel depth for the image; next, from feature matches between images at adjacent positions and the local depths of the matched feature points, 3D-3D point correspondences between adjacent cameras are obtained; from this set of 3D-3D correspondences, a RANSAC-based relative pose estimator recovers the relative rotation and translation between cameras; finally, global rotation averaging and translation averaging yield the global camera poses, and the sparse 3D map is built via triangulation and bundle adjustment. Experiments show that the proposed method can efficiently and accurately obtain a sparse 3D map of the scene without relying on high-precision inertial navigation initial values.

2) The above method requires the precise camera-LiDAR relative pose and precise camera intrinsics in advance. This thesis therefore further proposes an iterative image-LiDAR pose calibration method based on point cloud registration, which is fully automatic, requires no artificial calibration target, makes full use of global scene information, and yields accurate calibration results. To cope with possible synchronization gaps between the acquisition devices on an unmanned vehicle, the method first designs a vehicle motion pattern suitable for sensor calibration; it then obtains a sparse 3D map of the scene with a sparse reconstruction method constrained by a generalized camera model; next, iterative ICP registration aligns each single-station LiDAR point cloud to the sparse map in turn, obtaining the relative pose between each single-station LiDAR and the camera, and pose averaging suppresses the errors; this cycle of single-station ICP registration plus relative-pose averaging is iterated until convergence. Experiments show that the method can fully automatically obtain high-quality camera-LiDAR relative pose calibration and camera intrinsic calibration results.

3) In the above method each LiDAR station is registered to the sparse map by ICP independently, which cannot guarantee point-cloud consistency across LiDAR stations, so the result is only an approximate convergence value. This thesis therefore further incorporates the LiDAR information into the image 3D reconstruction pipeline: by constructing geometric constraints between LiDAR points and sparsely reconstructed image points, the three kinds of constraints (LiDAR-LiDAR, LiDAR-image, and image-image) are brought into a unified optimization framework, yielding a joint image-LiDAR mapping and pose calibration method with a complete mathematical formulation. The method first performs sparse 3D reconstruction with all images; it then extracts geometric features from the sparsely reconstructed points and the single-station LiDAR points, namely edge points (line features) and planar points (plane features); next it builds a joint objective comprising 3D point-to-line distance errors, point-to-plane distance errors, and reprojection errors, and solves it as a whole; this cycle of sparse image reconstruction, error construction, and joint optimization is iterated until all sensor poses are stable. Experiments show that the algorithm obtains accurate camera-LiDAR joint calibration results and can also be used directly for joint image-LiDAR mapping of large-scale roads.

English Abstract

 

Automatic, low-cost construction of accurate and complete 3D maps of large-scale scenes is a common demand of autonomous driving, augmented reality, service robotics, and other fields. In autonomous driving, 3D map construction usually depends simultaneously on multi-line LiDAR, multi-orientation cameras, high-precision inertial navigation, and other equipment, and faces bottlenecks such as high equipment cost, difficulty in accurately synchronizing different devices, and complex relative pose calibration between devices. To solve these problems, this thesis carries out research in three aspects: joint global sparse reconstruction of images and LiDAR, iterative pose calibration of images and LiDAR based on point cloud registration, and joint mapping and pose calibration of images and LiDAR based on point cloud feature matching. The resulting scheme effectively reduces the dependence of 3D map construction on a high-precision inertial navigation system and fully automates image-LiDAR joint calibration and joint mapping. The system built in this thesis is verified by experiments in real unmanned-vehicle scenes, which confirms the effectiveness of the proposed algorithms and provides a technical path for the practical deployment of image-LiDAR joint calibration and joint mapping on unmanned vehicles. The main work of this thesis is as follows:

1) Existing self-driving cars depend on a costly high-precision inertial navigation system (INS) for 3D map construction. This thesis proposes a sparse 3D reconstruction method based on camera-LiDAR fusion. First, the LiDAR point cloud at each acquisition position is transformed into the image coordinate frame and projected into a sparse depth map. Second, the sparse depth map is densified by a depth-completion strategy to obtain coarse per-pixel depth. Third, 3D-3D point pairs between adjacent cameras are obtained from image feature matches and the corresponding local depths, and the relative rotation and translation are estimated with a RANSAC-based algorithm. Finally, global rotation averaging and translation averaging yield the global camera poses, and the sparse 3D map is constructed by triangulation and bundle adjustment. Experimental results show that the proposed method obtains an accurate sparse 3D map without relying on high-precision INS poses.
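The RANSAC step in 1) estimates a rigid transform from 3D-3D correspondences contaminated by outliers. A minimal sketch of this idea (illustrative only, not the thesis implementation; function names and the inlier threshold are assumptions) uses the closed-form Kabsch solution inside a RANSAC loop over 3-point minimal samples:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def ransac_rigid(src, dst, iters=200, thresh=0.1, rng=np.random.default_rng(0)):
    """RANSAC over 3-point minimal samples; refits on the best inlier set."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(src[best_inliers], dst[best_inliers])
```

In the thesis pipeline the correspondences would come from image feature matches lifted to 3D via the completed depth maps; here they are just paired point arrays.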

2) The above method depends on accurate camera intrinsic and camera-LiDAR extrinsic parameters, which would otherwise need to be calibrated offline in advance. This thesis therefore proposes an iterative calibration method based on registering image and LiDAR point clouds; it is fully automatic, independent of any manual calibration target, makes full use of the global information of the scene, and produces accurate calibration results. First, to deal with the asynchronous sensors of a self-driving car, a vehicle motion pattern suited to data capture for sensor calibration is designed. Second, a sparse reconstruction method based on generalized camera model constraints is used to obtain a sparse 3D map of the scene. Third, iterative ICP registration aligns each single-station LiDAR point cloud to the sparse map to obtain the camera-LiDAR relative pose, and the obtained relative poses are then averaged. This cycle of single-station ICP registration plus relative pose averaging is iterated until convergence. Experimental results show that the proposed method automatically obtains high-quality camera-LiDAR extrinsic and camera intrinsic calibration results.
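The loop in 2) produces one camera-LiDAR extrinsic estimate per station and then averages them. One common choice of averaging operator, which the sketch below assumes (the abstract does not specify which operator the thesis uses), is the chordal L2 mean: average the rotation matrices element-wise and project the result back onto SO(3) via SVD, with an ordinary mean for the translations:

```python
import numpy as np

def average_extrinsics(rotations, translations):
    """Chordal L2 mean of several (R, t) estimates of the same extrinsic.

    rotations: iterable of 3x3 rotation matrices; translations: iterable of
    3-vectors. The element-wise mean of rotations is generally not a rotation,
    so it is projected onto the nearest matrix in SO(3)."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep det = +1
    R_avg = U @ D @ Vt
    t_avg = np.mean(translations, axis=0)
    return R_avg, t_avg
```

The SVD projection is what makes the averaged estimate a valid rotation again; averaging the matrices directly would leave a non-orthogonal matrix.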

3) In the above method, each single-station LiDAR point cloud is registered to the sparse image map independently, which cannot guarantee the consistency of successive LiDAR point clouds, so the calibrated camera intrinsic and camera-LiDAR extrinsic parameters remain approximate. This thesis therefore incorporates the LiDAR information into the image 3D reconstruction pipeline. By constructing geometric constraints between LiDAR points and sparsely reconstructed points, the LiDAR-LiDAR, LiDAR-image, and image-image constraints are placed in a unified optimization framework, yielding a camera-LiDAR joint mapping and calibration method with a complete mathematical formulation. First, all images are used for sparse 3D reconstruction. Second, geometric features are extracted from the sparsely reconstructed points and each single-station LiDAR point cloud, including edge points (line features) and planar points (plane features). Next, a joint objective comprising point-to-line distance, point-to-plane distance, and reprojection constraints is constructed and solved as a whole. Finally, the cycle of sparse image reconstruction, constraint construction, and joint optimization is iterated until all calibration parameters are stable. Experimental results show that the proposed algorithm obtains accurate calibration results and can be used directly to build large-scale 3D maps of road scenes.
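The joint objective in 3) combines three residual types. Illustrative forms of the individual residuals (a sketch under the usual pinhole model; the actual parameterization and weighting in the thesis may differ):

```python
import numpy as np

def point_to_line(p, a, d):
    """Distance from point p to the 3D line through a with unit direction d
    (edge-point / line-feature residual)."""
    v = p - a
    return np.linalg.norm(v - (v @ d) * d)

def point_to_plane(p, q, n):
    """Signed distance from point p to the plane through q with unit normal n
    (planar-point / plane-feature residual)."""
    return (p - q) @ n

def reprojection_error(X, K, R, t, uv):
    """Pixel error of world point X against observation uv, for intrinsics K
    and world-to-camera pose (R, t)."""
    x = K @ (R @ X + t)
    return np.linalg.norm(x[:2] / x[2] - uv)
```

In a joint optimization these residuals would be stacked over all feature associations and minimized over sensor poses, structure, and calibration parameters with a nonlinear least-squares solver; here only the per-residual geometry is shown.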

Keywords: sparse reconstruction, LiDAR
Language: Chinese
Sub-direction classification: 3D Vision
Document type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/44903
Department: State Key Laboratory of Multimodal Artificial Intelligence Systems, Robot Vision group
Recommended citation (GB/T 7714):
王宝宇. 无人车图像-激光联合构图与联合标定[D]. 中国科学院自动化研究所, 2021.
Files in this item: 王宝宇毕业论文.pdf (10252 KB), thesis, open access, license CC BY-NC-SA