CASIA OpenIR > Graduates (毕业生) > Master's Theses (硕士学位论文)
机器人操作目标的点云建模及位姿估计 (Point Cloud Modeling and Pose Estimation of Robot Manipulation Targets)
于灏
Subtype: Master's thesis (硕士)
Thesis Advisor: 王硕
Date: 2019-05-17
Degree Grantor: 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Place of Conferral: 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Degree Discipline: Control Engineering (控制工程)
Keywords: robot control, object recognition, semantic segmentation, point cloud registration, pose estimation, robotic grasping
Abstract

Robots are widely used in industry, medicine, aerospace, home services, and other fields. Precise and reliable pick-and-place of target objects is one of the key indicators of an intelligent robot's manipulation capability. At present, robots remain far less dexterous than humans at autonomously and accurately picking and placing objects. When the environment is complex, the objects are diverse, and their poses are arbitrary, autonomous precise grasping by a robot faces great challenges. This thesis therefore focuses on point cloud modeling and pose estimation methods for objects, providing technical support for improving a robot's autonomous, precise manipulation capability.

The main work of this thesis includes the following aspects:

(1) For the 3D modeling of target objects, this thesis proposes a point cloud modeling method based on an improved SIFT-ICP algorithm. The operator uses a depth camera to capture about 30 frames of color and depth images around the target object, from which the proposed algorithm constructs a point cloud model of the object. The point cloud models of various objects built with this method lay the foundation for recognizing robotic grasping targets and estimating their poses.

(2) For estimating the pose of arbitrarily placed objects against complex backgrounds, this thesis proposes a target pose estimation method combining FCN-based semantic segmentation with a fast global registration algorithm: semantic segmentation with an FCN generates the target's semantic point cloud, and the fast global registration algorithm aligns the point cloud model of the target, built offline, with this semantic point cloud to estimate the target's pose. The method is then compared with a target pose estimation method based on a globally optimized point cloud registration algorithm, and the performance of the two methods is analyzed experimentally.

(3) For robotic pick-and-place tasks, this thesis designs a robotic grasping system based on target pose estimation, which uses the target object's point cloud model and its estimated pose to generate the robot's grasp pose control strategy and complete the grasp. Grasping experiments in a real environment verify that the proposed methods and the designed system are effective.

Finally, the thesis is summarized and directions for further research are identified.

Other Abstract

Robots are widely used in industrial, medical, aerospace, and home-service fields. Precise and reliable pick-and-place of target objects is one of the important indicators of an intelligent robot's manipulation capability. At present, robots are still far less flexible than humans at autonomously and accurately picking and placing objects. When the environment is complex, the objects are diverse, and the object poses are arbitrary, the robot's autonomous precise grasping faces enormous challenges. Therefore, this paper focuses on point cloud modeling and pose estimation methods for objects, providing technical support for improving the robot's autonomous precise manipulation capability.

The work of this paper mainly includes the following aspects:

(1) Aiming at the problem of 3D modeling of target objects, this paper proposes an object point cloud modeling method based on an improved SIFT-ICP algorithm. The operator uses a depth camera to collect about 30 frames of color and depth images around the target object, and the proposed algorithm constructs a point cloud model of the object from them. Point cloud models of multiple objects built with this method lay a foundation for recognizing robotic grasping targets and estimating their poses.
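The ICP half of the SIFT-ICP pipeline repeatedly solves a closed-form rigid alignment between matched point pairs. As an illustration only (not the thesis's actual implementation), here is a minimal NumPy sketch of that alignment step, the Kabsch solution via SVD, under the assumption that correspondences are already known:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Closed-form least-squares rigid alignment (Kabsch, no scaling).

    Given corresponding points src[i] <-> dst[i] (both N x 3 arrays),
    find rotation R and translation t minimizing
    sum_i || R @ src[i] + t - dst[i] ||^2.
    """
    src_c = src - src.mean(axis=0)          # center both clouds
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Inside a full ICP loop this step alternates with re-estimating the correspondences (e.g., by nearest-neighbor search), which is where the SIFT feature matches of the improved pipeline would provide a good initialization.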

(2) For the pose estimation of arbitrarily placed objects against complex backgrounds, this paper proposes a target pose estimation method based on FCN semantic segmentation combined with a fast global registration algorithm. The target semantic point cloud is generated by FCN semantic segmentation, and the fast global registration algorithm is used to register the target point cloud model, established offline, with the target semantic point cloud to estimate the target pose. The method is then compared with a target pose estimation method based on a globally optimized point cloud registration algorithm, and the performance of the two methods is analyzed experimentally.
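Fast global registration matches feature descriptors (e.g., FPFH) rather than raw points, which is beyond a short sketch; but the local point-to-point refinement that typically follows such a coarse alignment can be illustrated compactly. Below is a minimal ICP loop in plain NumPy, a simplified stand-in for the step that aligns the offline model with the segmented target cloud; the brute-force nearest-neighbor search makes it suitable only for small illustrative clouds:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=50, tol=1e-8):
    """Minimal point-to-point ICP (brute-force correspondences).

    Aligns the model cloud src (N x 3) to the scene cloud dst (M x 3)
    and returns a 4x4 homogeneous transform. O(N*M) per iteration, so
    this is an illustration, not a production registration routine.
    """
    T = np.eye(4)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # 1. correspondences: nearest scene point for each model point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # 2. closed-form rigid alignment (Kabsch) on the matched pairs
        sc, mc = cur.mean(0), matched.mean(0)
        H = (cur - sc).T @ (matched - mc)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mc - R @ sc
        # 3. accumulate the incremental transform and apply it
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        cur = cur @ R.T + t
        # 4. stop when the mean residual no longer improves
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```

Like all local ICP variants, this converges only from a good initial guess, which is exactly the gap the fast global registration stage fills in the proposed pipeline.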

(3) For the robotic pick-and-place task, this paper designs a robotic grasping system based on target pose estimation, which uses the point cloud model of the target object and its pose estimation result to generate the robot's grasp pose control strategy and complete the grasp. Grasping experiments in a real environment verify that the proposed method and the designed system are effective.
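Once the object's pose in the robot frame is estimated as a 4x4 rigid transform, a grasp pose annotated on the point cloud model can be mapped into the robot frame by simple matrix composition. A hedged sketch of that bookkeeping, with all names hypothetical and frames assumed to follow a model-to-robot convention:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasp_in_robot_frame(T_obj_in_robot, T_grasp_in_obj):
    """Map a grasp pose defined on the object model into the robot frame.

    T_obj_in_robot : 4x4 object pose from registration (model -> robot)
    T_grasp_in_obj : 4x4 grasp pose in the model's coordinate frame
    """
    return T_obj_in_robot @ T_grasp_in_obj
```

For example, an object rotated 90 degrees about z and placed at (1, 0, 0.5), with a grasp point 0.1 m above the model origin, yields a gripper target at (1, 0, 0.6) in the robot frame.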

Finally, conclusions are drawn and directions for future work are outlined.

Pages: 83
Language: Chinese (中文)
Document Type: Thesis (学位论文)
Identifier: http://ir.ia.ac.cn/handle/173211/23854
Collection: 毕业生_硕士学位论文 (Graduates / Master's Theses)
Recommended Citation (GB/T 7714):
于灏. 机器人操作目标的点云建模及位姿估计[D]. 中国科学院自动化研究所, 2019.
Files in This Item:
File: 机器人操作目标的点云建模及位姿估计-于灏 (3557 KB); DocType: 学位论文 (thesis); Access: 暂不开放 (not yet open); License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.