装配机器人系统中基于视觉的零件检测与抓取策略的研究 (Research on Vision-Based Part Detection and Grasping Strategy in a Robotic Assembly System)
Alternative Title: Research on Vision-Based Detection and Grasping in Robotic Assembly System
李远钱
2010-06-10
Degree Type: Master of Engineering
Chinese Abstract (translated): As a supporting technology of advanced manufacturing and an emerging industry of the information society, industrial robots have been widely applied in many manufacturing fields. Assembly, grinding and other important production processes are their main applications, but the reliable completion of these processes depends on accurate and stable grasping by the robot. For this reason, vision systems are often introduced to improve the autonomy and intelligence of robots during grasping, yet their precision, speed and stability still cannot meet the requirements of practical applications. Studying vision-based industrial robot grasping in assembly and similar production processes therefore has important academic significance and application value.

In vision-based grasping, part detection and localization and the reaching motion toward the part are the two basic steps. The main difficulties and problems are: on the detection side, the grasping task requires the vision stage to provide object pose information accurately and quickly; on the grasping-strategy side, most methods decompose the process of sensing the object pose with the vision system and converting it into a robot grasping pose into multiple calibration steps, which accumulate large errors and are easily disturbed by the environment.

This thesis addresses these difficulties and problems. The main work and contributions are as follows:

1. To meet the precision and speed requirements of the grasping task, an algorithm for fast visual detection of multiple parts, accurate extraction of the grasping point and precise estimation of the grasping orientation is proposed, based on a coarse-to-fine detection strategy. First, candidate orientations are pre-estimated, which narrows the range of orientations searched in the subsequent matching and increases the detection speed. Second, a whole-part template and a grasping-part template are matched in turn to accurately extract the crankshaft and its grasping point. Finally, to estimate the grasping orientation correctly, the matched candidate orientations are compensated and corrected using the camera imaging model. Experiments show that the method achieves fast detection and high pose-estimation accuracy, meeting the precision requirements of the grasping task.

2. To improve the autonomy and intelligence of robot grasping and to avoid the accumulated errors of multiple calibration steps, a vision-guided reaching method based on supervised learning is proposed. First, feature points that reflect the part pose are extracted accurately and reliably by multi-level clustering of the part's edge points; these feature points can still be extracted accurately when the part translates or rotates in the image. Second, a mapping from the feature-point positions to the grasping pose is built by offline training. Finally, during online grasping, the accurate grasping pose of the gripper is obtained quickly from this mapping. Online grasping experiments verify the effectiveness of the method.

3. A system for the continuous grasping and assembly of multiple sets of parts is built on an industrial robot platform with vision, and continuous, fast and stable grasping and assembly of multiple "bearing-crankshaft" sets is achieved on this system.
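The coarse-to-fine detection in contribution 1 relies on Chamfer matching over a distance transform of the scene's edge map, evaluated only at the pre-estimated candidate orientations. The following is a minimal sketch of that idea, not the thesis code: the OpenCV calls are standard, but the Canny thresholds, the 4-pixel search grid and the scoring details are illustrative assumptions.

    # A minimal sketch (not the thesis code) of Chamfer matching on a distance
    # transform, restricted to pre-estimated candidate orientations.
    # Canny thresholds, the 4-pixel search grid and the scoring are assumptions.
    import cv2
    import numpy as np

    def chamfer_score(scene_dt, template_edges, top_left):
        """Mean distance-transform value under the template's edge pixels."""
        ys, xs = np.nonzero(template_edges)
        if xs.size == 0:
            return np.inf
        return float(scene_dt[ys + top_left[1], xs + top_left[0]].mean())

    def match_part(scene_gray, template_gray, candidate_angles):
        scene_edges = cv2.Canny(scene_gray, 50, 150)
        # Distance transform of the inverted edge map: zero on edge pixels,
        # growing with distance from the nearest edge.
        scene_dt = cv2.distanceTransform(255 - scene_edges, cv2.DIST_L2, 3)

        h, w = template_gray.shape
        best = (np.inf, None, None)        # (score, angle, position)
        for angle in candidate_angles:     # coarse step: only pre-estimated orientations
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
            tmpl_edges = cv2.Canny(cv2.warpAffine(template_gray, rot, (w, h)), 50, 150)
            for y in range(0, scene_gray.shape[0] - h, 4):
                for x in range(0, scene_gray.shape[1] - w, 4):
                    s = chamfer_score(scene_dt, tmpl_edges, (x, y))
                    if s < best[0]:
                        best = (s, angle, (x, y))
        return best                        # lowest score = best match

In the thesis this matching is performed twice, first with a whole-part template and then with a grasping-part template, so that both the crankshaft and its grasping point are localized.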
English Abstract: As a supporting technology of the advanced manufacturing industry, industrial robots have been widely applied in various manufacturing fields, such as assembly, grinding and other important processes. Reliable execution of these processes depends on accurate and stable grasping, so vision systems are often introduced to improve the autonomy and intelligence of industrial robots. However, the precision, speed and stability of most vision-based robot grasping systems still cannot meet the requirements of industrial applications. Studies on vision-based robot grasping are therefore of important academic significance and practical value.

Part detection and reaching are the two basic procedures in vision-based robot grasping tasks. The main difficulties and problems are as follows. In detection, grasping tasks require the detection stage to provide accurate pose information of the objects quickly. In the grasping strategy, most methods divide the process that transforms the visually sensed object poses into grasping poses into multiple calibration steps; these methods accumulate errors and are easily disturbed by the environment.

This thesis studies the above difficulties and problems in detail. The main contributions are as follows:

1. To meet the precision and speed requirements of grasping tasks, we propose an algorithm that detects the parts quickly and estimates their poses accurately, based on a coarse-to-fine detection strategy. First, candidate orientations are estimated and a large number of wrong orientations are excluded, which increases the detection speed. Then, Chamfer matching based on the distance transform is applied twice to search for the exact position of the grasping point. Finally, the camera imaging model is used to correct the grasping orientation. Experiments show that our method meets the precision and speed requirements of grasping tasks.

2. To improve the autonomy and intelligence of robot grasping and to reduce the accumulated errors of multiple calibration steps, we propose a learning-based object reaching approach that uses visual information. First, feature points that reflect the state of the parts are extracted reliably by hierarchical clustering; these feature points are robust and invariant under translation and rotation. Second, the relation between the feature-point positions and the grasping pose is learned by offline training. Finally, during online grasping, the accurate grasping pose of the gripper is obtained quickly from this mapping; online grasping experiments verify the effectiveness of the approach.

3. Based on an industrial robot platform with vision, a system for the continuous grasping and assembly of multiple sets of parts is built, on which continuous, fast and stable grasping and assembly of multiple "bearing-crankshaft" sets is achieved.
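Contribution 2 replaces the chain of calibration steps with a mapping, learned offline under supervision, from image feature-point positions directly to the gripper's grasping pose; online grasping then only evaluates this mapping. The sketch below is an assumption-laden illustration rather than the thesis implementation: a linear least-squares regressor stands in for the unspecified supervised learner, and the class name GraspPoseRegressor, the 4-DOF pose layout (x, y, z, yaw) and the array shapes are made up for the example.

    # A minimal, assumption-based sketch of a learned feature-points -> grasp-pose
    # mapping; a linear least-squares model stands in for the supervised learner.
    import numpy as np

    class GraspPoseRegressor:
        """Maps 2-D image feature points directly to a 4-DOF gripper pose (assumed layout)."""

        def __init__(self):
            self.W = None                        # weights learned offline

        def fit(self, feature_points, grasp_poses):
            """Offline training.
            feature_points: (N, 2*K) array, K feature points per training sample.
            grasp_poses:    (N, 4) array of recorded gripper poses (x, y, z, yaw).
            """
            X = np.hstack([feature_points, np.ones((len(feature_points), 1))])  # bias column
            self.W, *_ = np.linalg.lstsq(X, grasp_poses, rcond=None)
            return self

        def predict(self, feature_points):
            """Online grasping: a single matrix product yields the gripper pose."""
            x = np.append(np.ravel(feature_points), 1.0)
            return x @ self.W

    # Illustrative usage (synthetic names):
    # reg = GraspPoseRegressor().fit(train_features, train_poses)
    # grasp_pose = reg.predict(detected_feature_points)

The attraction of this formulation is that the online step is a single model evaluation on the extracted feature points, so errors from intermediate camera and hand-eye calibration stages do not accumulate.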
Keywords: Robot Vision; Parts Detection; Object Reaching; Supervised Learning; Feature Extraction
Language: Chinese
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/7542
Collection: Graduates / Master's Theses
Recommended Citation (GB/T 7714):
李远钱. 装配机器人系统中基于视觉的零件检测与抓取策略的研究[D]. 中国科学院自动化研究所, 中国科学院研究生院, 2010.
Files in This Item:
CASIA_20072801462806 (1981 KB); access: not yet open; license: CC BY-NC-SA