微装配机器人技能学习方法及应用研究 (Research on Skill Learning Methods and Applications for Microassembly Robots)
秦方博 (Qin Fangbo)
Subtype: Doctoral (博士)
Thesis Advisor: 徐德 (Xu De)
2019-05-29
Degree Grantor: University of Chinese Academy of Sciences (中国科学院大学)
Place of Conferral: Institute of Automation, Chinese Academy of Sciences (中国科学院自动化研究所)
Degree Discipline: Control Theory and Control Engineering (控制理论与控制工程)
Keywords: Skill Learning; Microassembly; Microscopic Vision; Image Feature Extraction; Compliant Control
Abstract

The microassembly robot is an advanced system for high-precision assembly of micro-sized parts, integrating core technologies such as precision sensing and advanced control. Endowing the microassembly robot with skill-learning ability further improves its reusability and ease of use, significantly reduces its dependence on manual programming, development, and tuning, and moves the robot toward intelligence. To realize precision-assembly skill learning for microassembly robots, this thesis studies a series of related problems, mainly including the microscopic vision measurement system, microscopic image feature extraction, contact-stiffness learning for compliant insertion, and learning microassembly skills from demonstration. The main work and contributions of the thesis are as follows:

First, to achieve 3D part pose sensing with a large measurement range and high precision, a pose-measurement method based on multiple telecentric microscopic cameras is proposed. The imaging model accounts for the linear motion of each camera, so the measurement range can exceed the limit imposed by the shallow depth of field of the microscope lens while high measurement accuracy is maintained. Based on the image Jacobian matrix, point features and line features extracted from the microscopic images of the multiple cameras are used to estimate vectors and directions, respectively, in 3D Cartesian space. The method was validated experimentally: with a camera depth of field of 430 μm, the root-mean-square errors of the pose measurement were 3 μm in position and 0.05° in direction, while the measurement ranges were about 5000 μm and 20°, respectively. To improve the efficiency of the measurement system, the affine epipolar constraint and the focus-plane intersection constraint between the cameras are analyzed and applied to image feature extraction and multi-camera autofocusing, respectively. A calibration method for the microscopic vision system based on a spherical target is proposed, which has the advantages of low cost and small size.

Second, to improve the flexible reuse of the visual perception module across different skill tasks, a contour-primitives-of-interest extraction method is proposed. The user can create a contour-primitive template for a new part; the image target is then searched using a matching ratio and a weighted contrast score. On the basis of this coarse localization, edge points on the contour primitives of interest are extracted using local normal derivatives; finally, the extracted edge points are fitted to shapes to obtain precise geometric features, and a contour sharpness is output for focus evaluation. The method can be flexibly configured for images of different parts and remains real-time and robust under various non-ideal imaging conditions. In addition, to realize part-image segmentation with few training samples, a segmentation model based on convolutional neural networks is designed; by combining unsupervised feature learning with supervised segmentation learning, it requires only a small number of labeled images. The segmentation result can serve as the search region for feature extraction, further improving the robustness and real-time performance of feature extraction.

Third, a method for learning an anisotropic-stiffness contact model is proposed; applied to compliant insertion control, it improves the adaptability of the insertion skill to different contact stiffnesses. Using data recorded from historical assembly processes, a clustering algorithm determines whether multiple stiffness classes exist, and a support-vector-machine classifier then learns the relationship between contact direction and stiffness. Finally, the anisotropic stiffness model is used in the radial force controller, so that the manipulator can adjust its gain according to changes in contact stiffness. Experiments show that the method improves the compliance and efficiency of the insertion process under stiffness anisotropy, constraining the radial force to within 100 mN.

Fourth, a microassembly skill learning-from-demonstration method based on microscopic vision and force feedback is proposed, realizing for the first time the learning and execution of microassembly skills for millimeter-scale parts with micrometer-scale precision from only a few demonstrations. For a microassembly robot with a multi-step workflow and multiple low-level controllers, a skill learning and execution framework is designed that segments a skill into several actions and records the controller index of each action. Two action classes, image-feature-guided motion and force-constrained motion, are proposed; they work with the two low-level controllers, an image-feature-based visual servoing controller and a micro-force-feedback compliant controller, respectively. A dynamical system based on a Gaussian mixture model gives the actions both nonlinear expressive power and global asymptotic stability. By collecting demonstration data and running the learning algorithm, the action classes can be instantiated into different specific actions for different skills, which then guide the low-level controllers. Experiments show that the proposed skill learning-from-demonstration method can be deployed flexibly in different tasks, is robust to changes in initial conditions and to process disturbances, and offers a degree of interpretability.

Finally, the research results of this thesis are summarized, and an outlook on future research work is given.

Other Abstract

The microassembly robot is an advanced system for high-precision assembly of micro-sized objects, integrating precision perception and advanced control techniques. Implementing microassembly skill learning improves the robot's reusability and flexibility, alleviates the repeated program development and tuning workload, and moves toward robot intelligence. This thesis deals with the relevant problems, including microscopic vision measurement, microscopic image feature extraction, compliant insertion control with contact-stiffness learning, and learning microassembly skills from demonstration. The main work and contributions are as follows.

First, to realize pose perception of three-dimensional (3D) objects with high precision and a large range, a pose-measurement approach based on multiple telecentric microscopic cameras is proposed. The camera model accounts for the linear focusing motion of the camera, so that the measurement range can exceed the shallow depth of field without degrading the measurement accuracy. Based on the image Jacobian matrix, point and line image features in multiple views are used to estimate vectors and directions in 3D Cartesian space. In the experiments, the proposed method achieved a measurement accuracy of 3 μm in position and 0.05° in direction, with measurement ranges of 5000 μm and 20°, respectively. To increase measurement efficiency, the affine epipolar constraint and the focus-plane intersection constraint are analyzed and used in image feature extraction and multi-camera autofocusing, respectively. The microscopic vision system is calibrated with a ball target, which has the advantages of low cost and small size.
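The direction-estimation step above can be sketched as a linear least-squares problem: under an affine (telecentric) projection, a 3D direction projects into each view as a 2D line direction, and each view contributes one linear constraint. This is a minimal illustration, not the thesis's exact algorithm; the function name and formulation are assumptions.

```python
import numpy as np

def estimate_direction(proj_mats, line_dirs):
    """Estimate a unit 3D direction from its 2D projections in
    multiple affine (telecentric) views.

    proj_mats: list of 2x3 affine projection matrices A_i.
    line_dirs: list of observed unit 2D line directions d_i.

    A 3D direction v projects to A_i v, which must be parallel to
    d_i, i.e. perpendicular to the line normal n_i: n_i^T A_i v = 0.
    Stacking these rows gives C v = 0, solved by the right singular
    vector of C with the smallest singular value.
    """
    rows = []
    for A, d in zip(proj_mats, line_dirs):
        n = np.array([-d[1], d[0]])   # 2D normal of the observed line
        rows.append(n @ A)            # one linear constraint on v
    C = np.vstack(rows)
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                        # null-space direction of C
    return v / np.linalg.norm(v)
```

Two views already determine the direction generically; using more views averages out measurement noise in the same least-squares framework. The recovered direction is defined up to sign.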

Second, to increase the flexible reusability of the visual perception module across different skills, a contour-primitives-of-interest extraction method is proposed. The method allows the user to configure a contour-primitive template for a new object. The object pose in the image is searched using the criteria of shape-matching ratio and weighted contrast score. Given the matched object pose, edge points on the contour primitives are extracted according to the local normal-direction derivative. Finally, the edge points are used to fit the exact geometric features, and also to compute a contour sharpness as a focus measure. The method proved robust and real-time under several non-ideal imaging conditions. In addition, to realize object-image segmentation learning with few labeled samples, a segmentation model based on convolutional neural networks is designed, combining unsupervised feature learning with supervised segmentation learning. The segmentation result can serve as a limited search region in feature extraction, further increasing the robustness and real-time performance of image feature extraction.
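The contour-sharpness focus measure can be illustrated with a toy version: sample the image on both sides of each extracted edge point along its local normal and average the intensity differences, so a defocused (blurred) contour scores lower than an in-focus one. The function name and sampling scheme here are assumptions, a simplified stand-in for the thesis's criterion.

```python
import numpy as np

def contour_sharpness(img, edge_pts, normals, step=2.0):
    """Toy contour-sharpness focus measure: mean absolute intensity
    difference across each edge point along its local normal.

    img:      2D grayscale image (H x W).
    edge_pts: iterable of (x, y) edge-point coordinates.
    normals:  iterable of (nx, ny) unit normals at those points.
    """
    h, w = img.shape
    diffs = []
    for (x, y), (nx, ny) in zip(edge_pts, normals):
        # sample one point on each side of the contour
        xa, ya = int(round(x + step * nx)), int(round(y + step * ny))
        xb, yb = int(round(x - step * nx)), int(round(y - step * ny))
        if 0 <= xa < w and 0 <= ya < h and 0 <= xb < w and 0 <= yb < h:
            diffs.append(abs(float(img[ya, xa]) - float(img[yb, xb])))
    return float(np.mean(diffs))
```

Because the measure is evaluated only at the extracted contour points, it reflects the focus of the part of interest rather than of the whole image, which is the property that makes it usable for autofocusing.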

Third, an anisotropic-stiffness model learning method is proposed for compliant insertion control, aiming to improve the insertion skill's adaptability to anisotropic stiffness. Using data collected from historical assembly processes, a clustering algorithm identifies whether multiple stiffness classes exist. A support-vector-machine classifier is then used to learn the relationship between contact direction and stiffness. Finally, the learned anisotropic stiffness model is applied in the radial force controller to adjust the controller's gain according to the identified contact direction. Experiments showed that the proposed method provided better compliance and efficiency when stiffness anisotropy existed, with the radial force limited to within 100 mN.
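The learning pipeline can be sketched as: cluster the recorded stiffness values into classes, then learn a direction-to-class mapping, then use the predicted class to scale the force-controller gain. For brevity this sketch swaps the thesis's SVM classifier for a nearest-centroid rule on (cos, sin) direction features, and uses a plain two-means clustering; all names and thresholds are assumptions.

```python
import numpy as np

def learn_stiffness_classes(angles, stiffness, iters=20):
    """Cluster recorded stiffness values into two classes and learn a
    nearest-centroid map from contact direction to stiffness class.
    Returns (direction centroids, per-class mean stiffness)."""
    s = np.asarray(stiffness, float)
    c = np.array([s.min(), s.max()])           # init two 1D cluster centers
    for _ in range(iters):                     # plain 1D two-means
        labels = np.abs(s[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([s[labels == k].mean() for k in (0, 1)])
    feats = np.column_stack([np.cos(angles), np.sin(angles)])
    centroids = np.array([feats[labels == k].mean(axis=0) for k in (0, 1)])
    return centroids, c

def radial_gain(centroids, class_stiffness, angle, base_gain=1.0):
    """Reduce the radial-force controller gain where contact is stiffer."""
    f = np.array([np.cos(angle), np.sin(angle)])
    k = np.linalg.norm(centroids - f, axis=1).argmin()
    return base_gain / class_stiffness[k]
```

The key point carried over from the text is the division of labor: clustering decides *whether* anisotropy exists, the classifier decides *which* stiffness applies at the current contact direction, and the controller only consumes the resulting gain.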

Fourth, a skill-learning approach is proposed for the precision assembly robot, aiming at efficient skill transfer from teacher to robot through a few demonstrations. The framework is designed for skills that involve multiple controllers and procedures. A complex skill is segmented into an action sequence according to changes in the teacher's selective attention over the system variables. Learning each action amounts to selecting a predefined action class and learning its key parameters from the demonstration data; the action sequence forms a finite state machine. To execute an action, an action instance is first generated from the action class and the learned parameters. At each time step, the action state is then updated by the Gaussian-mixture-model-based dynamical system and sent to the lower-level controller as the reference signal, so that the action state evolves toward the target with a specified motion pattern. In this work, the action classes of image-feature-guided motion and force-constrained motion are proposed, based on multi-camera microscopic vision and three-dimensional force feedback, respectively, and can be reused in different skills. The proposed approach was validated in two experiments: sleeve-cavity assembly and coil-cylinder assembly.
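The per-timestep execution loop described above can be sketched as follows. Here the learned GMM-based dynamical system is replaced by a simpler stand-in with the same two properties the text emphasizes: the velocity field is state-dependent (a Gaussian-basis blend of gains, giving a nonlinear motion pattern), and every term points at the target, so the state converges. The function and parameter names are assumptions, not the thesis's notation.

```python
import numpy as np

def run_action(x0, target, centers, widths, gains, dt=0.01, steps=2000):
    """Execute one action: at each time step the action state is
    updated by a state-dependent dynamical system; each intermediate
    state would be sent to the low-level controller as a reference.

    Velocity model: x_dot = g(x) * (target - x), where
    g(x) = sum_k h_k(x) * gains[k] is a normalized Gaussian-basis
    blend of positive gains, so the state converges to the target.
    """
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        d2 = np.sum((x - centers) ** 2, axis=1)      # sq. dist. to bases
        h = np.exp(-d2 / (2.0 * widths ** 2)) + 1e-12
        h /= h.sum()                                 # basis activations
        g = float(h @ gains)                         # blended gain, > 0
        x = x + dt * g * (target - x)                # Euler update
        traj.append(x.copy())                        # reference signal
    return np.array(traj)
```

In the full framework, the centers, widths, and gains would come from the demonstration data, and the reference at each step would drive either the visual servoing controller or the compliant force controller, depending on the action class.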

Finally, the conclusion and future outlook of the research work are presented.

Pages: 154
Language: Chinese (中文)
Document Type: Dissertation (学位论文)
Identifier: http://ir.ia.ac.cn/handle/173211/23859
Collection: 毕业生_博士学位论文
Corresponding Author: 秦方博 (Qin Fangbo)
Recommended Citation
GB/T 7714
秦方博. 微装配机器人技能学习方法及应用研究[D]. 中国科学院自动化研究所. 中国科学院大学,2019.
Files in This Item:
File Name/Size: 博士学位论文_秦方博0615.pdf (7154 KB) | DocType: 学位论文 (dissertation) | Access: 暂不开放 (not yet open) | License: CC BY-NC-SA | Application Full Text

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.