Research on Visual Inspection and Localization Methods in Robot Autonomous Flexible Manipulation (机器人自主柔性作业中的视觉检测与定位方法研究)
Author: 黄一锟
Date: 2020-05
Pages: 78
Degree Type: Master's
Chinese Abstract

Robot manipulation is widely used in many manufacturing fields, such as electronics, medical devices, automobiles, and shipbuilding. Owing to unstructured and changeable working environments, the diversity of workpieces, and the complexity of manufacturing processes, traditional robot manipulation technology represented by teach-and-playback can hardly meet the requirements. Robot manipulation technology that is highly adaptive and intelligent with respect to the environment, the workpiece, and the process, that is, autonomous flexible manipulation robot technology and its systems, urgently needs to be studied.

Visual inspection and localization are the core of robot autonomous flexible manipulation. Researchers have proposed various visual inspection and localization algorithms for robotic manipulation tasks, but owing to factors such as complex environments and variable objects, none of them performs well on cases such as the tubular solder joints in refrigerator compressor compartments. This thesis studies the visual inspection problem in robot flexible manipulation, taking tubular solder joint inspection as a concrete example. The main work is as follows:

(1) In view of the lack of a mature platform and standard datasets in this field, an autonomous flexible manipulation robot system for compressor-compartment solder joint leak detection is developed, and a corresponding visual inspection dataset is built. The robot platform built for tubular solder joint inspection consists of six parts: a light source, a camera and lens, a high-precision six-degree-of-freedom robot arm, a high-precision one-dimensional force sensor, a suction-gun leak detector, and an industrial computer with display. Using this platform, 699 images of refrigerator compressor compartments were collected, and more than 6,000 tubular solder joint objects were manually annotated to form the visual inspection dataset. Finally, for the solder joint detection problem, a multi-object detection evaluation method better suited to harsh industrial conditions is proposed, and its relationship with precision is proved.

(2) Addressing the characteristics of tubular solder joints, namely their small size, low contrast, variable shape, and complex surroundings, a new deep-learning-based real-time visual inspection and localization method for small objects is proposed, named Small-Object YOLOv3 (SOYOLOv3). A visual inspection and localization algorithm that meets practical application requirements must balance precision, speed, and model size. Compared with YOLOv3, the current mainstream real-time visual detection method, SOYOLOv3 performs outstandingly on the tubular solder joint dataset: it is 4.43% faster in detection time, 1.37% higher in average precision, and achieves a model compression rate of 92.77% (YOLOv3: 61.52 M; SOYOLOv3: 4.45 M).

(3) Addressing the difficulty of combining human prior knowledge with deep learning, a new visual inspection and localization method based on deep learning and a prior-knowledge model is proposed. For the solder joint detection problem, experiments were conducted on several deep models, comparing detection results with and without the prior knowledge. The experiments show that applying the prior knowledge to different models improves precision by 0.2% to 0.5% and improves the multi-object detection evaluation metric by more than 4%.

Finally, the research results of this thesis are summarized, and follow-up research work is analyzed and discussed.

English Abstract

Robot technologies are widely used in many fields, such as electronics, medical devices, the military, automobile manufacturing, and shipbuilding. Industrial intelligence is currently being studied by manufacturing companies, robot manufacturers, research institutions, and universities. The traditional robot technology represented by teach-and-playback can no longer meet current needs. Therefore, robot autonomous flexible manipulation methods urgently need to be studied for industrial production.

Visual inspection and localization are the core of robot autonomous flexible manipulation. Researchers have proposed a variety of detection systems and methods for different problems in industrial vision scenarios. In the field of solder joint inspection, especially for small and variable tubular solder joints, urgent application problems remain unsolved: the lack of a detection system, the lack of standard datasets, the small size of the objects, the complex surroundings of the solder joints, and other factors. This thesis focuses on the visual inspection problem in robot autonomous flexible manipulation and takes tubular solder joint inspection as a specific example for the research. The main contributions are summarized as follows.

(1) In view of the lack of a detection system and standard datasets in this field, we construct a visual inspection system for tubular solder joints and a dataset for tubular solder joint inspection. The visual inspection system is composed of six parts: a light source, a camera and lens, a high-precision six-degree-of-freedom robot arm, a high-precision one-dimensional force sensor, a suction-gun leak detector, and an industrial computer with display. The dataset contains 699 images of refrigerator compressor compartments, in which more than 6,000 tubular solder joint objects are manually annotated. For the tubular solder joint detection task, a multi-object detection evaluation method (MODE) that is better suited to severe industrial conditions is proposed, and the relationship between MODE and precision is proved.
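The MODE metric itself is not defined in this abstract, but any evaluation of this kind rests on standard per-image matching between predicted and annotated boxes. The sketch below is a minimal illustration of IoU-based greedy matching and the resulting precision/recall counts against which a metric like MODE would typically be compared; the function names (`iou`, `evaluate_image`) and the 0.5 IoU threshold are our own assumptions, not the thesis implementation.

```python
# Minimal sketch: IoU-based matching of predictions to ground-truth boxes.
# Names and thresholds are illustrative assumptions, not the thesis code.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def evaluate_image(preds: List[Box], gts: List[Box], thr: float = 0.5):
    """Greedy one-to-one matching; returns (true positives, false positives, false negatives)."""
    matched = set()
    tp = 0
    for p in preds:
        best_j, best_iou = -1, thr
        for j, g in enumerate(gts):
            if j in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp, fp, fn

if __name__ == "__main__":
    preds = [(10, 10, 30, 30), (50, 50, 70, 70)]
    gts = [(12, 11, 31, 29)]
    tp, fp, fn = evaluate_image(preds, gts)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    print(f"precision={precision:.2f}, recall={recall:.2f}")
```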

(2) Focusing on the detection of small tubular solder joints, we propose a novel detection method based on deep learning, called Small-Object YOLOv3 (SOYOLOv3). A visual detection method should strike a good tradeoff among precision, inference time, and the number of model parameters. Compared with the current real-time visual detection method YOLOv3, SOYOLOv3 performs outstandingly on the tubular solder joint dataset: it is 4.43% faster, achieves 1.37% higher AP, and is 92.77% lighter (YOLOv3: 61.52 M vs. SOYOLOv3: 4.45 M parameters).
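To make the reported compression figure concrete, the short calculation below (a sketch, not code from the thesis) reproduces the 92.77% number directly from the two parameter counts quoted above.

```python
# Reproduce the quoted compression rate from the parameter counts given above:
# YOLOv3 has 61.52 M parameters, SOYOLOv3 has 4.45 M.
yolov3_params = 61.52e6
soyolov3_params = 4.45e6

compression_rate = (yolov3_params - soyolov3_params) / yolov3_params
print(f"compression rate: {compression_rate:.2%}")                 # -> 92.77%
print(f"parameters kept:  {soyolov3_params / yolov3_params:.2%}")  # -> ~7.23%
```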

(3) Aiming at the problem that human prior knowledge is difficult to combine with deep learning methods, we propose a novel detection framework based on deep learning with a prior-knowledge model. We compare multiple detection methods with and without the prior-knowledge model. Experiments show that applying the prior knowledge to different models improves AP by 0.14% to 0.5% and improves the multi-object detection evaluation metric by more than 4%.
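The abstract does not describe the prior-knowledge model itself, so the snippet below is only a hypothetical illustration of one common fusion pattern: re-weighting network confidences with a prior score before thresholding. `fuse_with_prior`, `example_prior`, the blending weight, and the size-based prior are all assumptions made for illustration, not the framework proposed in the thesis.

```python
# Hypothetical sketch: blend detector confidence with a prior-knowledge score.
# All names, weights, and the toy size prior are illustrative assumptions.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]      # (x_min, y_min, x_max, y_max)
Detection = Tuple[Box, float]                # (box, network confidence)

def fuse_with_prior(
    detections: List[Detection],
    prior_score: Callable[[Box], float],
    weight: float = 0.3,
    threshold: float = 0.5,
) -> List[Detection]:
    """Blend network confidence with a prior score and keep detections above a threshold."""
    fused = []
    for box, conf in detections:
        score = (1.0 - weight) * conf + weight * prior_score(box)
        if score >= threshold:
            fused.append((box, score))
    return fused

def example_prior(box: Box) -> float:
    """Toy prior: tubular solder joints are expected to be small; penalize large boxes."""
    w, h = box[2] - box[0], box[3] - box[1]
    return 1.0 if max(w, h) < 64 else 0.2

if __name__ == "__main__":
    dets = [((10, 10, 40, 40), 0.55), ((0, 0, 300, 300), 0.70)]
    print(fuse_with_prior(dets, example_prior))
```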

Finally, the research results of this thesis are summarized, and follow-up research work is analyzed and discussed.

Keywords: Robot Autonomous Flexible Manipulation; Object Detection; Tubular Solder Joint Detection; Prior-Knowledge Model
Language: Chinese
Funding: National Natural Science Foundation of China [61771471]; National Natural Science Foundation of China (NSFC) [U1613213]; National Natural Science Foundation of China [91748131]
Sub-direction Classification: Intelligent Robotics
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/39603
Collection: 复杂系统认知与决策实验室_先进机器人 (Complex Systems Cognition and Decision Laboratory, Advanced Robotics)
Recommended Citation (GB/T 7714):
黄一锟. 机器人自主柔性作业中的视觉检测与定位方法研究[D]. 中国科学院自动化研究所, 2020.
Files in This Item
File Name/Size: 机器人自主柔性作业中的视觉检测与定位方法 (7705 KB); Document Type: Thesis; Access: Open Access; License: CC BY-NC-SA