|Place of Conferral||No. 19 Yuquan Road, Shijingshan District, Beijing|
|Keyword||Deep Learning; Reinforcement Learning; Object Detection; Object Tracking; Landmark Detection; Monocular Visual Measurement; Autonomous Docking Control; Aerial Refueling|
In recent years, with the rapid development of vision-based object detection and tracking technologies, vision-based intelligent systems have been widely applied in many fields. In aerospace, vision-based autonomous control systems are receiving increasing attention, and a vision-based autonomous docking system for aerial refueling is of great significance for the development of national defense capabilities. The main problem such a system must solve is how to detect, track, and measure the position of the drogue quickly and accurately under complex aerial conditions. This dissertation studies object detection and tracking based on deep learning, together with vision-based position measurement, for probe-and-drogue aerial refueling. The main work and contributions of this dissertation are as follows.
(1) An object detection network based on multiple receptive fields and weakly-supervised segmentation is proposed to improve the accuracy of one-stage object detection. The network consists of two parts: a detection branch built on a multiple-receptive-fields module and a small-object-focusing weakly-supervised segmentation branch. In the detection branch, multi-scale prediction layers are constructed from features at different scales. To further improve detection accuracy, the multiple-receptive-fields module is added to each prediction layer so that different spatial positions of the object and its adjacent background are attended to with different weights. In the weakly-supervised segmentation branch, a small-object-focusing segmentation module is designed, which takes the segmentation of small objects as an auxiliary task to improve small-object detection accuracy. The network is trained end-to-end in a multi-task manner, with supervision on both the detection and segmentation branches. Its effectiveness is verified on both the aerial refueling drogue detection dataset and a general object detection dataset.
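The multi-task training described above can be sketched as a weighted sum of the detection loss and an auxiliary weakly-supervised segmentation loss. This is a minimal illustration of the idea, not the thesis implementation; the function names and the weight value are assumptions.

```python
import math

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean binary cross-entropy over per-pixel probabilities in [0, 1]."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)

def multi_task_loss(det_loss, seg_pred, seg_weak_mask, seg_weight=0.5):
    """Detection loss plus a weighted weakly-supervised segmentation loss.

    seg_weak_mask is a weak pseudo-label (e.g. derived from small-object
    bounding boxes) rather than a pixel-accurate annotation.
    """
    return det_loss + seg_weight * binary_cross_entropy(seg_pred, seg_weak_mask)
```

Because the segmentation branch only provides an auxiliary gradient signal, lowering `seg_weight` to zero recovers plain detection training.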
(2) A drogue detection and tracking method based on deep learning and reinforcement learning is proposed to overcome the difficulties caused by image variations. Under complex aerial conditions (such as strong-light interference, airflow disturbance, occlusion, and changes in the object's scale and attitude), the appearance of the drogue varies markedly, which makes it difficult to detect and track. To address this, a method combining detection and tracking is proposed. The detector, designed on the YOLO architecture, quickly detects the drogue and obtains its rough position. The tracker, based on reinforcement learning, adjusts the bounding box to increase the intersection over union between the tracking box and the ground truth, thereby improving tracking accuracy. Finally, the proposed method is verified on a ground simulation platform for aerial refueling.
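The core quantities behind the reinforcement-learning tracker can be sketched as follows: the agent applies discrete actions that nudge the bounding box, seeking a higher intersection over union (IoU) with the true object region. The particular action set and the 2-pixel step size are illustrative assumptions, not the thesis design.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Discrete box adjustments (dx1, dy1, dx2, dy2); step size is illustrative.
ACTIONS = {
    "left":  (-2, 0, -2, 0), "right": (2, 0, 2, 0),
    "up":    (0, -2, 0, -2), "down":  (0, 2, 0, 2),
    "stop":  (0, 0, 0, 0),
}

def apply_action(box, name):
    """Shift the box by the offsets of the named action."""
    return tuple(b + d for b, d in zip(box, ACTIONS[name]))
```

In training, the reward for an action can be defined as the change in IoU it produces, so the learned policy moves the box toward the ground truth.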
(3) A monocular visual measurement method based on a model of the aerial refueling drogue is proposed for real-time 3D measurement of the drogue's position. The method consists of two parts: a multi-task parallel deep convolutional neural network for drogue landmark detection, and a position measurement model based on the drogue's double ellipses. The drogue is composed of an inner black center part, canopy ribs, and an outer canopy. According to the drogue's geometric characteristics, the landmark detection model detects 16 landmarks on the inner black center part and 32 landmarks on the outer canopy; the detected landmarks serve as the image features for visual measurement. Because the projections of the inner center part and the outer canopy onto the image plane are circles or ellipses, a double-ellipse-based visual measurement model is designed, which improves the accuracy of the drogue position measurement.
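The geometric idea behind the double-ellipse model can be sketched with a pinhole camera: a circle of known radius R at depth Z projects (approximately, for small tilt) to an ellipse whose semi-major axis in pixels is a ≈ f·R/Z, so each of the drogue's two rings yields a depth estimate that can be fused. This is a simplified sketch under those assumptions; the focal length and radii below are illustrative values, not from the thesis.

```python
def depth_from_ellipse(f_px, radius_m, semi_major_px):
    """Depth of a circle of known radius from its projected ellipse.

    Pinhole approximation: semi_major_px ~= f_px * radius_m / depth.
    """
    return f_px * radius_m / semi_major_px

def fused_depth(f_px, inner_r, outer_r, inner_a, outer_a):
    """Average the depth estimates from the inner and outer ellipses."""
    z_inner = depth_from_ellipse(f_px, inner_r, inner_a)
    z_outer = depth_from_ellipse(f_px, outer_r, outer_a)
    return 0.5 * (z_inner + z_outer)
```

In practice the ellipses would be fitted to the 16 inner and 32 outer detected landmarks before this depth relation is applied.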
(4) An experimental verification platform with two robots is built for the autonomous docking process of aerial refueling, enabling effective verification of the visual detection, tracking, and measurement methods. The platform consists of two robots and an embedded device: the robots simulate the tanker and receiver aircraft, respectively, while an Nvidia Jetson TX2 serves as the computing platform on which the proposed drogue detection, tracking, position measurement, and visual control algorithms are deployed. The experimental results verify the effectiveness of the proposed methods.
|孙思洋 (Sun Siyang). 基于深度学习的自主空中加油目标检测与跟踪研究 (Research on Object Detection and Tracking for Autonomous Aerial Refueling Based on Deep Learning) [D]. Beijing: 中国科学院大学 (University of Chinese Academy of Sciences), 2020.|
|Files in This Item:|
|博士论文-孙思洋20200308最终版明（19199KB）||Dissertation||Open Access||CC BY-NC-SA||Application Full Text|
|Similar articles in Google Scholar|
|Similar articles in Baidu academic|
|Similar articles in Bing Scholar|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.