|Place of Conferral||Beijing|
|Keywords||offline supervised learning; online learning; visual detection; visual tracking; detector and tracker fusion mechanism; autonomous docking control; autonomous aerial refueling|
With the development of machine vision and image processing techniques, visual object detection, tracking, and measurement methods have been widely applied. In the aerospace field, detection, tracking, and measurement of aerial targets are increasingly valued; in particular, how to accomplish object detection, tracking, and measurement under complex aerial conditions has become an important research topic. Vision-based autonomous aerial refueling can simplify the pilot's task during refueling and reduce the cost of flight training. At the same time, the payload capacity and flight endurance of unmanned aerial vehicles can be improved effectively with the aid of autonomous aerial refueling. Accurate and fast object detection and tracking methods are required to provide reliable inputs for a robust vision-based autonomous navigation system. This thesis focuses on visual detection and tracking strategies for autonomous aerial refueling under complex conditions. The main work and contributions are as follows:
(1) An object detection and tracking method based on the object's shape is proposed. The drogue of a probe-and-drogue refueling system consists of three parts: an inner dark part, umbrella ribs, and umbrella fabric. The image of the inner dark part is insensitive to illumination changes, and its contour is salient in the image. Image-processing rules are designed according to shape prior knowledge of the inner dark part. A shape-based detection algorithm is proposed to detect the contour of the inner dark part efficiently, and a shape-based particle filter tracking algorithm is proposed to track it efficiently.
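The shape-prior idea in (1) can be sketched as a minimal particle filter: the state is the inner dark part's circle (row, column, radius), and each particle is weighted by how much darker the disk is than its surrounding ring. This is an illustrative NumPy reconstruction, not the thesis's exact algorithm; the likelihood form and noise parameters are assumptions.

```python
import numpy as np

def likelihood(img, state):
    """Weight a particle by the shape prior: the drogue's inner part
    should appear as a dark disk against a brighter surrounding ring."""
    cy, cx, r = state
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    inner = img[d2 <= r ** 2]
    ring = img[(d2 > r ** 2) & (d2 <= (1.5 * r) ** 2)]
    if inner.size == 0 or ring.size == 0:
        return 1e-9                                 # degenerate particle
    contrast = ring.mean() - inner.mean()           # dark inside, bright outside
    return np.exp(contrast / 32.0)

def particle_filter_step(img, particles, rng, sigma=(3.0, 3.0, 1.0)):
    """One predict-weight-resample cycle over (row, col, radius) particles."""
    particles = particles + rng.normal(0, sigma, particles.shape)  # motion model
    weights = np.array([likelihood(img, p) for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]                      # resample
    return particles, particles.mean(axis=0)        # state estimate
```

On a synthetic frame with a dark disk, a few such cycles pull the particle cloud onto the disk's centre and radius.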
(2) Robust drogue detection methods based on offline supervised learning are proposed. Aerial refueling involves complex conditions such as uneven brightness and partial saturation of the object image caused by illumination changes, changes of the drogue's position and attitude caused by airflow disturbance, scale changes caused by relative motion, and partial or full occlusion of the drogue by the probe. Two visual detection methods based on offline supervised learning are proposed to detect the drogue effectively under these conditions. In the first, an SVM classifier with a suitable image feature is trained to detect the drogue, and a low-dimensional normalized robust local binary pattern (LBP) feature is proposed to reduce detection time without losing accuracy. In the second, a deep convolutional neural network directly predicts the category and position of the object; with the aid of GPU parallel computing, it detects the object quickly and robustly.
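To illustrate how an LBP descriptor is made low-dimensional and normalized, the sketch below computes the standard 8-neighbour LBP with the "uniform" mapping, which collapses the 256 raw codes into 59 bins; the thesis's robust normalized LBP variant differs in its details, so this is only the textbook baseline.

```python
import numpy as np

def uniform_lbp_histogram(img):
    """Standard 8-neighbour LBP with the 'uniform' mapping: patterns with
    at most two circular 0/1 transitions get their own bin (58 of them),
    all other patterns share one bin, giving a 59-D L1-normalized feature."""
    img = np.asarray(img).astype(np.int32)
    c = img[1:-1, 1:-1]                               # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):           # build 8-bit codes
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit

    def transitions(v):                               # circular bit transitions
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [v for v in range(256) if transitions(v) <= 2]   # 58 codes
    mapping = np.full(256, len(uniform))              # non-uniform -> last bin
    for k, v in enumerate(uniform):
        mapping[v] = k
    hist = np.bincount(mapping[code].ravel(), minlength=len(uniform) + 1)
    return hist / hist.sum()                          # L1-normalized, 59-D
```

Normalization makes the histogram comparable across window sizes, which is what lets a single SVM score candidate windows at different scales.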
(3) A robust visual tracking method based on an online state-based structured SVM combined with incremental PCA is proposed. The object's appearance changes during tracking because of illumination changes, occlusions, scale changes, posture changes, and so on, which can lead to tracking failure. A robust tracker is designed by combining a discriminative model with a generative model: incremental PCA serves as the generative model to update the object appearance model, and the state-based structured SVM serves as the discriminative model to distinguish the object from the background. A state is used to describe the object in the image space, and the object's state is predicted directly during tracking. The object's virtual state is proposed to combine the discriminative model with the generative model effectively.
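A minimal sketch of the generative-model update, following the standard incremental SVD used in IVT-style appearance models (the thesis's exact update may differ): a batch of new appearance samples is merged into the existing eigenbasis, including the usual correction column for the shifting mean.

```python
import numpy as np

def ipca_update(mean, U, S, n, X_new, k=8):
    """Merge new appearance samples (columns of X_new) into an eigenbasis
    (mean, U, S) built from n earlier samples, keeping the top k components."""
    m = X_new.shape[1]
    mean_new = X_new.mean(axis=1)
    mean_all = (n * mean + m * mean_new) / (n + m)
    # centered new data plus a correction column for the shifting mean
    B = np.hstack([X_new - mean_new[:, None],
                   np.sqrt(n * m / (n + m)) * (mean_new - mean)[:, None]])
    # joint system of old scaled basis and new data, re-factored by SVD
    R = np.hstack([U * S, B])
    U2, S2, _ = np.linalg.svd(R, full_matrices=False)
    return mean_all, U2[:, :k], S2[:k], n + m
```

Because each update costs an SVD on a small matrix rather than a re-decomposition of all past frames, the appearance model stays cheap enough for online tracking.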
(4) A robust visual detection-learning-tracking framework is proposed to combine the detector with the tracker effectively. The detector's advantage is that it can judge globally whether an object is present in an image and then locate it; its disadvantage is that the classifier must be learned offline and detection is relatively slow. The tracker's advantage is fast online learning and location prediction, because it maintains a small number of representative training samples and a local search area; under complex conditions such as severe occlusions and illumination changes, however, incorrect updates of the appearance model and the classifier cause drift and tracking failure. In this thesis, the detector evaluates the positive support vectors in the tracker's discriminative model, and the unreliable positive support vectors are identified in the T-classifier. An online data mining method then mines reliable positive examples from the observation history during learning to replace the unreliable positive support vectors in the tracker. The tracker is rectified during this replacement, which reduces drift. The tracked object is also evaluated by the detector to determine whether drift has become so serious that the object must be re-detected globally.
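The rectification step can be sketched as follows; `rectify_tracker`, `detector_score`, the threshold `tau`, and the sample representation are all illustrative assumptions standing in for the offline detector's confidence and the tracker's support-vector set, not the thesis's actual interfaces.

```python
import numpy as np

def rectify_tracker(pos_svs, history, detector_score, tau=0.5):
    """Detector-guided cleanup of the tracker's discriminative model:
    the offline detector scores each positive support vector; those
    scoring below tau are deemed unreliable and replaced by the
    best-scoring positives mined from the observation history."""
    scores = np.array([detector_score(x) for x in pos_svs])
    unreliable = np.where(scores < tau)[0]
    if unreliable.size == 0:
        return pos_svs                        # model already consistent
    hist_scores = np.array([detector_score(x) for x in history])
    best = np.argsort(hist_scores)[::-1][:unreliable.size]
    rectified = list(pos_svs)
    for i, j in zip(unreliable, best):
        rectified[i] = history[j]             # online data-mining replacement
    return rectified
```

The same detector score, applied to the tracker's current output, can serve as the trigger for falling back to a global re-detection when drift is severe.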
(5) A ground simulation platform for aerial refueling is built. A simplified measurement model based on monocular vision is proposed to measure the position of the drogue target in Cartesian space, and the camera calibration method is given. The proposed visual detection, tracking, and measurement methods are verified on the platform. A position-based visual servo control method is used to track the drogue target in Cartesian space, and the autonomous docking process of aerial refueling is simulated.
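A minimal pinhole-camera sketch of such a simplified monocular measurement: with the drogue's physical diameter known, its apparent diameter in pixels fixes the depth, and the detected image centre then fixes the lateral offsets. The symbols (fx, fy, cx, cy for the intrinsics, D for the real diameter) are the usual pinhole parameters, not values taken from the thesis.

```python
def drogue_position(u, v, d_px, fx, fy, cx, cy, D):
    """Back-project the drogue centre to Cartesian camera coordinates.
    (u, v): detected image centre in pixels; d_px: apparent diameter in
    pixels; fx, fy, cx, cy: camera intrinsics; D: real diameter in metres."""
    Z = fx * D / d_px            # depth from apparent size
    X = (u - cx) * Z / fx        # lateral offset
    Y = (v - cy) * Z / fy        # vertical offset
    return X, Y, Z
```

This (X, Y, Z) estimate is exactly the kind of Cartesian-space input a position-based visual servo loop consumes when driving the probe toward the drogue.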
|First Author Affiliation||Institute of Automation, Chinese Academy of Sciences|
|Yin Yingjie. Research on Visual Detection and Tracking Strategies for Autonomous Aerial Refueling [D]. Beijing: University of Chinese Academy of Sciences, 2016.|
|Files in This Item:|
|尹英杰博士论文_final.pdf (9463KB)||Dissertation||Not open access||CC BY-NC-SA||Application Full Text|
|Similar articles in Google Scholar|
|Similar articles in Baidu academic|
|Similar articles in Bing Scholar|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.