CASIA OpenIR > Graduates > Master's Theses
大视野监视与凝视测量系统研究 (Research on a Large-Field-of-View Surveillance and Gaze Measurement System)
魏青晨
2023-05
Pages: 84
Subtype: Master's thesis
Abstract

With the development of industrial automation and intelligent manufacturing, smart factories have a growing demand for vision-guided measurement devices. Tasks such as target screening, sorting, and transfer on production lines require handling the uncertainty of targets under offline conditions and obtaining each target's identity and position. Existing vision devices still leave considerable room for improvement in intelligence and accuracy; in particular, it is difficult to satisfy the two key indicators of a large field of view and high precision at the same time. Traditional stereo vision devices require the cameras to maintain a relatively fixed spatial relationship: enlarging the field of view lowers pixel density, multi-camera calibration is difficult, and system complexity raises costs. In addition, bionic eyes, a common class of active vision devices, offer gaze and tracking capabilities, but their gaze field of view is small and their search efficiency is low. This thesis therefore designs a large-field-of-view surveillance and gaze measurement device that aims to balance a large field of view with high measurement accuracy. Through its design and algorithms, the device outperforms other existing devices with the same field of view in measurement accuracy and "activeness", and can effectively detect, track, and precisely measure targets in industrial scenes, providing smart factories with a more reliable and efficient visual guidance solution.

The main work and contributions of this thesis are:

(1) An active stereo vision measurement device is designed and built, consisting of a large-field-of-view surveillance camera and an active gaze camera. A 0.4-megapixel camera with an 8° field-of-view telephoto lens is mounted on a gimbal with pitch and yaw degrees of freedom and serves as the active gaze camera for precise observation. A 6-megapixel camera with a 150° field-of-view wide-angle lens is fixed on the gimbal base and serves as the large-field-of-view surveillance camera for wide-range monitoring. The former's field of view is far smaller than the latter's, so despite its lower digital resolution it has much higher optical (angular) resolution. By combining two cameras with very different imaging characteristics, the system balances the inherent conflict between a large field of view and high precision, achieving a good accuracy balance in the X, Y, and Z directions.

(2) A joint system calibration method is proposed. Unlike existing systems modeled with additional geometric constraints, such as the gimbal rotation axes passing through the optical center or the imaging plane being parallel to a rotation axis, the proposed method abandons these geometric assumptions and treats every parameter in the pose transformation matrix as a variable to be solved. In this system, the errors produced at each stage are ultimately reflected in the pose deviation of an external calibration board as observed by the two cameras. Mathematically, the deviation between the designed and actual system parameters appears as the difference between the control-point positions computed from the design parameters and the observed positions. To achieve high-precision joint calibration of the whole system, this thesis proposes an error-driven parameter optimization method that calibrates multiple error sources simultaneously through a unified model. The method lowers the requirements on mechanical design and manufacturing precision, reduces the assumptions in the system model, and solves for every parameter of the pose transformation matrix via numerical optimization, correcting multiple errors at once.

(3) A target lock-on tracking algorithm is designed, and on this basis an image-Jacobian-based target tracking control strategy is proposed. Given the complexity of the system model, the image Jacobian method is adopted because it precisely describes the relationship between small changes of image feature points and the camera's motion parameters. The thesis derives the relationship between small changes in the offset of the target's image coordinates from the image center of the active gaze camera and the rotation of the gimbal motors, computes the image Jacobian matrix, and uses it to predict how small changes in motor rotation angles will shift the target's position in the image at the current camera pose, enabling precise tracking. A PID controller then keeps the target at the center of the field of view, effectively solving the target tracking problem of the active gaze camera.

Other Abstract

With the development of industrial automation and intelligent manufacturing, the demand for vision-guided measurement devices in smart factories is increasing. For tasks such as target screening, sorting, and transfer on the assembly line, it is necessary to handle the uncertainty of targets in offline situations and to obtain the identity and location of each target. Existing vision devices still have considerable room for improvement in intelligence and accuracy, especially in simultaneously achieving the two key indicators of a large field of view and high precision. Traditional stereo vision devices require the cameras to maintain a relatively fixed spatial relationship, which leads to reduced pixel density as the field of view is enlarged, difficult multi-camera calibration, and increased cost from system complexity. Moreover, bionic eyes, a common class of active vision devices, have gaze and tracking capabilities, but their gaze field of view is small and their search efficiency is low. Therefore, this thesis designs a large-field-of-view surveillance and gaze measurement device, aiming to balance a large field of view against high measurement accuracy. Through its design and algorithms, the device performs better in measurement accuracy and "activeness" than other existing devices with the same field of view, and can effectively detect, track, and accurately measure targets in industrial scenes, providing a more reliable and efficient vision guidance scheme for smart factories.

The main work and contributions of this thesis are:

(1) Design and construction of a large-field-of-view surveillance and gaze measurement system. This thesis designs and builds an active visual measurement device composed of a large-field-of-view surveillance camera and an active gaze camera. A 0.4-megapixel camera equipped with an 8° field-of-view telephoto lens is installed on a gimbal with pitch and yaw degrees of freedom, serving as the active gaze camera for precise observation. A 6-megapixel camera equipped with a 150° field-of-view wide-angle lens is fixedly mounted on the gimbal base, serving as the large-field-of-view surveillance camera for wide-range monitoring. The former has a far smaller field of view than the latter, so despite its lower digital resolution it has a much higher optical (angular) resolution. The system uses two cameras with significantly different imaging characteristics to balance the natural conflict between a large field of view and high precision, achieving a good accuracy balance in the X, Y, and Z directions.

(2) A joint calibration method for system errors. Unlike existing systems that are modeled with additional geometric constraints, such as the gimbal rotation axes passing through the camera's optical center or the imaging plane being parallel to a rotation axis, the proposed method abandons these geometric assumptions and regards each parameter of the pose transformation matrix as a variable to be solved. In this system, the errors produced at each stage are ultimately reflected in the deviation between the poses of an external calibration board as observed by the two cameras. Mathematically, the deviation between the designed and actual system parameters can be expressed as the difference between the control-point positions computed from the design parameters and the observed positions. To achieve high-precision joint calibration of the entire system, this thesis proposes an error-driven parameter optimization method that calibrates multiple error sources simultaneously through a unified model. The method lowers the requirements on mechanical design and manufacturing precision, reduces the assumptions in the system model, and uses numerical optimization to solve for every parameter of the pose transformation matrix, achieving simultaneous correction of multiple errors.

(3) An image-Jacobian-based target tracking control strategy. Given the complexity of the system model, the image Jacobian method is used because it accurately describes the relationship between small changes of image feature points and the camera's motion parameters. This thesis derives the relationship between small changes in the offset of the target's pixel coordinates from the image center of the active gaze camera and the rotation angles of the gimbal motors, computes the image Jacobian matrix, and uses it to predict how small changes in motor rotation angles, at the current camera position and orientation, will affect the target's position in the image, in order to track the target accurately. Then, using PID control, the target point in the image is always kept at the center of the field of view, effectively solving the target tracking problem of the active gaze camera.
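The optical-resolution claim in contribution (1) can be sanity-checked with simple arithmetic. The horizontal pixel counts below are illustrative assumptions, since the abstract gives only total pixel counts and fields of view:

```python
def deg_per_pixel(fov_deg: float, h_pixels: int) -> float:
    """Approximate angular resolution (degrees per pixel) across the horizontal FOV."""
    return fov_deg / h_pixels

# Hypothetical sensor widths: ~720 px for the 0.4 MP gaze camera,
# ~3072 px for the 6 MP surveillance camera.
gaze = deg_per_pixel(8.0, 720)      # telephoto gaze camera
wide = deg_per_pixel(150.0, 3072)   # wide-angle surveillance camera

print(f"gaze camera:         {gaze:.4f} deg/px")
print(f"surveillance camera: {wide:.4f} deg/px")
print(f"angular resolution ratio: {wide / gaze:.1f}x")
```

Under these assumptions the gaze camera resolves roughly four times finer per pixel despite having one-fifteenth the pixel count, which is exactly the trade the dual-camera design exploits.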
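The error-driven calibration idea in contribution (2), treating every pose parameter as a free variable and minimizing the discrepancy between predicted and observed control points, can be sketched on synthetic data. This is a minimal illustration under assumed poses and board layout, not the thesis's actual model; SciPy's `least_squares` stands in for whatever optimizer the thesis uses:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Synthetic calibration-board control points in the board frame (hypothetical layout).
board = 0.05 * np.array([[x, y, 0.0] for x in range(4) for y in range(3)])

def transform(params, pts):
    """Rigid transform with params = (rx, ry, rz, tx, ty, tz).
    No geometric constraints are imposed: all six pose parameters are free."""
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    return pts @ R.T + params[3:]

true_params = np.array([0.02, -0.01, 0.03, 0.10, -0.05, 0.60])  # as-built pose
design_params = np.array([0.0, 0.0, 0.0, 0.10, -0.05, 0.60])    # nominal design

observed = transform(true_params, board)  # control points "seen" by the cameras

def residual(params):
    # Error-driven objective: predicted minus observed control-point positions.
    return (transform(params, board) - observed).ravel()

sol = least_squares(residual, design_params)  # start from the design parameters
print(np.round(sol.x, 4))
```

Starting from the nominal design parameters and driving the control-point residual to zero recovers the as-built pose; the thesis applies the same idea jointly across the surveillance camera, the gimbal axes, and the gaze camera.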
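Contribution (3) pairs an image Jacobian with PID control to keep the target at the image center. The sketch below uses a small-angle pinhole model with a purely illustrative focal length and gains; the thesis derives its Jacobian from the full calibrated system model:

```python
import numpy as np

f = 2500.0  # hypothetical focal length (pixels) of the narrow-FOV gaze camera

# Small-angle image Jacobian: pixel offset ≈ J @ [d_yaw, d_pitch] (radians).
J = np.array([[f, 0.0], [0.0, f]])
J_inv = np.linalg.inv(J)

def pixel_error(cam, target):
    """Offset of the target from the image center under a small-angle model."""
    return J @ (target - cam)

target = np.array([0.05, -0.02])   # fixed target direction (yaw, pitch), radians
cam = np.array([0.0, 0.0])         # gimbal starts at its home position

Kp, Ki, Kd = 0.6, 0.1, 0.05        # illustrative PID gains
integral = np.zeros(2)
prev_err = pixel_error(cam, target)

for _ in range(50):                # closed-loop visual servoing iterations
    err = pixel_error(cam, target)
    integral += err
    command = Kp * err + Ki * integral + Kd * (err - prev_err)
    cam += J_inv @ command         # map the pixel-space command to motor angles
    prev_err = err

print(np.abs(pixel_error(cam, target)).max())  # residual error (pixels)
```

Because the gaze camera's field of view is only 8°, a small-angle Jacobian is a reasonable local approximation; the loop drives the initial offset of over a hundred pixels down to a negligible residual, pinning the target to the image center.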

Keywords: robot vision, active perception, joint calibration of vision systems, visual measurement, target gazing and tracking
Language: Chinese
Sub-direction classification: Robot perception and decision-making
Planning direction of the State Key Laboratory: Multi-dimensional environmental perception
Paper associated data
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/52320
Collection: 毕业生_硕士学位论文 (Graduates / Master's theses)
Recommended Citation (GB/T 7714):
魏青晨. 大视野监视与凝视测量系统研究[D], 2023.
Files in This Item:
File: 魏青晨-大视野监视与凝视测量系统研究.p (12303 KB) | DocType: 学位论文 (thesis) | Access: 限制开放 (restricted) | License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.