Title: 面向汽车主动安全的视觉关键技术研究与实现
Alternative Title: Research and Realization on Key Techniques of Vehicle Active Safety Based on Computer Vision
Author: 孙书娟
Issue Date: 2015-05-21
Degree Type: Master of Engineering
Abstract: In recent years the number of vehicles on the road has grown rapidly, and traffic accidents caused by collisions and lane departures, together with the resulting casualties, have risen year after year. Against this background, vehicle active safety technology has attracted increasing attention from research institutes and automobile manufacturers. Vehicle detection, collision warning, lane detection and lane departure warning based on computer vision are important directions of active safety research. Vision sensors are widely used in vehicle active safety because of their low cost, the portability of the associated algorithms and the richness of the information they capture. In this context, this thesis studies monocular-vision-based vehicle active safety techniques on a mobile platform: images captured by the platform's camera are analysed to realize lane detection, target vehicle detection and object tracking. The work and contributions of the thesis are as follows (short illustrative sketches of each component are given below).
1. A lane detection method based on a straight-line lane model, using the previous and the current frame together with the Hough transform. The current frame is first preprocessed; a region of interest (ROI) is placed around the lane position found in the previous frame; the Hough transform extracts a set of straight lines inside this ROI; and the candidates are scored for similarity against the previous-frame lane to identify the lane in the current frame.
2. A target vehicle detection method combining a cascade classifier with a GentleAdaBoost decision classifier is studied and implemented. Several types of features are extracted from sub-regions of the sample image to form a feature vector, which is passed through the cascaded Haar classifier and a final decision classifier to obtain the result. The Haar cascade selects candidate target regions, handles scale changes, and is fast and robust; the more accurate decision classifier further classifies the candidates, and lane information is finally used to determine the target vehicle directly ahead.
3. An object tracking method based on Gaussian-weighted tracking points. The strategy for choosing tracking points inside the target box of the TLD tracker is improved, alleviating a weakness of the equally weighted forward-backward tracking scheme: as tracking goes on, errors accumulate and the target box keeps growing. Tracking points carry Gaussian weights, so points near the centre of the box have larger weights, appear more often after resampling and are more likely to be trusted and retained, which makes the tracking result more accurate and robust.
4. The porting environment is set up and an Android release is produced. On the PC platform the algorithms are implemented in C++ on top of the OpenCV computer vision library. On Android 4.2 the Java Native Interface allows programs running on the Java virtual machine to call the C++ code: the C++ sources are compiled with the NDK into a .so shared library, which the Android layer calls to analyse each frame.
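The following is a minimal sketch, in C++ with OpenCV, of the ROI-restricted Hough lane detection of item 1. The Canny and Hough thresholds, the ROI masking and the similarity score are illustrative assumptions, not the parameters used in the thesis.

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>
#include <limits>

// A straight-line lane hypothesis in (rho, theta) form, taken from the previous frame.
struct LaneModel { float rho; float theta; };

// Detect the lane in the current frame, with Hough votes restricted to a ROI
// placed around the lane position found in the previous frame.
LaneModel detectLane(const cv::Mat& frame, const LaneModel& prev, const cv::Rect& roiRect)
{
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);   // preprocessing
    cv::Canny(gray, edges, 50, 150);                     // edge map for the Hough transform

    // Keep only edge pixels inside the ROI so the Hough votes come from the
    // neighbourhood of the previous-frame lane (coordinates stay image-global).
    cv::Mat roiMask = cv::Mat::zeros(edges.size(), CV_8UC1);
    roiMask(roiRect).setTo(255);
    cv::bitwise_and(edges, roiMask, edges);

    // Extract a set of straight-line candidates.
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 60);

    // Score every candidate by similarity to the previous-frame lane and keep
    // the closest one (straight-line lane model assumption).
    LaneModel best = prev;
    float bestScore = std::numeric_limits<float>::max();
    for (const cv::Vec2f& l : lines) {
        float score = std::abs(l[0] - prev.rho) + 100.0f * std::abs(l[1] - prev.theta);
        if (score < bestScore) { bestScore = score; best = { l[0], l[1] }; }
    }
    return best;
}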
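Item 2 can be sketched as the two-stage detector below (C++ with the current OpenCV ml API): a Haar cascade proposes candidate regions and a GentleAdaBoost decision classifier verifies them. The model files "vehicle_cascade.xml" and "gentle_boost.xml" and the extractFeatures() helper are hypothetical placeholders; cv::ml::Boost trained with the GENTLE boost type stands in for the decision classifier, and the exact feature types used in the thesis are not shown.

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>
#include <vector>

// Builds the feature vector of one candidate sub-image (feature types omitted here).
cv::Mat extractFeatures(const cv::Mat& patch);

std::vector<cv::Rect> detectVehicles(const cv::Mat& frame)
{
    // Stage 1: Haar cascade for multi-scale candidate selection (hypothetical model file).
    static cv::CascadeClassifier cascade("vehicle_cascade.xml");
    // Stage 2: GentleAdaBoost decision classifier (hypothetical model file).
    static cv::Ptr<cv::ml::Boost> boost =
        cv::Algorithm::load<cv::ml::Boost>("gentle_boost.xml");

    std::vector<cv::Rect> candidates;
    cascade.detectMultiScale(frame, candidates, 1.1, 3);   // handles scale changes cheaply

    // Verify each candidate with the boosted classifier; keep positives only.
    std::vector<cv::Rect> vehicles;
    for (const cv::Rect& r : candidates) {
        cv::Mat features = extractFeatures(frame(r));
        if (boost->predict(features) > 0)                   // positive class label
            vehicles.push_back(r);
    }
    return vehicles;
}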
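For item 3, the sketch below illustrates one plausible reading of the Gaussian-weighted tracking points: points are sampled inside the target box from a Gaussian centred on the box, so central points dominate the vote, and a forward-backward Lucas-Kanade step discards unreliable points. The sampling density, the fixed forward-backward error threshold and the omission of the rest of the TLD machinery are simplifying assumptions.

#include <opencv2/opencv.hpp>
#include <vector>
#include <random>

// Draw n tracking points inside 'box' from a 2-D Gaussian centred on the box,
// so that points near the centre are denser (i.e. carry more weight in voting).
std::vector<cv::Point2f> sampleGaussianPoints(const cv::Rect2f& box, int n)
{
    std::mt19937 rng(42);
    std::normal_distribution<float> nx(box.x + 0.5f * box.width,  box.width  / 4.0f);
    std::normal_distribution<float> ny(box.y + 0.5f * box.height, box.height / 4.0f);

    std::vector<cv::Point2f> pts;
    while (static_cast<int>(pts.size()) < n) {
        cv::Point2f p(nx(rng), ny(rng));
        if (box.contains(p)) pts.push_back(p);     // keep only points inside the box
    }
    return pts;
}

// One forward-backward Lucas-Kanade tracking step: points whose backward track
// does not return close to the start are discarded before the box is updated.
void forwardBackwardStep(const cv::Mat& prevGray, const cv::Mat& currGray,
                         std::vector<cv::Point2f>& pts)
{
    std::vector<cv::Point2f> fwd, back;
    std::vector<uchar> s1, s2;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, pts, fwd, s1, err);
    cv::calcOpticalFlowPyrLK(currGray, prevGray, fwd, back, s2, err);

    std::vector<cv::Point2f> kept;
    for (size_t i = 0; i < pts.size(); ++i) {
        float dx = pts[i].x - back[i].x, dy = pts[i].y - back[i].y;
        if (s1[i] && s2[i] && dx * dx + dy * dy < 1.0f)    // forward-backward error gate
            kept.push_back(fwd[i]);
    }
    pts.swap(kept);
}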
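Item 4 describes the Android port through the Java Native Interface and the NDK. The fragment below is a minimal sketch of such a bridge; the package, class, method and library names (com.example.adas, NativeBridge, processFrame, libadas.so) are hypothetical placeholders rather than the names used in the thesis.

#include <jni.h>
#include <opencv2/opencv.hpp>

// Native entry point called once per camera frame. The Java layer passes the
// address of an OpenCV Mat (Mat.getNativeObjAddr()) so no pixel copy is needed.
extern "C" JNIEXPORT jint JNICALL
Java_com_example_adas_NativeBridge_processFrame(JNIEnv* /*env*/, jclass /*clazz*/,
                                                jlong matAddr)
{
    cv::Mat& frame = *reinterpret_cast<cv::Mat*>(matAddr);
    // ... run the C++ lane detection / vehicle detection / tracking pipeline on 'frame' ...
    return frame.empty() ? -1 : 0;
}

/* Java side (for reference):
 *   public final class NativeBridge {
 *       static { System.loadLibrary("adas"); }        // loads libadas.so built by the NDK
 *       public static native int processFrame(long matAddr);
 *   }
 */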
Abstract (English): In recent years, with the rapid increase in the number of vehicles in traffic, more and more accidents caused by car collisions or lane departures happen every day, resulting in fast-mounting casualties and posing a great threat to society. Hence, more and more attention is being paid to vehicle active safety techniques by research institutes and carmakers. Vehicle detection, vehicle collision warning, lane detection and lane departure warning based on computer vision are among the most important research fields of vehicle active safety systems. The vision sensor is widely used in vehicle active safety systems due to its low price, the portability of the algorithms and the richness of the collected information. This thesis focuses on vehicle active safety techniques based on monocular vision. The system processes the on-road image sequences captured by a camera mounted on a mobile platform, and is able to detect the target vehicle and measure the distance between vehicles, along with lane detection. The work and contributions of the thesis are as follows.
1. A lane detection algorithm based on the current frame and the previous frame. It first extracts the linear edges of the current frame, with a preprocessing step that thickens the edges. It then defines a ROI based on the lane position in the previous frame and performs a Hough transform in the ROI. After obtaining candidates by selecting the lines that receive the most votes in the mapping from image space to parameter space, it computes a posterior probability based on the lane position in the previous frame and finally recognizes the lane of the current frame.
2. A target vehicle detection algorithm based on GentleAdaBoost is studied and implemented. It extracts a number of features from the sub-images to construct a feature vector for each sub-region, and passes the vector to a Haar cascade detector and a GentleAdaBoost classifier to obtain the result. The Haar cascade detector selects vehicle candidates in the search area and handles the issues caused by changes in scale; the GentleAdaBoost classifier further classifies the candidates selected by the Haar detector and identifies the frontal target vehicle with the help of the lane information.
3. An improved method of placing feature points in the object tracking algorithm. Whereas the uniform placing strategy in TLD suffers from a serious accumulative-error problem, the proposed method assigns Gaussian weights to the tracking points: points near the centre of the target box carry larger weights and are more likely to be kept during resampling, which makes the tracking result more accurate and robust.
4. The algorithm is ported to Android and an Android version is released: the C++ implementation based on OpenCV is compiled with the NDK into a shared library, which the Java layer calls through the Java Native Interface to analyse each frame.
Keywords: Mobile Platform; Vehicle Detection; Lane Detection; Object Tracking; GentleAdaBoost Classifier
Language: Chinese
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/7782
Collection: Graduates, Master's Theses
Recommended Citation (GB/T 7714):
孙书娟. 面向汽车主动安全的视觉关键技术研究与实现[D]. 中国科学院自动化研究所. 中国科学院大学, 2015.
Files in This Item:
File name/size: CASIA_20122801462805 (1462 KB); Access: restricted (not yet open); License: CC BY-NC-SA