Title: 基于视频的头部区域分割与跟踪相关问题研究 (Research on Head Region Segmentation and Tracking in Videos)
Alternative Title: Study of head segmentation and tracking in videos
Author: 张记霞
Date: 2013-05-27
Degree Type: Doctor of Engineering
Abstract (Chinese): In computer vision and digital media, head segmentation and head pose tracking are of significant research interest and practical value. Owing to the complexity of real-world scenes, both the background and the head region exhibit a wide variety of visual appearances, and designing robust, stable and scene-adaptive head segmentation and tracking algorithms has long been an important topic in visual computing. Starting from the goal of improving the discriminative and generalization abilities of such algorithms, this thesis systematically studies several problems in head tracking and segmentation. The main contributions are:

1. A discriminative color prior model is proposed and applied to 3D head tracking based on feature point matching. The method describes feature points by their color statistics and classifies them with random projection trees so as to identify outlier points in the background. Removing these outliers reduces erroneous correspondences during feature point matching and, in turn, the number of iterations and the computation time required for head pose estimation. Experimental results show that the model has strong discriminative power and that incorporating it improves the accuracy and robustness of the head tracking algorithm.

2. A skin color detection algorithm based on linear regression trees is proposed. Its core idea is to partition the color space with a tree classifier and to define a skin/non-skin decision function within each resulting subspace, so that skin detection is decomposed into several simpler subproblems and the multi-modal distribution of skin color in complex scenes can be handled (a simplified code sketch of this idea follows the abstract). Linear regression trees are fast to train and to test and do not require a large training set. Experiments on public skin color databases show that the proposed algorithm has strong generalization and discriminative abilities, and skin detection experiments on video data further verify its robustness to scene changes.

3. An automatic head segmentation method based on semi-supervised classification is proposed. Within a graph-regularized semi-supervised classification framework built on local spline regression, the supervised information is acquired automatically and the head is segmented without user interaction. In addition, based on skin color consistency within the face region, depth continuity and inter-frame region continuity, the method automatically extracts an initial head region and uses it to assess the reliability of the supervised information. Experimental results show that the method is robust to head pose variations, scale changes and illumination changes, and yields segmentation results comparable to those of interactive methods.

4. An integrated head segmentation and tracking method is proposed. Its central idea is to couple the two tasks in a single framework through the constraint that feature points matched across frames must have consistent segmentation labels. On the one hand, the segmentation labels of the matched feature points in the previous frame serve as annotations for the current frame, and a graph-Laplacian-regularized matting method enforces inter-frame segmentation continuity. On the other hand, the matched feature points are evaluated and weighted according to the head segmentation and skin color detection results, and the head pose is estimated from the weighted correspondences. Experimental results show that the method obtains accurate head regions and correctly estimates the head pose.
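Contribution 2 above describes a partition-then-classify scheme: a tree splits the color space and a separate decision function is learned in each sub-region. The snippet below is a minimal illustrative sketch of that scheme only, not the thesis's linear regression tree: it assumes labeled RGB pixel samples and substitutes a shallow scikit-learn decision tree for the partition and a per-leaf logistic model for the decision function; all names and parameters are placeholders.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

class PartitionedSkinDetector:
    """Partition RGB space with a shallow tree, then fit one linear
    skin/non-skin decision function per leaf (illustrative only)."""

    def __init__(self, max_leaf_nodes=8):
        self.tree = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
        self.leaf_models = {}

    def fit(self, colors, is_skin):
        # colors: (N, 3) RGB samples in [0, 1]; is_skin: (N,) 0/1 labels.
        colors = np.asarray(colors, dtype=float)
        is_skin = np.asarray(is_skin)
        self.tree.fit(colors, is_skin)        # carve the color space into leaves
        leaves = self.tree.apply(colors)      # leaf index of each training sample
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            if len(np.unique(is_skin[idx])) < 2:
                # Pure leaf: just remember the constant label.
                self.leaf_models[leaf] = int(is_skin[idx][0])
            else:
                self.leaf_models[leaf] = LogisticRegression(max_iter=200).fit(
                    colors[idx], is_skin[idx])
        return self

    def predict(self, colors):
        colors = np.asarray(colors, dtype=float)
        leaves = self.tree.apply(colors)
        out = np.zeros(len(colors), dtype=int)
        for leaf in np.unique(leaves):
            idx = leaves == leaf
            model = self.leaf_models[leaf]
            out[idx] = model if isinstance(model, int) else model.predict(colors[idx])
        return out

At test time such a detector would be applied per pixel, e.g. mask = detector.predict(frame.reshape(-1, 3)).reshape(h, w); keeping the per-leaf models linear is what keeps training and per-pixel evaluation cheap, in line with the speed claims in the abstract.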
Abstract (English): Head segmentation and tracking play an important role in computer vision and digital media. Due to the complexity of real scenes, the face region and the background exhibit a high diversity of visual appearances. It has therefore been an important task in computer vision to design robust, stable and adaptive head segmentation and tracking algorithms. From the perspective of improving generalization ability and discrimination ability, this thesis focuses on the relevant problems in head segmentation and tracking. The main contributions of the thesis are listed as follows:

1. We propose a discriminative color prior model and apply it to feature-matching-based 3D head tracking. Based on the observation that the face region usually differs in color from the background, the model classifies feature points into two kinds and identifies outlier points in the background. By rejecting the outliers, wrong correspondences in feature point matching are reduced, and hence the number of iterations and the computation time of pose estimation decrease. Experimental results show that the model has sufficient discriminative ability and makes the head tracking algorithm more accurate and robust.

2. We propose a skin detection algorithm based on a linear regression tree. Its central idea is to use the tree classifier to partition the color space into several sub-regions and to define a decision function between skin and non-skin in each sub-region, which suits the multi-modal nature of skin colors. The tree is fast to train, fast to test, and does not call for a large training set. Experimental results on public skin databases show that it has both good generalization ability and discriminative ability, while tests on real videos further confirm its robustness to scene changes.

3. An automatic head segmentation method is proposed based on semi-supervised classification. It is modeled in a semi-supervised classification framework with local-spline-regression-based graph regularization. Because the supervised information is supplied automatically, the method works without human interaction. To ensure the correctness of the supervised information, an initial head region is estimated and used to constrain it. Experimental results show that the method obtains results comparable to interactive methods and is robust to head pose variations, scale changes and different lighting conditions.

4. We propose an integrated head segmentation and tracking method. It combines head segmentation and tracking in a single framework under the constraint that feature points matched across frames must have consistent segmentation labels. On the one hand, the segmentation labels of matched feature points in the previous frame serve as annotations for the current frame, and a graph-Laplacian-regularized matting method enforces inter-frame segmentation continuity (see the sketch after this abstract). On the other hand, matched feature points are evaluated and weighted according to the head segmentation and skin detection results, and the head pose is estimated from the weighted correspondences. Experimental results show that the method obtains accurate head regions and correctly estimates the head pose.
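Contribution 4 propagates the previous frame's segmentation labels to the current frame through a graph-Laplacian regularizer. The sketch below illustrates that label-propagation step only, under simplifying assumptions: a 4-connected pixel graph with Gaussian color affinities stands in for the matting Laplacian used in the thesis, the seed labels are assumed to come from matched feature points that carry the previous frame's segmentation, and lam and sigma are placeholder parameters.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def propagate_labels(image, seeds, lam=100.0, sigma=0.1):
    # image: (H, W, 3) float array in [0, 1].
    # seeds: (H, W) array with +1 (head), -1 (background), 0 (unlabeled),
    #        e.g. filled in at the positions of matched feature points.
    H, W, _ = image.shape
    n = H * W
    flat = image.reshape(n, 3)
    idx = np.arange(n).reshape(H, W)

    rows, cols, vals = [], [], []
    def add_edges(a, b):
        # Gaussian color affinity between neighbouring pixels a and b.
        w = np.exp(-np.sum((flat[a] - flat[b]) ** 2, axis=1) / (2 * sigma ** 2))
        rows.extend(a); cols.extend(b); vals.extend(w)
        rows.extend(b); cols.extend(a); vals.extend(w)

    add_edges(idx[:, :-1].ravel(), idx[:, 1:].ravel())   # horizontal neighbours
    add_edges(idx[:-1, :].ravel(), idx[1:, :].ravel())   # vertical neighbours

    Wm = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsc()
    L = sp.diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm   # graph Laplacian

    y = seeds.reshape(n).astype(float)
    C = sp.diags((y != 0).astype(float))    # confidence only on labeled pixels
    # Closed-form minimizer of  f' L f + lam * (f - y)' C (f - y):
    f = spsolve((L + lam * C).tocsc(), lam * (C @ y))
    return f.reshape(H, W) > 0              # hard head mask; f itself is a soft matte

Thresholding f gives a hard mask, while its soft values can serve as a matte; in the thesis framework the propagated segmentation would in turn be combined with skin detection to weight the matched feature points before pose estimation.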
Keywords: Head Tracking; Head Segmentation; Skin Detection; Keypoint Matching; Pose Estimation
Language: Chinese
Document Type: Degree Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/6520
Collection: 毕业生_博士学位论文 (Graduates: Doctoral Dissertations)
Recommended Citation (GB/T 7714):
张记霞. 基于视频的头部区域分割与跟踪相关问题研究[D]. 中国科学院自动化研究所, 中国科学院大学, 2013.
Files in This Item:
File Name/Size: CASIA_20091801462806 (11408 KB) | Open Access Type: Restricted | License: CC BY-NC-SA