Face Recognition Based on Dynamic Image Sequences (基于动态序列图像的人脸识别)
Alternative Title: Video-Based Face Recognition
Author: 刘亮 (Liu Liang)
Subtype: Master of Engineering (工学硕士)
Thesis Advisor: 王蕴红 (Wang Yunhong); 谭铁牛 (Tan Tieniu)
2007-06-04
Degree Grantor: Graduate University of Chinese Academy of Sciences (中国科学院研究生院)
Place of Conferral: Institute of Automation, Chinese Academy of Sciences (中国科学院自动化研究所)
Degree Discipline: Pattern Recognition and Intelligent Systems
Keywords: Face Recognition; Principal Component Analysis; Eigenspace Merging; Online Learning
Abstract: Face recognition based on dynamic image sequences (video-based face recognition) is a research direction that has emerged only in recent years. This thesis mainly discusses face recognition from video to video, i.e., both the training data and the testing data are consecutive face image sequences. The main contributions are as follows.

1. For online learning of face appearance models, two novel online learning methods are proposed: K-Eigenspace Learning and T-Eigenspace Learning. In K-Eigenspace Learning, the number of eigenspaces corresponding to each person is fixed; during online learning, the eigenspaces are dynamically adjusted through incremental PCA, eigenspace merging, and eigenspace splitting, and the transition counts between different eigenspaces are used to exploit the temporal information in a sequence. In T-Eigenspace Learning, the number of eigenspaces per person is variable, and the eigenspaces are adjusted by incremental PCA; during online learning, the number of samples contained in each eigenspace is guaranteed to increase monotonically, except for the eigenspace with the fewest samples. Both methods learn several eigenspaces from each person's face image sequence in the training set to approximately represent the face appearance manifold, and neither requires a model trained offline in advance, so learning is performed entirely online. Experiments show that both methods achieve good recognition performance.

2. A fast PCA (Principal Component Analysis) algorithm is proposed. When applying principal component analysis to large-scale high-dimensional data, the fast PCA algorithm has significant advantages over traditional PCA in both time complexity and space complexity. A large data set is first divided into several small data sets; traditional PCA is then applied to each small data set, producing a series of eigenspaces, one per small data set. Finally, these eigenspaces are merged pairwise following a minimum-height binary tree, and the resulting eigenspace contains the PCA result of the original data set. An upper bound on the error this fast PCA algorithm may introduce is analyzed in detail.

The algorithm is applied to key-frame selection for video-based face recognition. The eigenspace of the training sample set is first computed with the fast PCA algorithm; the distance from each image in the test sequence to this eigenspace is then computed to decide whether the image is a key frame. Non-key frames are discarded before recognition to improve the recognition accuracy over the whole sequence. Experiments show that this method achieves good recognition performance while greatly reducing the time required to compute the face eigenspace compared with traditional PCA.
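The eigenspace merging step named in the abstract can be sketched as follows. This is a minimal illustration that merges two truncated eigenspace models (mean, eigenvectors, eigenvalues, sample count) via explicit scatter matrices, so it only makes sense for low-dimensional toy data; the thesis's memory-efficient formulation is not reproduced here, and all function names are our own.

```python
import numpy as np

def eigenspace_model(X, k):
    """Build a truncated eigenspace model: (mean, top-k eigenvectors, eigenvalues, count)."""
    n = X.shape[0]
    mu = X.mean(axis=0)
    S = (X - mu).T @ (X - mu)          # scatter matrix of the centered samples
    w, V = np.linalg.eigh(S)           # ascending eigenvalues
    order = np.argsort(w)[::-1][:k]    # keep the k largest
    return mu, V[:, order], w[order], n

def merge_eigenspaces(m1, m2):
    """Merge two eigenspace models without revisiting the raw samples."""
    mu1, V1, w1, n1 = m1
    mu2, V2, w2, n2 = m2
    n = n1 + n2
    mu = (n1 * mu1 + n2 * mu2) / n
    d = (mu1 - mu2)[:, None]
    # combined scatter = S1 + S2 + a rank-1 correction for the mean shift
    S = V1 @ np.diag(w1) @ V1.T + V2 @ np.diag(w2) @ V2.T + (n1 * n2 / n) * (d @ d.T)
    w, V = np.linalg.eigh(S)
    order = np.argsort(w)[::-1][:max(len(w1), len(w2))]
    return mu, V[:, order], w[order], n
```

When no components are truncated, merging the models of two halves of a data set reproduces the PCA of the whole set exactly; truncation before merging is what introduces the bounded error the abstract refers to.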
Other Abstract: Video-based face recognition has attracted increasing interest in recent years. In this thesis, we mainly discuss face recognition from video to video; in other words, both the training data set and the testing data set consist of consecutive face image sequences. The contributions of this thesis mainly include the following two aspects.

1. We work on online learning of face appearance models and propose two novel online learning methods, namely K-Eigenspace Learning and T-Eigenspace Learning. In K-Eigenspace Learning, the number of eigenspace models corresponding to each person is fixed. The eigenspace models are dynamically adjusted using IPCA (Incremental PCA), eigenspace merging, and eigenspace splitting, and the temporal information embedded in a face image sequence is exploited by maintaining a transition matrix. In T-Eigenspace Learning, the number of eigenspace models per person is variable, and the models are dynamically adjusted using IPCA; in the process of online learning, the number of samples in each eigenspace increases monotonically, except for the eigenspace containing the fewest samples. Both methods learn appearance models completely online: each learns a few eigenspace models that approximately construct the face appearance manifold for each person in the training data set. Experimental results show that both methods achieve good recognition performance.

2. We propose a fast computing algorithm for PCA (Principal Component Analysis) on large-scale high-dimensional data sets. A large data set is first divided into several small data sets. Traditional PCA is then applied to each small data set, yielding several eigenspace models. Finally, these eigenspace models are merged pairwise into one eigenspace model that contains the PCA result of the original data set. We also analyze the behavior of our approach with respect to the error introduced in the computing process.

We apply this algorithm to key-frame selection for video-based face recognition. The recognition performance is rather good, while the computing time is much less than that of the traditional PCA method.
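The key-frame selection criterion described above, i.e. the distance from a probe frame to the training eigenspace, can be sketched as below. The function names and the fixed threshold are illustrative assumptions, not taken from the thesis; the basis is assumed to have orthonormal columns, as produced by PCA.

```python
import numpy as np

def residual_distance(x, mean, basis):
    """Distance from sample x to the eigenspace: norm of the reconstruction residual."""
    c = x - mean
    # project onto the span of the orthonormal basis columns, then measure what is left
    return np.linalg.norm(c - basis @ (basis.T @ c))

def select_key_frames(frames, mean, basis, threshold):
    """Keep only frames close enough to the training eigenspace; discard the rest."""
    return [f for f in frames if residual_distance(f, mean, basis) <= threshold]
```

A frame lying near the training eigenspace yields a small residual and is kept as a key frame; frames far from it (e.g. badly posed or corrupted images) are dropped before recognition.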
Shelf Number: XWLW1145
Other Identifier: 200428014628042
Language: Chinese
Document Type: Thesis (学位论文)
Identifier: http://ir.ia.ac.cn/handle/173211/7420
Collection: Graduates / Master's Theses (毕业生_硕士学位论文)
Recommended Citation (GB/T 7714):
刘亮. 基于动态序列图像的人脸识别[D]. 中国科学院自动化研究所. 中国科学院研究生院, 2007.
Files in This Item:
CASIA_20042801462804 (3801 KB) · Full Text · Access: restricted (暂不开放) · License: CC BY-NC-SA
 
