Institutional Repository of Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
D2C: Deep cumulatively and comparatively learning for human age estimation
Author | Li, Kai1 ; Xing, Junliang ; Hu, Weiming ; Maybank, Stephen J.
Source Publication | PATTERN RECOGNITION
Date Issued | 2017-06-01
Volume | 66 | Issue | 0 | Pages | 95-105
Subtype | Article |
Abstract | Age estimation from face images is an important yet difficult task in computer vision. Its main difficulty lies in how to design aging features that remain discriminative in spite of large facial appearance variations. Meanwhile, because it is hard to collect and label datasets that contain sufficient samples for all possible ages, the age distributions of most benchmark datasets are imbalanced, which makes the problem even more challenging. In this work, we address these difficulties by means of mainstream deep learning techniques. Specifically, we use a convolutional neural network that learns discriminative aging features from raw face images without any hand-crafting. To combat the sample imbalance problem, we propose a novel cumulative hidden layer which is supervised by a point-wise cumulative signal. With this cumulative hidden layer, our model is learnt indirectly using faces with neighbouring ages, which alleviates the sample imbalance problem. To learn more effective aging features, we further propose a comparative ranking layer which is supervised by a pair-wise comparative signal. This comparative ranking layer facilitates aging feature learning and improves the performance of the main age estimation task. In addition, since one face can be included in many different training pairs, we can make full use of the limited training data. Both of these novel layers are differentiable, so our model is end-to-end trainable. Extensive experiments on two of the largest benchmark datasets show that our deep age estimation model gains a notable accuracy advantage over existing methods.
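The abstract describes two supervision signals: a point-wise cumulative signal (the target for age y is a vector whose k-th entry indicates y >= k, so neighbouring ages share most of their targets) and a pair-wise comparative signal (which face of a training pair is older). A minimal sketch of how such targets could be constructed, assuming integer ages and an illustrative maximum age of 100 (the function names and the cap are assumptions, not the authors' code):

```python
def cumulative_signal(age, max_age=100):
    """Point-wise cumulative target: entry k is 1 iff age >= k.
    A 30-year-old face shares most of this vector with faces aged
    29 or 31, so training signal is shared across neighbouring
    (possibly under-sampled) ages."""
    return [1 if age >= k else 0 for k in range(1, max_age + 1)]

def comparative_signal(age_a, age_b):
    """Pair-wise comparative target: which face of the pair is older.
    Each face can appear in many pairs, reusing limited training data."""
    if age_a > age_b:
        return 1
    if age_a < age_b:
        return -1
    return 0
```

In the paper both layers producing these targets are differentiable, so the whole network is trained end-to-end; the sketch above only illustrates the label encodings, not the layers themselves.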
Keyword | Age Estimation ; Deep Learning ; Convolutional Neural Network
WOS Headings | Science & Technology ; Technology |
DOI | 10.1016/j.patcog.2017.01.007 |
WOS Keyword | FACE IMAGES ; DIMENSIONALITY ; REGRESSION |
Indexed By | SCI |
Language | English
Funding Organization | 973 Basic Research Program of China(2014CB349303) ; Natural Science Foundation of China(61472421, 61672519, U1636218, 61303178) ; CAS(XDB02070003)
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic |
WOS ID | WOS:000397371800011 |
Document Type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/15080 |
Collection | National Laboratory of Pattern Recognition_Video Content Security
Affiliation | 1.National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, PR China 2.CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Beijing 100190, PR China 3.Department of Computer Science and Information Systems, Birkbeck College, London WC1E 7HX, United Kingdom |
First Author Affilication | Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China |
Recommended Citation GB/T 7714 | Li, Kai,Xing, Junliang,Hu, Weiming,et al. D2C: Deep cumulatively and comparatively learning for human age estimation[J]. PATTERN RECOGNITION,2017,66(0):95-105. |
APA | Li, Kai,Xing, Junliang,Hu, Weiming,&Maybank, Stephen J..(2017).D2C: Deep cumulatively and comparatively learning for human age estimation.PATTERN RECOGNITION,66(0),95-105. |
MLA | Li, Kai,et al."D2C: Deep cumulatively and comparatively learning for human age estimation".PATTERN RECOGNITION 66.0(2017):95-105. |
Files in This Item:
File Name/Size | DocType | Version | Access | License | ||
PR17D2C.pdf(1176KB) | Journal article | Author's accepted manuscript | Open Access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.