Matching NIR Face to VIS Face Using Transduction
Zhu, Jun-Yong1,2; Zheng, Wei-Shi3,4; Lai, Jian-Huang2,3,4; Li, Stan Z.5,6
Source Publication: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
Publication Date: 2014-03-01
Volume: 9; Issue: 3; Pages: 501-514
Subtype: Article
Abstract: Visual versus near-infrared (VIS-NIR) face image matching uses an NIR face image as the probe and conventional VIS face images for enrollment. It takes advantage of NIR face technology in tackling illumination changes and low-light conditions, and it suits applications where enrollment is done with VIS face images, such as ID card photos. Existing VIS-NIR techniques assume that, during classifier learning, the VIS images of each target person have NIR counterparts. However, corresponding VIS-NIR image pairs of the same person are not always available; in that common case, those methods cannot be applied. To address this problem, we propose a transductive method, named transductive heterogeneous face matching (THFM), that adapts the VIS-NIR matching learned from training with available image pairs to all people in the target set. In addition, we propose a simple feature representation for effective VIS-NIR matching, computed in three steps, namely Log-DoG filtering, local encoding, and uniform feature normalization, to reduce the heterogeneity between VIS and NIR images. The transductive approach reduces the domain difference due to heterogeneous data while simultaneously learning a discriminative model for the target people. To the best of our knowledge, this is the first attempt to formulate VIS-NIR matching as transduction to address the generalization problem of matching. Experimental results validate the effectiveness of the proposed method on heterogeneous face biometric databases.
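The three-step feature representation named in the abstract (Log-DoG filtering, local encoding, uniform feature normalization) can be sketched as below. This is a minimal illustration under assumptions, not the paper's implementation: the Gaussian filter scales, the use of an 8-neighbour LBP-style code for the "local encoding" step, and the histogram-plus-unit-norm normalization are all assumed for the sake of the example.

```python
# Hedged sketch of a three-step VIS/NIR feature pipeline: Log-DoG filtering,
# a simple local binary encoding, and uniform (unit-norm) normalization.
# Filter scales and the encoding scheme are illustrative assumptions, not the
# paper's exact parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def log_dog(image, sigma_inner=1.0, sigma_outer=2.0):
    """Step 1: log transform, then a difference-of-Gaussians band-pass."""
    log_img = np.log1p(image.astype(np.float64))
    return gaussian_filter(log_img, sigma_inner) - gaussian_filter(log_img, sigma_outer)

def local_binary_encode(filtered):
    """Step 2: 8-neighbour local-binary-pattern codes (a common local encoding)."""
    f = filtered
    codes = np.zeros(f[1:-1, 1:-1].shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neighbor = f[1 + dy:f.shape[0] - 1 + dy, 1 + dx:f.shape[1] - 1 + dx]
        codes |= (neighbor >= f[1:-1, 1:-1]).astype(np.uint8) << bit
    return codes

def uniform_normalize(codes, bins=256):
    """Step 3: histogram of codes, scaled to a unit-norm feature vector."""
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    hist = hist.astype(np.float64)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Because both VIS and NIR images pass through the same band-pass and encoding, gross intensity differences between the two modalities are suppressed before matching, which is the intuition the abstract gives for this step.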
Keywords: Heterogeneous Face Recognition; VIS-NIR Face Matching; Transductive Learning
WOS Headings: Science & Technology; Technology
WOS Keywords: RECOGNITION; IMAGES; CLASSIFICATION; EXTRACTION
Indexed By: SCI
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000332459700015
Citation Statistics
Cited Times (WOS): 33
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/8037
Collection: National Laboratory of Pattern Recognition, Center for Biometrics and Security Research
Affiliation:
1.Sun Yat Sen Univ, Sch Math & Computat Sci, Guangzhou 510275, Guangdong, Peoples R China
2.SYSU CMU Shunde Int Joint Res Inst, Shunde, Peoples R China
3.Sun Yat Sen Univ, Sch Informat Sci & Technol, Guangzhou 510275, Guangdong, Peoples R China
4.Guangdong Prov Key Lab Computat Sci, Guangzhou 510275, Guangdong, Peoples R China
5.Chinese Acad Sci, Inst Automat, Ctr Biometr & Secur Res, Beijing 100080, Peoples R China
6.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100080, Peoples R China
Recommended Citation
GB/T 7714: Zhu, Jun-Yong, Zheng, Wei-Shi, Lai, Jian-Huang, et al. Matching NIR Face to VIS Face Using Transduction[J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2014, 9(3): 501-514.
APA: Zhu, Jun-Yong, Zheng, Wei-Shi, Lai, Jian-Huang, & Li, Stan Z. (2014). Matching NIR Face to VIS Face Using Transduction. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 9(3), 501-514.
MLA: Zhu, Jun-Yong, et al. "Matching NIR Face to VIS Face Using Transduction". IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 9.3 (2014): 501-514.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.