Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection using light field focal stack
Luo, Zhengquan1; Wang, Yunlong2; Liu, Nianfeng2; Wang, Zilei1
Journal: IET BIOMETRICS
ISSN: 2047-4938
Publication date: 2022-08-27
Pages: 10
Abstract

Iris presentation attack detection (PAD) is still an unsolved problem, mainly due to the variety of spoof attack strategies and poor generalisation to unseen attacks. In this paper, the merits of both light field (LF) imaging and deep learning (DL) are leveraged to combine 2D texture and 3D geometry features for iris liveness detection. By exploring off-the-shelf deep features of planar-oriented and sequence-oriented deep neural networks (DNNs) on the rendered focal stack, the proposed framework excavates the differences in 3D geometric structure and 2D spatial texture between bona fide and spoofing irises captured by LF cameras. A group of pre-trained DL models is adopted as feature extractors, and the parameters of the SVM classifiers are optimised on a limited number of samples. Moreover, two-branch feature fusion further strengthens the framework's robustness and reliability against severe motion blur, noise, and other degradation factors. The results of comparative experiments indicate that variants of the proposed framework significantly surpass PAD methods that take 2D planar images or the LF focal stack as input, even recent state-of-the-art (SOTA) methods fine-tuned on the adopted database. Presentation attacks, including printed papers, printed photos, and electronic displays, can be accurately detected without fine-tuning a bulky CNN. In addition, ablation studies validate the effectiveness of fusing geometric structure and spatial texture features. The results of multi-class attack detection experiments also verify the good generalisation ability of the proposed framework to unseen presentation attacks.
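The pipeline described above — off-the-shelf deep features feeding one SVM per branch, with two-branch fusion — can be sketched roughly as follows. Everything here is illustrative: the features are synthetic Gaussian stand-ins for the DNN outputs, and the fusion rule (averaging the two branches' class probabilities) is an assumption, not necessarily the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch of the two-branch PAD pipeline. Branch A stands in for
# 2D spatial-texture features, branch B for 3D geometric-structure
# features extracted from the light-field focal stack. Real deep
# features are replaced by class-shifted Gaussian vectors.
rng = np.random.default_rng(0)
dim = 128  # illustrative feature dimension

def make_branch(y, sep):
    """One feature branch; class means are shifted by +/- sep."""
    return rng.normal(0.0, 1.0, (len(y), dim)) + sep * (2 * y - 1)[:, None]

# Labels: 1 = bona fide iris, 0 = presentation attack
y_tr = np.array([1] * 100 + [0] * 100)
y_te = np.array([1] * 30 + [0] * 30)
Xa_tr, Xb_tr = make_branch(y_tr, 0.3), make_branch(y_tr, 0.3)
Xa_te, Xb_te = make_branch(y_te, 0.3), make_branch(y_te, 0.3)

# One SVM classifier per feature branch, fitted on a limited sample set
# (probability outputs enable score-level fusion).
svm_a = SVC(probability=True, random_state=0).fit(Xa_tr, y_tr)
svm_b = SVC(probability=True, random_state=0).fit(Xb_tr, y_tr)

# Score-level fusion: average the bona-fide probabilities of both branches
p_fused = (svm_a.predict_proba(Xa_te)[:, 1]
           + svm_b.predict_proba(Xb_te)[:, 1]) / 2
acc_fused = float(np.mean((p_fused > 0.5) == y_te))
print(f"fused accuracy: {acc_fused:.2f}")
```

Averaging probabilities is only one of several plausible fusion schemes; feature-level concatenation before a single SVM would be an equally simple variant.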

DOI: 10.1049/bme2.12092
Indexed by: SCI
Language: English
Funding projects: CAAI Huawei MindSpore Open Fund [CAAIXSJLJJ-2021-053A]; National Natural Science Foundation of China [61906199]; National Natural Science Foundation of China [62006225]; National Natural Science Foundation of China [62176025]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA27040700]
Funders: CAAI Huawei MindSpore Open Fund; National Natural Science Foundation of China; Strategic Priority Research Program of Chinese Academy of Sciences
WOS research area: Computer Science
WOS category: Computer Science, Artificial Intelligence
WOS accession number: WOS:000846483600001
Publisher: WILEY
Sub-direction classification (seven major directions): Biometric Recognition
State Key Laboratory planned research direction: Visual Information Processing
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/50063
Collection: Pattern Recognition Laboratory
Corresponding author: Wang, Yunlong
Author affiliations:
1. Univ Sci & Technol China, Dept Automat, Hefei, Anhui, Peoples R China
2. Chinese Acad Sci CASIA, Ctr Res Intelligent Percept & Comp CRIPAC, Natl Lab Pattern Recognit NLPR, Inst Automat, 95 Zhongguancun East St, Beijing 100190, Peoples R China
Corresponding author affiliation: State Key Laboratory of Pattern Recognition
Recommended citation:
GB/T 7714: Luo, Zhengquan, Wang, Yunlong, Liu, Nianfeng, et al. Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection using light field focal stack[J]. IET BIOMETRICS, 2022: 10.
APA: Luo, Zhengquan, Wang, Yunlong, Liu, Nianfeng, & Wang, Zilei. (2022). Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection using light field focal stack. IET BIOMETRICS, 10.
MLA: Luo, Zhengquan, et al. "Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection using light field focal stack". IET BIOMETRICS (2022): 10.
Files in this item
File name/size: 2022-【IET Biometrics (1236KB) | Document type: Journal article | Version: Author's accepted manuscript | Access: Open Access | License: CC BY-NC-SA
File name: 2022-【IET Biometrics】- Zhengquan - Combining 2D texture and 3D geometry features for Reliable iris presentation attack detection.pdf
Format: Adobe PDF
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.