Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
Yang, Songlin (1,2); Wang, Wei (2); Xu, Chenye (3); He, Ziwen (1,2); Peng, Bo (2); Dong, Jing (2)
2023-06
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop
Conference Date: 2023-06-19
Conference Location: Vancouver, Canada
Abstract

Face anti-spoofing aims to discriminate spoofing face images (e.g., printed photos and replayed videos) from live ones. However, adversarial examples greatly challenge its credibility, since adding small perturbation noise can easily change the output of the target model. Previous works applied adversarial attack methods to evaluate face anti-spoofing performance without any fine-grained analysis of which model architecture or auxiliary feature is vulnerable. To address this problem, we propose a novel framework to expose the fine-grained adversarial vulnerability of face anti-spoofing models, which consists of a multitask module and a semantic feature augmentation (SFA) module. The multitask module can obtain different semantic features for further fine-grained evaluation, but attacking only these semantic features fails to reflect the vulnerability related to the discrimination between spoofing and live images. We therefore design the SFA module to introduce a data distribution prior that provides more discrimination-related gradient directions for generating adversarial examples. The discrimination-related improvement is quantitatively reflected by the increase in attack success rate; comprehensive experiments show that the SFA module increases the attack success rate by nearly 40% on average. We conduct fine-grained adversarial analysis on different annotations, geometric maps, and backbone networks (e.g., ResNet). These fine-grained adversarial examples can be used for selecting robust backbone networks and auxiliary features. They can also be used for adversarial training, which makes it practical to further improve the accuracy and robustness of face anti-spoofing models.
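For illustration, the following is a minimal PyTorch-style sketch of how such a fine-grained attack could be wired up: a PGD loop whose loss combines the live/spoof discrimination objective with an auxiliary semantic objective, plus an optional hook for a distribution-prior term in the spirit of the SFA module. The interface (a model returning spoof and semantic logits, the sfa_prior callable, and the weight lam) is an assumption made for this sketch, not the paper's released implementation.

```python
# Minimal, hypothetical sketch (not the paper's code): a PGD-style attack whose
# loss mixes the live/spoof discrimination objective with an auxiliary semantic
# objective, with an optional distribution-prior hook in the spirit of SFA.
import torch
import torch.nn.functional as F

def fine_grained_pgd(model, x, y_spoof, y_semantic, sfa_prior=None,
                     eps=8 / 255, alpha=2 / 255, steps=10, lam=1.0):
    """Craft adversarial examples against a face anti-spoofing model.

    model      -- assumed to return (spoof_logits, semantic_logits) for a batch
    y_spoof    -- live/spoof labels (the discrimination target)
    y_semantic -- labels of the attacked auxiliary semantic feature
    sfa_prior  -- optional callable adding a data-distribution prior term
    """
    x = x.detach()
    # Random start inside the epsilon ball, clipped to the valid image range.
    x_adv = (x.clone() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        spoof_logits, semantic_logits = model(x_adv)

        # Discrimination-related loss: push the live/spoof decision to flip.
        loss = F.cross_entropy(spoof_logits, y_spoof)
        # Fine-grained loss on the chosen semantic feature.
        loss = loss + lam * F.cross_entropy(semantic_logits, y_semantic)
        # Optional prior steering gradients toward discrimination-related
        # directions (a stand-in for the SFA idea; its exact form is not shown).
        if sfa_prior is not None:
            loss = loss + sfa_prior(x_adv)

        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, projected back into the epsilon ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

Keeping the discrimination loss in the objective is what ties the measured attack success rate back to live/spoof vulnerability rather than to the auxiliary task alone.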

Sub-direction Classification (Seven Major Directions): Image and Video Processing and Analysis
State Key Laboratory Planned Research Direction: Visual Information Processing
Paper-Associated Dataset to Be Deposited:
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57549
Collection: Pattern Recognition Laboratory
Corresponding Author: Wang, Wei
Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Center for Research on Intelligent Perception and Computing, CASIA
3. SenseTime Research
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation
GB/T 7714
Yang, Songlin, Wang, Wei, Xu, Chenye, et al. Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models[C], 2023.
Files in This Item:
File Name/Size: CVPRW_2023_Face_Anti_Spoofing__camera_ready_.pdf (1766 KB)
Document Type: Conference Paper
Format: Adobe PDF
Access Type: Open Access
License: CC BY-NC-SA