Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
Yang, Songlin 1,2
2023-06 | |
Conference Name | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop
Conference Date | 2023-06-19
Conference Venue | Vancouver, Canada
Abstract | Face anti-spoofing aims to discriminate spoofing face images (e.g., printed photos and replayed videos) from live ones. However, adversarial examples greatly challenge its credibility, since adding small perturbation noise can easily change the output of the target model. Previous works applied adversarial attack methods to evaluate face anti-spoofing performance without any fine-grained analysis of which model architectures or auxiliary features are vulnerable. To address this problem, we propose a novel framework to expose the fine-grained adversarial vulnerability of face anti-spoofing models, which consists of a multitask module and a semantic feature augmentation (SFA) module. The multitask module can obtain different semantic features for further fine-grained evaluation, but attacking these semantic features alone fails to reflect the vulnerability related to the discrimination between spoofing and live images. We therefore design the SFA module to introduce a data distribution prior that yields more discrimination-related gradient directions for generating adversarial examples. The discrimination-related improvement is quantitatively reflected by an increase in attack success rate: comprehensive experiments show that the SFA module increases the attack success rate by nearly 40% on average. We conduct fine-grained adversarial analysis on different annotations, geometric maps, and backbone networks (e.g., ResNet). These fine-grained adversarial examples can be used to select robust backbone networks and auxiliary features. They can also be used for adversarial training, making it practical to further improve the accuracy and robustness of face anti-spoofing models.
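The gradient-based perturbation the abstract refers to follows the standard sign-gradient (FGSM-style) recipe. Below is a minimal, self-contained sketch of that recipe on a toy linear "live vs. spoof" scorer; it is purely illustrative, not the paper's multitask or SFA method, and all names (`W`, `B`, `fgsm_attack`, `eps`) are hypothetical.

```python
import math

# Toy "anti-spoofing" scorer: sigmoid(W . x + B) is the probability that the
# input feature vector x is a live face. Weights are made up for illustration.
W = [0.9, -0.4, 0.6]
B = -0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_live_prob(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm_attack(x, y_true, eps):
    # For binary cross-entropy on a linear model, the gradient of the loss
    # w.r.t. the input is (p - y) * W; stepping along its sign is the
    # classic FGSM perturbation x' = x + eps * sign(grad_x loss).
    p = predict_live_prob(x)
    grad = [(p - y_true) * wi for wi in W]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x_live = [1.0, 0.2, 0.8]               # toy feature vector labeled "live" (y = 1)
x_adv = fgsm_attack(x_live, 1.0, eps=1.5)

print(predict_live_prob(x_live) > 0.5)  # True: classified as live
print(predict_live_prob(x_adv) > 0.5)   # False: perturbation flips it to spoof
```

The paper's contribution sits on top of this basic mechanism: instead of following only the raw loss gradient, the SFA module biases the gradient direction with a data distribution prior so that the perturbation targets the live/spoof discrimination itself.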
Sub-direction Classification (Seven Major Directions) | Image and Video Processing and Analysis
State Key Laboratory Planning Direction | Visual Information Processing
Associated Dataset Requiring Deposit | No
Document Type | Conference Paper
Item Identifier | http://ir.ia.ac.cn/handle/173211/57549
Collection | Pattern Recognition Laboratory
Corresponding Author | Wang, Wei
Author Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences; 2. Center for Research on Intelligent Perception and Computing, CASIA; 3. SenseTime Research
First Author Affiliation | State Key Laboratory of Pattern Recognition
Corresponding Author Affiliation | State Key Laboratory of Pattern Recognition
Recommended Citation (GB/T 7714) | Yang, Songlin, Wang, Wei, Xu, Chenye, et al. Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models[C], 2023.
Files in This Item |
File Name / Size | Document Type | Version Type | Access Type | License
CVPRW_2023_Face_Anti (1766KB) | Conference Paper | | Open Access | CC BY-NC-SA