Towards Interpretable Defense Against Adversarial Attacks via Causal Inference
Min Ren (1,2); Yun-Long Wang (2); Zhao-Feng He (3)
Published in: Machine Intelligence Research
ISSN: 2731-538X
Year: 2022
Volume: 19, Issue: 3, Pages: 209-226
Abstract

Deep learning-based models are vulnerable to adversarial attacks. Defense against adversarial attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks. Most of the existing methods are merely stopgaps for specific adversarial samples. The main obstacle is that it remains unclear how adversarial samples fool deep learning models. The underlying working mechanism of adversarial samples has not been well explored, and it is the bottleneck of adversarial attack defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples. The self-attention/transformer is adopted as a powerful tool in this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed, and instructive analysis is provided. Then, we propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments are conducted to demonstrate the effectiveness of the proposed methods. Our methods outperform the state-of-the-art defense methods under various adversarial attacks.
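To make the threat model concrete, below is a minimal sketch of how an adversarial sample can be generated with the fast gradient sign method (FGSM), a standard gradient-based attack. This is generic PyTorch background, not the causal detection method proposed in the paper; the model, inputs, labels, and epsilon here are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # Work on a detached copy so gradients flow only into the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step that maximally increases the loss
    # within an L-infinity ball of radius epsilon.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image a valid image in [0, 1].
    return x_adv.clamp(0.0, 1.0).detach()

The resulting perturbation is bounded by epsilon in the L-infinity norm, so it is typically imperceptible to humans, yet it often flips the model's prediction; this fooling behavior is the working mechanism that the paper interprets through its causal model.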

Keywords: Adversarial sample, adversarial defense, causal inference, interpretable machine learning, transformers
DOI: 10.1007/s11633-022-1330-7
Language: English
Sub-direction classification (seven major directions): Other
State Key Laboratory planning direction classification: Other
Associated dataset requiring deposit:
Chinese-language introduction: https://mp.weixin.qq.com/s/NngzImLUoGz2oMR15cSXsA
Citation statistics
Times cited (WOS): 5
Document type: Journal article
Item identifier: http://ir.ia.ac.cn/handle/173211/55942
Collection: Academic Journals_Machine Intelligence Research
Affiliations:
1. University of Chinese Academy of Sciences, Beijing 100190, China
2.Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
3.Laboratory of Visual Computing and Intelligent System, Beijing University of Posts and Telecommunications, Beijing 100876, China
First author's affiliation: National Laboratory of Pattern Recognition
Recommended citation:
GB/T 7714: Min Ren, Yun-Long Wang, Zhao-Feng He. Towards Interpretable Defense Against Adversarial Attacks via Causal Inference[J]. Machine Intelligence Research, 2022, 19(3): 209-226.
APA: Min Ren, Yun-Long Wang, & Zhao-Feng He. (2022). Towards Interpretable Defense Against Adversarial Attacks via Causal Inference. Machine Intelligence Research, 19(3), 209-226.
MLA: Min Ren, et al. "Towards Interpretable Defense Against Adversarial Attacks via Causal Inference". Machine Intelligence Research 19.3 (2022): 209-226.
Files in this item:
MIR-2022-02-052.pdf (5143 KB) | Document type: Journal article | Version: Published version | Access: Open access | License: CC BY-NC-SA
 
