Binary thresholding defense against adversarial attacks
Yutong Wang1,2; Wenwen Zhang1,3; Tianyu Shen1,2; Hui Yu4; Fei-Yue Wang1
Journal: Neurocomputing
ISSN: 0925-2312
Year: 2021
Volume: 445, Issue: 445, Pages: 61-71
Corresponding author: Wang, Fei-Yue (feiyue.wang@ia.ac.cn)
Abstract

Convolutional neural networks are vulnerable to adversarial attacks. In recent research, Projected Gradient Descent (PGD) has been recognized as the most effective attack method, and adversarial training on adversarial examples generated by the PGD attack is the most reliable defense. However, adversarial training requires a large amount of computation time. In this paper, we propose a fast, simple, and strong defense method that achieves the best speed-accuracy trade-off. We first compare the feature maps of a naturally trained model with those of an adversarially trained model of the same architecture, and find that the key to the adversarially trained model lies in the binary thresholding its convolutional layers perform. Inspired by this, we apply binary thresholding to preprocess the input image and defend against the PGD attack. On MNIST, our defense achieves 99.0% accuracy on clean images and 91.2% on white-box adversarial images. This performance is slightly better than adversarial training, and our method largely saves the computation time required for retraining. On Fashion-MNIST and CIFAR-10, we train a new model on binarized images and use it to defend against attacks. Although its performance is not as good as that of adversarial training, it attains the best speed-accuracy trade-off.
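The preprocessing defense the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value of 0.5 and the [0, 1] pixel range are assumptions, and the paper may tune the threshold per dataset.

```python
import numpy as np

def binarize(images, threshold=0.5):
    """Binary-threshold input images as a preprocessing defense.

    Assumes pixel values in [0, 1]; the 0.5 threshold is an assumption.
    Any adversarial perturbation that does not push a pixel across the
    threshold is erased entirely before the image reaches the classifier.
    """
    return (images >= threshold).astype(np.float32)

# A perturbation of magnitude 0.1 on a dark pixel (0.2) and a bright
# pixel (0.8) is removed: the binarized clean and perturbed inputs match.
clean = np.array([0.2, 0.8])
adv = clean + np.array([0.1, -0.1])   # small white-box-style perturbation
assert np.array_equal(binarize(clean), binarize(adv))
```

Because thresholding quantizes each pixel to 0 or 1, the classifier (trained or evaluated on binarized inputs) only sees perturbations large enough to flip pixels across the threshold, which is what makes the defense cheap compared with retraining via adversarial training.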

Keywords: Binary thresholding; Defense; Adversarial training; Adversarial attack
DOI: 10.1016/j.neucom.2021.03.036
Indexed by: SCI
Language: English
WOS research area: Computer Science
WOS category: Computer Science, Artificial Intelligence
WOS accession number: WOS:000652811800006
Publisher: ELSEVIER
Sub-direction classification: Machine gaming
Citation statistics
Times cited: 9 [WOS]
Document type: Journal article
Item identifier: http://ir.ia.ac.cn/handle/173211/44700
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems, Parallel Intelligence Technology and Systems Team
Affiliations: 1. The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences
2.School of Artificial Intelligence, University of Chinese Academy of Sciences
3.School of Software Engineering, Xi'an Jiaotong University
4.School of Creative Technologies, University of Portsmouth
First author's affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author's affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Yutong Wang, Wenwen Zhang, Tianyu Shen, et al. Binary thresholding defense against adversarial attacks[J]. Neurocomputing, 2021, 445(445): 61-71.
APA: Yutong Wang, Wenwen Zhang, Tianyu Shen, Hui Yu, & Fei-Yue Wang. (2021). Binary thresholding defense against adversarial attacks. Neurocomputing, 445(445), 61-71.
MLA: Yutong Wang, et al. "Binary thresholding defense against adversarial attacks". Neurocomputing 445.445 (2021): 61-71.
Files in this item: 1-s2.0-S0925231221004045-main.pdf (1771 KB), journal article, author accepted manuscript, open access, CC BY-NC-SA
