CASIA OpenIR

Browse/Search Results: 5 items in total, showing items 1-5

Interpretability of Neural Networks Based on Game-theoretic Interactions [Journal Article]
Machine Intelligence Research, 2024, Volume: 21, Issue: 4, Pages: 718-739
Authors: Huilin Zhou; Jie Ren; Huiqi Deng; Xu Cheng; Jinpeng Zhang; Quanshi Zhang
Submitted: 2024/07/18
Keywords: Model interpretability and transparency, explainable AI, game theory, interaction, deep learning
Deep Industrial Image Anomaly Detection: A Survey [Journal Article]
Machine Intelligence Research, 2024, Volume: 21, Issue: 1, Pages: 104-135
Authors: Jiaqi Liu; Guoyang Xie; Jinbao Wang; Shangnian Li; Chengjie Wang; Feng Zheng; Yaochu Jin
Submitted: 2024/04/23
Keywords: Image anomaly detection, defect detection, industrial manufacturing, deep learning, computer vision
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks [Journal Article]
Machine Intelligence Research, 2023, Volume: 20, Issue: 2, Pages: 180-193
Authors: Zhengyan Zhang; Guangxuan Xiao; Yongwei Li; Tian Lv; Fanchao Qi; Zhiyuan Liu; Yasheng Wang; Xin Jiang; Maosong Sun
Submitted: 2024/04/23
Keywords: Pre-trained language models, backdoor attacks, transformers, natural language processing (NLP), computer vision (CV)
Pre-training in Medical Data: A Survey [Journal Article]
Machine Intelligence Research, 2023, Volume: 20, Issue: 2, Pages: 147-179
Authors: Yixuan Qiu; Feng Lin; Weitong Chen; Miao Xu
Submitted: 2024/04/23
Keywords: Medical data, pre-training, transfer learning, self-supervised learning, medical image data, electrocardiogram (ECG) data
Denoised Internal Models: A Brain-inspired Autoencoder Against Adversarial Attacks [Journal Article]
Machine Intelligence Research, 2022, Volume: 19, Issue: 5, Pages: 456-471
Authors: Kai-Yuan Liu; Xing-Yu Li; Yu-Rui Lai; Hang Su; Jia-Chen Wang; Chun-Xu Guo; Hong Xie; Ji-Song Guan; Yi Zhou
Submitted: 2024/04/23
Keywords: Brain-inspired learning, autoencoder, robustness, adversarial attack, generative model