CASIA OpenIR

Browse/search results: 2 items, showing 1-2

Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression (Journal article)
ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, Volume: 23, Issue: 2, Pages: 1-19
Authors: Zhao Yang; Yuanzhe Zhang; Dianbo Sui; Yiming Ju; Jun Zhao; Kang Liu
Adobe PDF (1250 KB)  |  Submitted: 2024/05/30
Keywords: Explanation; knowledge distillation; model compression
Transformers in computational visual media: A survey (Journal article)
Computational Visual Media, 2021, Volume: 8, Issue: 1, Pages: 33-62
Authors: Xu, Yifan; Wei, Huapeng; Lin, Minxuan; Deng, Yingying; Sheng, Kekai; Zhang, Mengdan; Tang, Fan; Dong, Weiming; Huang, Feiyue; Xu, Changsheng
Adobe PDF (5366 KB)  |  Submitted: 2021/12/28
Keywords: visual transformer; computational visual media (CVM); high-level vision; low-level vision; image generation; multi-modal learning