Knowledge Commons of Institute of Automation, CAS
Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression
Zhao Yang (1,2); Yuanzhe Zhang (1,2); Dianbo Sui (3); Yiming Ju (1,2); Jun Zhao (1,2); Kang Liu (1,2)
Journal | ACM Transactions on Asian and Low-Resource Language Information Processing |
ISSN | 2375-4699 |
Publication Year | 2024 |
Volume | 23 |
Issue | 2 |
Pages | 1-19 |
Corresponding Authors | Zhang, Yuanzhe (yuanzhe.zhang@nlpr.ia.ac.cn); Liu, Kang (kliu@nlpr.ia.ac.cn) |
Abstract | Knowledge distillation is widely used in pre-trained language model compression, as it can transfer knowledge from a cumbersome model to a lightweight one. Although knowledge distillation-based model compression has achieved promising performance, we observe that the explanations of the teacher model and the student model are not consistent. We argue that the student model should learn not only the predictions of the teacher model but also its internal reasoning process. To this end, we propose Explanation Guided Knowledge Distillation (EGKD) in this article, which utilizes explanations to represent the thinking process and improve knowledge distillation. To obtain explanations in our distillation framework, we select three typical explanation methods rooted in different mechanisms, namely gradient-based, perturbation-based, and feature selection methods. Then, to improve computational efficiency, we propose different optimization strategies to utilize the explanations obtained by these three explanation methods, which provide the student model with better learning guidance. Experimental results on GLUE demonstrate that leveraging explanations can improve the performance of the student model. Moreover, our EGKD can also be applied to model compression with different architectures. |
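To make the approach in the abstract concrete, below is a minimal PyTorch sketch of explanation-guided distillation with a gradient-based explainer, the first of the three explanation methods the paper names. The model interface (a module mapping token embeddings to classification logits), the shared embedding space between teacher and student, the saliency definition, and the loss weights `alpha` and `beta` are illustrative assumptions, not details taken from the paper.

```python
# Sketch of explanation-guided KD: the student matches the teacher's soft
# predictions (standard KD) and its gradient-based token saliencies.
# Assumptions: both models map token embeddings -> classification logits
# and share an embedding space; loss weights are arbitrary.
import torch
import torch.nn.functional as F

def token_saliency(model, embeds, labels, create_graph=False):
    """Gradient-based explanation: one importance score per input token."""
    embeds = embeds.detach().requires_grad_(True)
    loss = F.cross_entropy(model(embeds), labels)
    (grads,) = torch.autograd.grad(loss, embeds, create_graph=create_graph)
    return grads.norm(dim=-1)  # shape: (batch, seq_len)

def egkd_loss(student, teacher, embeds, labels, T=2.0, alpha=0.5, beta=0.1):
    logits_s = student(embeds)
    with torch.no_grad():
        logits_t = teacher(embeds)

    ce = F.cross_entropy(logits_s, labels)               # task loss
    kd = F.kl_div(F.log_softmax(logits_s / T, dim=-1),   # soft-label KD
                  F.softmax(logits_t / T, dim=-1),
                  reduction="batchmean") * T * T

    # Explanation alignment: keep the graph on the student side so this
    # term updates the student's parameters; detach the teacher's saliency.
    sal_s = token_saliency(student, embeds, labels, create_graph=True)
    sal_t = token_saliency(teacher, embeds, labels).detach()
    expl = F.mse_loss(F.normalize(sal_s, dim=-1), F.normalize(sal_t, dim=-1))

    return ce + alpha * kd + beta * expl
```

The perturbation-based and feature-selection variants mentioned in the abstract would slot in by replacing `token_saliency` with the corresponding explainer; the alignment term and the overall loss structure stay the same.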
Keywords | Explanation; knowledge distillation; model compression |
DOI | https://doi.org/10.1145/3639364 |
Indexed By | SCI |
Language | English |
Funding Project | National Key R&D Program of China [2022YFF0711900]; National Natural Science Foundation of China [61831022]; National Natural Science Foundation of China [62276264]; National Natural Science Foundation of China [62306087]; Yunnan Provincial Major Science and Technology Special Plan Projects [202202AD080004]; Youth Innovation Promotion Association CAS; Natural Science Foundation of Shandong Province [ZR2023QF154] |
Funding Organization | National Key R&D Program of China; National Natural Science Foundation of China; Yunnan Provincial Major Science and Technology Special Plan Projects; Youth Innovation Promotion Association CAS; Natural Science Foundation of Shandong Province |
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS ID | WOS:001193524700014 |
Publisher | Association for Computing Machinery |
Sub-direction Classification (Seven Major Directions) | Natural Language Processing |
State Key Laboratory Planning Direction | Speech and Language Processing |
Associated Dataset to Be Deposited | No |
Document Type | Journal Article |
Identifier | http://ir.ia.ac.cn/handle/173211/56723 |
Collection | Laboratory of Cognition and Decision Intelligence for Complex Systems |
Corresponding Author | Kang Liu |
Author Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences, China; 2. The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China; 3. Harbin Institute of Technology, Weihai, China |
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences |
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Zhao Yang, Yuanzhe Zhang, Dianbo Sui, et al. Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression[J]. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, 23(2): 1-19. |
APA | Zhao Yang, Yuanzhe Zhang, Dianbo Sui, Yiming Ju, Jun Zhao, & Kang Liu. (2024). Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression. ACM Transactions on Asian and Low-Resource Language Information Processing, 23(2), 1-19. |
MLA | Zhao Yang, et al. "Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression." ACM Transactions on Asian and Low-Resource Language Information Processing 23.2 (2024): 1-19. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
杨朝-TALLIP.pdf (1250KB) | Journal Article | Author Accepted Manuscript | Open Access | CC BY-NC-SA |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.