Knowledge Commons of Institute of Automation, CAS
Distill and Replay for Continual Language Learning
Sun, Jingyuan1,2; Wang, Shaonan1,2; Zhang, Jiajun1,2,4; Zong, Chengqing1,2,3
2020-12
Conference Name | International Conference on Computational Linguistics
Conference Date | 2020-12-8
Conference Venue | Barcelona, Spain (Online)
Publisher | ACL
Abstract | Accumulating knowledge to tackle new tasks without necessarily forgetting the old ones is a hallmark of human-like intelligence. But the dominant paradigm in machine learning is still to train a model that works well on static datasets. When learning tasks in a stream where the data distribution may fluctuate, fitting new tasks often leads to forgetting the previous ones. We propose a simple yet effective framework that continually learns natural language understanding tasks with one model. Our framework distills knowledge and replays experience from previous tasks when fitting a new task, and is thus named DnR (distill and replay). The framework is based on language models and can be built smoothly on different language model architectures. Experimental results demonstrate that DnR outperforms previous state-of-the-art models in continually learning tasks of the same type but from different domains, as well as tasks of radically different types. With the distillation method, we further show that it is possible for DnR to incrementally compress the model size while still outperforming most of the baselines. We hope that DnR will promote the empirical application of continual language learning and contribute to building human-level language intelligence minimally troubled by catastrophic forgetting.
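For readers unfamiliar with the distill-and-replay recipe the abstract describes, the sketch below illustrates the general idea in PyTorch-style code: before fitting each new task, the current model is frozen as a teacher, and training then mixes new-task data with a replay buffer of past examples while a distillation term penalizes drift from the teacher's predictions. This is a minimal sketch of the general technique, not the authors' implementation; the names train_continually, make_loader, lambda_distill, and buffer_per_task are illustrative assumptions, the buffer is a naive random sample, and the paper's actual losses, buffer policy, and model architecture may differ.

import copy
import random
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened teacher and student outputs.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

def train_continually(model, tasks, make_loader, lambda_distill=1.0,
                      buffer_per_task=64, epochs=1, lr=1e-4):
    # Fit `model` on a stream of tasks, replaying stored examples and
    # distilling from a frozen snapshot taken before each new task.
    # (Hypothetical helper names; not from the paper.)
    replay_buffer = []  # small sample of (inputs, labels) from past tasks
    for task_data in tasks:
        # Freeze the pre-task model as a teacher once there is anything to preserve.
        teacher = copy.deepcopy(model).eval() if replay_buffer else None
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            # Mix new-task data with replayed examples from earlier tasks.
            for inputs, labels in make_loader(task_data + replay_buffer):
                logits = model(inputs)
                loss = F.cross_entropy(logits, labels)  # fit the new task
                if teacher is not None:
                    with torch.no_grad():
                        teacher_logits = teacher(inputs)
                    # Penalize drift from the old model's predictions.
                    loss = loss + lambda_distill * distill_loss(logits, teacher_logits)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # Keep a few examples from this task for future replay.
        replay_buffer.extend(
            random.sample(task_data, min(buffer_per_task, len(task_data))))
    return model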
Language | English
Sub-direction Classification (Seven Major Directions) | Natural Language Processing
Document Type | Conference Paper
Item Identifier | http://ir.ia.ac.cn/handle/173211/45002
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Natural Language Processing
Author Affiliations | 1. National Laboratory of Pattern Recognition, CASIA, Beijing, China; 2. University of Chinese Academy of Sciences, Beijing, China; 3. CAS Center for Excellence in Brain Science and Intelligence Technology, China; 4. Beijing Academy of Artificial Intelligence, Beijing, China
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Sun, Jingyuan, Wang, Shaonan, Zhang, Jiajun, et al. Distill and Replay for Continual Language Learning[C]. ACL, 2020.
Files in This Item
File Name/Size | Document Type | Version Type | Open Access Type | License
2020.coling-main.318 (769KB) | Conference Paper | | Open Access | CC BY-NC-SA