Knowledge Commons of Institute of Automation, CAS
Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation
Minglun Han 1,2; Feilong Chen 1,3; Jing Shi 1; Shuang Xu 1; Bo Xu 1,2,3
2023-05
Conference Name | The 24th INTERSPEECH Conference
Conference Date | 2023-08-20
Conference Venue | Dublin, Ireland
Abstract | Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and the insufficient utilization of PLMs. To alleviate these problems, we propose hierarchical knowledge distillation (HKD) on continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge distillation with a contrastive loss at the acoustic level and knowledge distillation with a regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reductions on the AISHELL-1 and LibriSpeech datasets, respectively. (An illustrative sketch of the two distillation losses appears after the file listing below.)
Indexed By | EI
Language | English
Representative Paper | Yes
Sub-direction Classification (Seven Major Directions) | Speech Recognition and Synthesis
State Key Laboratory Planned Research Direction | Speech and Language Processing
Associated Dataset to Be Deposited | No
Document Type | Conference Paper
Identifier | http://ir.ia.ac.cn/handle/173211/52064
Collection | Laboratory of Cognition and Decision Intelligence for Complex Systems, Auditory Models and Cognitive Computing
Corresponding Author | Jing Shi
Affiliations | 1. Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. School of Future Technology, University of Chinese Academy of Sciences
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Minglun Han, Feilong Chen, Jing Shi, et al. Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation[C], 2023.
Files in This Item:
File Name/Size | Document Type | Version | Access Type | License
interspeech2023-spee(563KB) | Conference Paper | | Open Access | CC BY-NC-SA
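
As context for the abstract above: HKD combines a contrastive distillation loss at the acoustic level with a regression distillation loss at the linguistic level. The PyTorch sketch below shows one common way such losses can be formed; the function names, tensor shapes, and loss weights (lambda_a, lambda_l) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_kd_loss(acoustic_emb, text_emb, temperature=0.1):
    # InfoNCE-style contrastive loss: each acoustic-level embedding
    # (e.g., a CIF output) is pulled toward its paired PLM token
    # embedding; matched pairs form the diagonal positives.
    a = F.normalize(acoustic_emb, dim=-1)   # [N, D]
    t = F.normalize(text_emb, dim=-1)       # [N, D]
    logits = a @ t.t() / temperature        # [N, N] similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def regression_kd_loss(student_hidden, teacher_hidden):
    # Regression loss (MSE here) matching linguistic-level student
    # states to PLM (teacher) hidden states.
    return F.mse_loss(student_hidden, teacher_hidden)

def hkd_total_loss(asr_loss, acoustic_emb, text_emb,
                   student_hidden, teacher_hidden,
                   lambda_a=1.0, lambda_l=1.0):
    # Hypothetical combined objective: the base ASR loss plus the two
    # distillation terms; lambda_a and lambda_l are assumed weights.
    return (asr_loss
            + lambda_a * contrastive_kd_loss(acoustic_emb, text_emb)
            + lambda_l * regression_kd_loss(student_hidden, teacher_hidden))
```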