Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation
Minglun Han¹,²; Feilong Chen¹,³; Jing Shi¹; Shuang Xu¹; Bo Xu¹,²,³
2023-05
Conference Name: The 24th INTERSPEECH Conference
Conference Date: 2023-08-20
Conference Venue: Dublin, Ireland
Abstract

Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and insufficient utilization of their knowledge. To alleviate these problems, we propose hierarchical knowledge distillation (HKD) for continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge distillation with contrastive loss at the acoustic level and knowledge distillation with regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
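The abstract outlines two distillation objectives: a contrastive loss that aligns acoustic-level representations with PLM embeddings, and a regression loss that matches linguistic-level representations to PLM hidden states. As a rough illustration only, the PyTorch sketch below shows one plausible form of these two losses; all tensor names, shapes, loss forms, and weights are assumptions made for exposition, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def acoustic_contrastive_loss(cif_emb, plm_emb, temperature=0.1):
        # Acoustic-level cross-modal distillation, sketched as an
        # InfoNCE-style contrastive loss (assumed form).
        # cif_emb: (N, D) token-level acoustic embeddings from the CIF module.
        # plm_emb: (N, D) PLM embeddings at the same token positions.
        a = F.normalize(cif_emb, dim=-1)
        t = F.normalize(plm_emb, dim=-1)
        logits = a @ t.T / temperature                      # (N, N) similarities
        targets = torch.arange(a.size(0), device=a.device)  # diagonal = positive pairs
        return F.cross_entropy(logits, targets)

    def linguistic_regression_loss(dec_hidden, plm_hidden):
        # Linguistic-level distillation: regress ASR decoder states onto
        # the corresponding PLM hidden states (L1 chosen arbitrarily here).
        return F.l1_loss(dec_hidden, plm_hidden)

    def total_loss(asr_loss, cif_emb, plm_emb, dec_hidden, plm_hidden,
                   w_con=1.0, w_reg=1.0):
        # Hypothetical combined objective; the weights are placeholders.
        return (asr_loss
                + w_con * acoustic_contrastive_loss(cif_emb, plm_emb)
                + w_reg * linguistic_regression_loss(dec_hidden, plm_hidden))

In this sketch, each CIF output and its paired PLM embedding form a positive pair scored against in-batch negatives, while the regression term simply pulls decoder states toward the PLM states.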

Indexed By: EI
Language: English
Representative Paper:
Seven Major Directions / Sub-direction: Speech Recognition and Synthesis
State Key Laboratory Planned Direction: Speech and Language Processing
Associated Dataset Requiring Deposit:
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/52064
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems / Auditory Models and Cognitive Computing
Corresponding Author: Jing Shi
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. School of Future Technology, University of Chinese Academy of Sciences
First Author's Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author's Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Minglun Han, Feilong Chen, Jing Shi, et al. Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation[C], 2023.
Files in This Item:
File Name/Size: interspeech2023-speech.pdf (563KB) | Document Type: Conference Paper | Access: Open Access | License: CC BY-NC-SA | Format: Adobe PDF