Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT
Ye Bai; Jiangyan Yi; Jianhua Tao; Zhengkun Tian; Zhengqi Wen; Shuai Zhang
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Year: 2021
Issue: 29, Pages: 1897–1911
Abstract

Attention-based encoder-decoder (AED) models have achieved promising performance in speech recognition. However, because the decoder predicts text tokens (such as characters or words) in an autoregressive manner, it is difficult for an AED model to predict all tokens in parallel, which makes inference relatively slow. To address this, we propose an end-to-end non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once). The model aggregates encoded speech features into hidden representations corresponding to each token position with attention mechanisms. Thus, the model captures token relations by self-attention over the aggregated hidden representations from the whole speech signal, rather than by autoregressive modeling on tokens. Without explicit autoregressive language modeling, the model predicts all tokens in the sequence in parallel, so inference is efficient. Moreover, we propose a cross-modal transfer learning method that uses a text-modal language model to improve the performance of the speech-modal LASO by aligning token semantics. We conduct experiments on two public Chinese speech datasets of different scales, AISHELL-1 and AISHELL-2. Experimental results show that our proposed model achieves a speedup of about 50× with competitive performance compared with autoregressive transformer models, and that cross-modal knowledge transfer from the text-modal model improves the performance of the speech-modal model.
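The decoding flow the abstract describes can be sketched numerically: learned position queries attend over the encoder output to pull one hidden vector per token slot, self-attention relates those slots, and a single linear projection emits every token at once. The sketch below is a minimal single-head NumPy illustration, not the paper's implementation (which uses multi-head attention, feed-forward layers, and trained parameters); all names and dimensions here are hypothetical, and the final KL term only gestures at the BERT teacher-student transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention (single head)
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

# toy dimensions (hypothetical): speech frames, token slots, hidden size, vocab size
T, L, d, vocab = 40, 8, 16, 30

enc = rng.normal(size=(T, d))      # encoder output over the whole utterance
queries = rng.normal(size=(L, d))  # learned position queries, one per token slot

# 1) aggregate speech features into one hidden vector per token position
agg = attention(queries, enc, enc)       # (L, d)

# 2) capture token relations via self-attention over the aggregated states
hidden = attention(agg, agg, agg)        # (L, d)

# 3) predict every token in parallel -- no autoregressive loop
W = rng.normal(size=(d, vocab))
logits = hidden @ W                      # (L, vocab)
tokens = logits.argmax(axis=-1)          # all L tokens decoded at once

# Cross-modal transfer (sketch): match the student's distribution to a
# text-modal teacher's (e.g. BERT's) per-token distribution via KL divergence.
teacher_logits = rng.normal(size=(L, vocab))  # stand-in for teacher outputs
p_t, p_s = softmax(teacher_logits), softmax(logits)
kd_loss = np.sum(p_t * (np.log(p_t) - np.log(p_s))) / L
```

Because step 3 has no dependence on previously emitted tokens, all `L` positions are computed in one pass, which is the source of the reported inference speedup over autoregressive decoding.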

Keywords: end-to-end speech recognition, transfer learning, knowledge distillation, teacher-student learning, BERT, non-autoregressive speech recognition
Research area: Speech Recognition and Synthesis
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/44977
Department: State Key Laboratory of Multimodal Artificial Intelligence Systems — Intelligent Interaction
Affiliation: Institute of Automation, Chinese Academy of Sciences
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Ye Bai, Jiangyan Yi, Jianhua Tao, et al. Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021(29): 1897–1911.
APA: Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, & Shuai Zhang. (2021). Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing(29), 1897–1911.
MLA: Ye Bai, et al. "Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT". IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 1897–1911.