Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition
Zhengkun Tian [1,2]; Jiangyan Yi [1]; Jianhua Tao [1,2,3]; Shuai Zhang [1,2]; Zhengqi Wen [1]
Journal: IEEE SIGNAL PROCESSING LETTERS
Publication Date: 2022-02
Pages: 762-766
Abstract

Autoregressive (AR) models, such as attention-based encoder-decoder models and the RNN-Transducer, have achieved great success in speech recognition. They predict the output sequence conditioned on the previously emitted tokens and the encoded acoustic states, which makes inference inefficient on GPUs. Non-autoregressive (NAR) models remove the temporal dependency between output tokens and predict all output tokens in a single inference step. However, NAR models still face two major problems. First, there remains a considerable performance gap between NAR models and advanced AR models. Second, most NAR models are difficult to train and slow to converge. We propose a hybrid autoregressive and non-autoregressive transformer (HANAT) model, which integrates the AR and NAR models deeply by sharing parameters. We assume that the AR model helps the NAR model learn linguistic dependencies and accelerates its convergence. Furthermore, a two-stage hybrid inference is applied to improve model performance. All experiments are conducted on the Mandarin dataset AISHELL-1 and the English dataset LibriSpeech (960 h). The results show that HANAT achieves performance competitive with the AR model and outperforms many more complicated NAR models. Moreover, its real-time factor (RTF) is only 1/5 that of the AR model.
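One plausible reading of the two-stage hybrid inference is a parallel NAR first pass over all-masked targets, followed by a single teacher-forced pass through the same shared-parameter decoder run causally. The Python sketch below is illustrative only and is not the authors' implementation: encoder, decoder, the causal flag, mask_id, and sos_id are hypothetical stand-ins for whatever interface the actual model exposes.

import torch

@torch.no_grad()
def two_stage_hybrid_inference(encoder, decoder, feats, max_len, mask_id, sos_id):
    """Hypothetical sketch: NAR first pass, then AR refinement with the
    same shared-parameter decoder (HANAT-style two-stage inference)."""
    # Encode the acoustic features once; both decoding passes reuse these states.
    enc_states = encoder(feats)                                # (B, T, D)
    batch = feats.size(0)
    # Stage 1 (NAR): feed all-<mask> targets and predict every position in parallel.
    mask_in = torch.full((batch, max_len), mask_id, dtype=torch.long)
    nar_logits = decoder(mask_in, enc_states, causal=False)    # (B, L, V)
    first_pass = nar_logits.argmax(dim=-1)                     # first-pass tokens
    # Stage 2 (AR): teacher-force the first-pass hypothesis through the same
    # decoder with a causal mask. This is still one parallel forward pass, so
    # it avoids the token-by-token loop of ordinary AR decoding.
    ar_in = torch.cat(
        [torch.full((batch, 1), sos_id, dtype=torch.long), first_pass[:, :-1]],
        dim=1,
    )
    ar_logits = decoder(ar_in, enc_states, causal=True)
    return ar_logits.argmax(dim=-1)                            # refined hypothesis

Because the second stage teacher-forces the complete first-pass hypothesis, it also completes in one parallel step under these assumptions, which is consistent with the reported RTF being roughly 1/5 that of a purely autoregressive decoder.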

Indexed by: SCI
Language: English
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/48614
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems / Intelligent Interaction
Corresponding Authors: Jiangyan Yi; Jianhua Tao
Affiliations:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714: Zhengkun Tian, Jiangyan Yi, Jianhua Tao, et al. Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition[J]. IEEE SIGNAL PROCESSING LETTERS, 2022: 762-766.
APA: Zhengkun Tian, Jiangyan Yi, Jianhua Tao, Shuai Zhang, & Zhengqi Wen. (2022). Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition. IEEE SIGNAL PROCESSING LETTERS, 762-766.
MLA: Zhengkun Tian, et al. "Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition". IEEE SIGNAL PROCESSING LETTERS (2022): 762-766.
Files in This Item:
File Name: Hybrid_Autoregressive_and_Non-Autoregressive_Transformer_Models_for_Speech_Recognition.pdf (934 KB)
Format: Adobe PDF
Document Type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA