Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition
Zhengkun Tian1,2; Jiangyan Yi1; Jianhua Tao1,2,3; Shuai Zhang1,2; Zhengqi Wen1
Source Publication: IEEE SIGNAL PROCESSING LETTERS
Date: 2022-02
Pages: 762-766
Abstract

Autoregressive (AR) models, such as attention-based encoder-decoder models and the RNN-Transducer, have achieved great success in speech recognition. They predict each output token conditioned on the previously generated tokens and the encoded acoustic states, which makes inference inefficient on GPUs. Non-autoregressive (NAR) models remove the temporal dependency between output tokens and predict the entire output sequence in a single inference step. However, NAR models still face two major problems. First, there is still a large performance gap between NAR models and advanced AR models. Second, most NAR models are difficult to train and slow to converge. We propose a hybrid autoregressive and non-autoregressive transformer (HANAT) model, which integrates AR and NAR models deeply by sharing parameters. We assume that the AR model will assist the NAR model in learning linguistic dependencies and accelerate its convergence. Furthermore, a two-stage hybrid inference is applied to improve model performance. All experiments are conducted on the Mandarin dataset AISHELL-1 and the English dataset LibriSpeech-960h. The results show that HANAT achieves performance competitive with the AR model and outperforms many more complicated NAR models. Moreover, the RTF is only 1/5 that of the AR model.
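For intuition, the following minimal PyTorch sketch illustrates the two ideas the abstract describes: a single Transformer decoder whose parameters are shared between a causal (AR) pass and a parallel (NAR) pass, and a two-stage inference in which the NAR pass drafts a hypothesis that the same decoder then rescores autoregressively. This is only an illustration of the general technique, not the authors' implementation; the names (HybridDecoder, two_stage_decode, mask_id) are hypothetical, and details such as output-length prediction are omitted.

    import torch
    import torch.nn as nn

    class HybridDecoder(nn.Module):
        # One Transformer decoder reused in both AR and NAR modes,
        # so the two decoding styles share every parameter.
        def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=6):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(layer, num_layers)
            self.proj = nn.Linear(d_model, vocab_size)

        def forward(self, tokens, enc_out, causal):
            x = self.embed(tokens)
            mask = None
            if causal:  # AR mode: position t attends only to positions <= t
                L = tokens.size(1)
                mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
            h = self.decoder(x, enc_out, tgt_mask=mask)
            return self.proj(h)

    def two_stage_decode(decoder, enc_out, mask_id, out_len):
        # Stage 1 (NAR): feed placeholder tokens and predict every
        # output position in a single parallel forward pass.
        placeholders = torch.full((1, out_len), mask_id, dtype=torch.long)
        hyp = decoder(placeholders, enc_out, causal=False).argmax(-1)
        # Stage 2 (AR): run the same decoder causally over the NAR draft
        # and use its log-probability to rescore (or revise) the draft.
        logp = decoder(hyp, enc_out, causal=True).log_softmax(-1)
        score = logp[0, :-1].gather(-1, hyp[0, 1:, None]).sum()
        return hyp, score

    # Dummy usage: enc_out stands in for the acoustic encoder states.
    enc_out = torch.randn(1, 50, 256)
    model = HybridDecoder(vocab_size=1000).eval()
    with torch.no_grad():
        hyp, score = two_stage_decode(model, enc_out, mask_id=0, out_len=20)

Because both passes reuse one set of weights, training the AR objective can regularize the NAR one, which is the intuition behind the claimed faster convergence.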

Indexed By: SCI
Language: English
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/48614
Collection: National Laboratory of Pattern Recognition, Intelligent Interaction
Corresponding Author: Jiangyan Yi; Jianhua Tao
Affiliation:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Zhengkun Tian, Jiangyan Yi, Jianhua Tao, et al. Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition[J]. IEEE SIGNAL PROCESSING LETTERS, 2022: 762-766.
APA: Zhengkun Tian, Jiangyan Yi, Jianhua Tao, Shuai Zhang, & Zhengqi Wen. (2022). Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition. IEEE SIGNAL PROCESSING LETTERS, 762-766.
MLA: Zhengkun Tian, et al. "Hybrid Autoregressive and Non-Autoregressive Transformer Models for Speech Recognition". IEEE SIGNAL PROCESSING LETTERS (2022): 762-766.
Files in This Item:
File Name: Hybrid_Autoregressive_and_Non-Autoregressive_Transformer_Models_for_Speech_Recognition.pdf (934 KB)
Format: Adobe PDF
DocType: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.