Institutional Repository of the Chinese Academy of Sciences, Institute of Automation, National Laboratory of Pattern Recognition, Beijing 100190, People's Republic of China
Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT
Authors | Ye Bai; Jiangyan Yi; Jianhua Tao; Zhengkun Tian; Zhengqi Wen; Shuai Zhang
Source Publication | IEEE/ACM Transactions on Audio, Speech, and Language Processing
Publication Date | 2021
Issue | 29
Pages | 1897 - 1911
Abstract | Attention-based encoder-decoder (AED) models |
Keyword | End-to-End Speech Recognition, Transfer Learning, Knowledge Distillation, Teacher-Student Learning, BERT, Non-Autoregressive Speech Recognition
Sub-direction Classification | Speech Recognition and Synthesis
Document Type | Journal Article
Identifier | http://ir.ia.ac.cn/handle/173211/44977 |
Collection | National Laboratory of Pattern Recognition - Intelligent Interaction
Affiliation | Institute of Automation, Chinese Academy of Sciences |
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation GB/T 7714 | Ye Bai, Jiangyan Yi, Jianhua Tao, et al. Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021(29): 1897 - 1911.
APA | Ye Bai, Jiangyan Yi, Jianhua Tao, Zhengkun Tian, Zhengqi Wen, & Shuai Zhang. (2021). Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing(29), 1897 - 1911.
MLA | Ye Bai, et al. "Fast End-to-End Speech Recognition via Non-Autoregressive Models and Cross-Modal Knowledge Transferring from BERT". IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 1897 - 1911.
Files in This Item:
File Name/Size | DocType | Version | Access | License
turn2-v0.2.pdf (1163KB) | Journal Article | Author's Accepted Manuscript | Open Access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.