Shifted Chunk Encoder for Transformer Based Streaming End-to-End ASR
Wang FY(王方圆); Xu B(徐波)
Conference Name: ICONIP 2022
Conference Date: 2022-11-28
Conference Place: Indore, India

Currently, there are three main kinds of Transformer-encoder-based streaming end-to-end (E2E) automatic speech recognition (ASR) approaches: time-restricted methods, chunk-wise methods, and memory-based methods. Each of them has limitations in at least one of three aspects: linear computational complexity, global context modeling, and parallel training. In this work, we aim to build a streaming Transformer ASR model that combines all three advantages. In particular, we propose a shifted chunk mechanism for the chunk-wise Transformer that provides cross-chunk connections, significantly enhancing the global context modeling ability of chunk-wise models while preserving all their original merits. We integrate this scheme with the chunk-wise Transformer and Conformer, naming the results SChunk-Transformer and SChunk-Conformer, respectively. Experiments on AISHELL-1 show that the SChunk-Transformer and SChunk-Conformer achieve CERs of 6.43% and 5.77%, respectively, and their linear complexity makes it possible to train with large batches and infer more efficiently. Our models significantly outperform their conventional chunk-wise counterparts, while remaining competitive, with only a 0.22 absolute CER gap, against U2, which has quadratic complexity. They also achieve better CERs than existing chunk-wise or memory-based methods such as HS-DACS and MMA. Code is released.
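The core idea of the abstract — chunk-wise attention whose chunk boundaries shift between layers so that information can cross chunk borders — can be illustrated with a toy attention mask. This is a minimal sketch, not the authors' implementation: the function name, the shift convention, and the half-chunk shift value are all illustrative assumptions.

```python
def chunk_attention_mask(seq_len, chunk_size, shift=0):
    """Build a boolean attention mask for chunk-wise self-attention.

    mask[q][k] is True when query frame q may attend to key frame k.
    Frames attend only within their own chunk, so the attention cost
    is linear in sequence length. Applying a nonzero shift moves the
    chunk boundaries; alternating shifted and non-shifted layers lets
    information flow across the borders of the unshifted chunks.
    (Illustrative sketch only; shift convention is an assumption.)
    """
    # Assign each frame a chunk id; shifting offsets the boundaries.
    chunk_id = [(t + shift) // chunk_size for t in range(seq_len)]
    return [[chunk_id[q] == chunk_id[k] for k in range(seq_len)]
            for q in range(seq_len)]

# Regular chunks of size 4 over 8 frames: {0..3}, {4..7}.
regular = chunk_attention_mask(8, 4)
# Shifted by half a chunk: boundaries become {0,1}, {2..5}, {6,7},
# so frames 3 and 4 (separated in the regular layout) now connect.
shifted = chunk_attention_mask(8, 4, shift=2)
```

Stacking layers that alternate between the two masks gives every frame an indirect path to frames in neighboring chunks while each individual layer keeps the linear-complexity chunk structure.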

Sub-direction classification: Speech Recognition and Synthesis
Planning direction of the State Key Laboratory: Speech and Language Processing
Paper associated data
Document Type: Conference Paper
Corresponding Author: Wang FY(王方圆)
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Wang FY,Xu B. Shifted Chunk Encoder for Transformer Based Streaming End-to-End ASR[C],2022.
Files in This Item:
File Name/Size | DocType | Access | License
published-iconip2022.pdf (1374 KB) | Conference Paper | Open Access | CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.