A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese
Shiyu Zhou; Linhao Dong; Shuang Xu; Bo Xu
2018
Conference Name: ICONIP
Conference Proceedings: ICONIP
Issue: 2018
Conference Date: 2018
Conference Venue: Siem Reap, Cambodia
Abstract

The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or context-dependent phonemes (CD-phonemes) as their modeling units. However, this convention has been challenged by sequence-to-sequence attention-based models. On English ASR tasks, previous work has already shown that graphemes can outperform phonemes as modeling units in sequence-to-sequence attention-based models. In this paper, we compare modeling units for Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models built on the Transformer. Five modeling units are explored: context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on the HKUST dataset demonstrate that lexicon-free modeling units can outperform lexicon-related modeling units in terms of character error rate (CER). Among the five modeling units, the character-based model performs best and establishes a new state-of-the-art CER of 26.64% on the HKUST dataset.
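The comparison across all five modeling units is reported in character error rate (CER). As a minimal illustrative sketch (not part of the original record; function names are my own), CER is conventionally computed as the edit distance between the hypothesis and reference character sequences, normalized by the reference length:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences via dynamic programming (rolling row)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev_diag, d[j] = d[j], min(
                d[j] + 1,             # deletion
                d[j - 1] + 1,         # insertion
                prev_diag + (r != h)  # substitution (cost 0 on match)
            )
    return d[len(hyp)]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over characters, divided by reference length."""
    ref_chars = list(reference.replace(" ", ""))
    hyp_chars = list(hypothesis.replace(" ", ""))
    return edit_distance(ref_chars, hyp_chars) / max(len(ref_chars), 1)

if __name__ == "__main__":
    # Toy Mandarin example: one substitution out of six reference characters -> CER = 1/6.
    print(f"CER = {cer('今天天气很好', '今天天气真好'):.2%}")
```

Because Mandarin output is scored at the character level regardless of the modeling unit used during training, this single metric allows the CI-phoneme, syllable, word, sub-word and character systems to be compared directly.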

Keywords: ASR; Multi-head Attention; Modeling Units; Sequence-to-sequence; Transformer
Subject Category: Engineering :: Computer Science and Technology (degrees may be conferred in Engineering or Science)
Indexed By: EI
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/41001
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Auditory Models and Cognitive Computing
Corresponding Author: Shiyu Zhou
Recommended Citation (GB/T 7714):
Shiyu Zhou, Linhao Dong, Shuang Xu, et al. A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese[C], 2018.
Files in This Item:
There are no files associated with this item.