Forward-Backward Decoding Sequence for Regularizing End-to-End TTS
Zheng, Yibin (1,2); Tao, Jianhua (3); Wen, Zhengqi (1); Yi, Jiangyan (1)
Journal: IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
ISSN: 2329-9290
Publication Date: 2019-12-01
Volume: 27, Issue: 12, Pages: 2067-2079
Corresponding Author: Tao, Jianhua (jhtao@nlpr.ia.ac.cn)
Abstract: Neural end-to-end TTS systems such as Tacotron-like networks can generate very high-quality synthesized speech, even close to human recordings for in-domain text. However, they perform unsatisfactorily when scaled to challenging test sets. One concern is that the attention-based encoder-decoder network adopts an autoregressive generative sequence model that suffers from "exposure bias": errors made early can be quickly amplified, harming subsequent sequence generation. To address this issue, we propose two novel methods that aim to predict the future by improving the agreement between the forward and backward decoding sequences. The first (denoted MRBA) adds divergence regularization terms to the training objective to maximize the agreement between two directional models, namely L2R (which generates targets from left to right) and R2L (which generates targets from right to left). The second (denoted BDR) operates at the decoder level and exploits future information during decoding: by introducing a regularization term into the training objective of the forward and backward decoders, the forward decoder's hidden states are forced to be close to the backward decoder's, so the hidden representations of a unidirectional decoder are encouraged to embed useful information about the future. Moreover, to let forward and backward decoding improve each other in an interactive process, a joint training method is designed. Experimental results on both English and Mandarin datasets show that our proposed methods, especially the second one (BDR), lead to a significant improvement in both robustness and overall naturalness, achieving clear preference advantages on a challenging test set and state-of-the-art performance on a general test set (outperforming the baseline, a revised version of Tacotron 2, by 0.13 and 0.12 MOS for English and Mandarin, respectively).
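For illustration only, the following is a minimal sketch of the kind of forward/backward agreement regularization described in the abstract, assuming a PyTorch setup in which an L2R and an R2L decoder produce mel predictions and hidden-state sequences of shape (batch, T, dim). The function name bdr_loss, the L1 reconstruction losses, the L2 form of the agreement term, and the weight lam are assumptions for exposition, not the paper's exact formulation or the authors' code.

    # Illustrative sketch only (not the authors' implementation): a
    # forward/backward decoder agreement regularizer in the spirit of BDR.
    import torch
    import torch.nn.functional as F

    def bdr_loss(mel_fwd, mel_bwd, mel_target, h_fwd, h_bwd, lam=1.0):
        """mel_* and h_* are (batch, T, dim) tensors; lam is an assumed weight."""
        # Reconstruction losses for the forward (L2R) and backward (R2L) decoders;
        # the backward decoder is trained against the time-reversed target.
        l_fwd = F.l1_loss(mel_fwd, mel_target)
        l_bwd = F.l1_loss(mel_bwd, torch.flip(mel_target, dims=[1]))
        # Agreement term: pull the forward decoder's hidden states toward the
        # time-reversed backward decoder's hidden states (assumed L2 penalty),
        # so the forward decoder embeds information about the future.
        agree = F.mse_loss(h_fwd, torch.flip(h_bwd, dims=[1]))
        return l_fwd + l_bwd + lam * agree

Under this reading, only the forward decoder would be used at synthesis time, so the backward decoder and the agreement term add training-time cost but no inference-time cost.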
Keywords: Decoding; Training; Speech processing; Linguistics; Acoustics; Speech recognition; Forward-backward regularization; encoder-decoder with attention; end-to-end; joint-training; TTS
DOI: 10.1109/TASLP.2019.2935807
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China (NSFC) [61425017]; National Natural Science Foundation of China (NSFC) [61773379]; National Natural Science Foundation of China (NSFC) [61603390]; National Natural Science Foundation of China (NSFC) [61771472]; National Key Research & Development Plan of China [2017YFC0820602]; State Key Program of the National Natural Science Foundation of China (NSFC) [61831022]; Inria-CAS Joint Research Project [173211KYSB20170061]
Funding Organization: National Natural Science Foundation of China (NSFC); National Key Research & Development Plan of China; State Key Program of the National Natural Science Foundation of China (NSFC); Inria-CAS Joint Research Project
WOS Research Area: Acoustics; Engineering
WOS Subject: Acoustics; Engineering, Electrical & Electronic
WOS ID: WOS:000492182400001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions): Speech Recognition and Synthesis
Citation Statistics
Times Cited (WOS): 16
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/28883
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems / Intelligent Interaction
Corresponding Author: Tao, Jianhua
作者单位1.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 100190, Peoples R China
3.Chinese Acad Sci, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714:
Zheng, Yibin, Tao, Jianhua, Wen, Zhengqi, et al. Forward-Backward Decoding Sequence for Regularizing End-to-End TTS[J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2019, 27(12): 2067-2079.
APA: Zheng, Yibin, Tao, Jianhua, Wen, Zhengqi, & Yi, Jiangyan. (2019). Forward-Backward Decoding Sequence for Regularizing End-to-End TTS. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 27(12), 2067-2079.
MLA: Zheng, Yibin, et al. "Forward-Backward Decoding Sequence for Regularizing End-to-End TTS". IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 27.12 (2019): 2067-2079.
Files in This Item:
No files associated with this item.