CASIA OpenIR
(These results are based on the user's claimed works.)

Browse/Search results: 7 items total, showing 1-7

Research on End-to-End Speech Recognition Technology Based on the Encoder-Decoder Framework (Dissertation)
Doctor of Engineering, Institute of Automation, Chinese Academy of Sciences: University of Chinese Academy of Sciences, 2020
Author: Dong, Linhao
Adobe PDF (5860 KB)  |  Views/Downloads: 380/26  |  Submitted: 2020/06/13
speech recognition technology  neural networks  encoder-decoder framework  end-to-end modeling
CIF: Continuous Integrate-and-Fire for End-to-End Speech Recognition (Conference paper)
Online conference, 2020-05
Authors: Dong, Linhao;  Xu, Bo
Adobe PDF (641 KB)  |  Views/Downloads: 310/68  |  Submitted: 2020/06/13
continuous integrate-and-fire  end-to-end model  soft and monotonic alignment  online speech recognition  acoustic boundary positioning  
Self-Attention Aligner: A Latency-Control End-to-End Model for ASR using Self-attention Network and Chunk-hopping (Conference paper)
Brighton, United Kingdom, 2019-05
Authors: Dong, Linhao;  Wang, Feng;  Xu, Bo
Adobe PDF (930 KB)  |  Views/Downloads: 228/41  |  Submitted: 2020/06/13
speech recognition  self-attention network  encoder-decoder  end-to-end  latency-control  
Extending Recurrent Neural Aligner for Streaming End-to-End Speech Recognition in Mandarin (Conference paper)
Hyderabad, India, 2018-09
Authors: Dong, Linhao;  Zhou, Shiyu;  Chen, Wei;  Xu, Bo
Adobe PDF (321 KB)  |  Views/Downloads: 241/70  |  Submitted: 2020/06/13
speech recognition  recurrent neural aligner  Mandarin  end-to-end
Speech-Transformer: A No-Recurrence Sequence-to-Sequence Model for Speech Recognition (Conference paper)
Calgary, Canada, 2018-04
Authors: Dong, Linhao;  Xu, Shuang;  Xu, Bo
Adobe PDF (640 KB)  |  Views/Downloads: 806/480  |  Submitted: 2020/06/13
speech recognition  sequence-to-sequence  attention  transformer  
Syllable-Based Sequence-to-Sequence Speech Recognition with the Transformer in Mandarin Chinese (Conference paper)
Interspeech, Hyderabad, India, 2018
Authors: Zhou, Shiyu;  Dong, Linhao;  Xu, Shuang;  Xu, Bo
Views/Downloads: 85/0  |  Submitted: 2020/10/27
ASR  multi-head attention  syllable-based acoustic modeling  sequence-to-sequence
A Comparison of Modeling Units in Sequence-to-Sequence Speech Recognition with the Transformer on Mandarin Chinese (Conference paper)
ICONIP, Siem Reap, Cambodia, 2018
Authors: Zhou, Shiyu;  Dong, Linhao;  Xu, Shuang;  Xu, Bo
Views/Downloads: 75/0  |  Submitted: 2020/10/27
ASR  multi-head attention  modeling units  sequence-to-sequence  Transformer