Improving Deep Neural Networks by Using Sparse Dropout Strategy
Zheng Hao; Mingming Chen; Wenju Liu; Zhanlei Yang; Shan Liang; Hao Zheng
2014
Conference Name: ChinaSIP
Source Publication: ChinaSIP
Conference Date: 2014
Conference Place: Xi'an, Shaanxi, China
Abstract: Recently, deep neural networks (DNNs) have achieved excellent results on benchmarks for acoustic modeling in speech recognition. By randomly discarding network units, a strategy called dropout can improve the performance of DNNs by reducing the influence of over-fitting. However, the random dropout strategy treats units indiscriminately, which may lose information about the distribution of unit outputs. In this paper, we improve the dropout strategy by treating units differentially according to their outputs. Only minor changes to an existing neural network system are needed to achieve a significant improvement. Phone recognition experiments on TIMIT show that sparse dropout fine-tuning yields a significant performance improvement.
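The abstract does not spell out the selection rule used by sparse dropout, only that units are treated differentially according to their outputs. As an illustrative sketch (not the authors' method), the NumPy snippet below contrasts standard inverted dropout with a hypothetical output-aware variant that preferentially keeps large-magnitude activations; the function names and the keep-top-k rule are assumptions for illustration.

```python
import numpy as np

def random_dropout(h, p, rng):
    """Standard (inverted) dropout: zero each unit independently with
    probability p, then rescale the survivors by 1/(1-p)."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

def sparse_dropout(h, p):
    """Hypothetical output-aware variant: instead of dropping at random,
    keep the ceil((1-p)*n) units with the largest |output| and drop the
    rest, so small activations are discarded preferentially."""
    k = int(np.ceil(h.size * (1.0 - p)))   # number of units to keep
    keep = np.argsort(np.abs(h))[-k:]      # indices of the largest activations
    mask = np.zeros_like(h)
    mask[keep] = 1.0
    return h * mask / (1.0 - p)

h = np.array([0.1, 2.0, 0.05, 1.5])
sparse_dropout(h, 0.5)   # keeps only the two largest outputs, rescaled by 2
```

Both variants use the inverted-dropout rescaling so that expected layer activity at training time matches test time, which is why only minor changes to an existing dropout implementation would be required.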
Keywords: Dropout; Sparse Dropout; Deep Neural Networks; Deep Learning
Indexed By: EI
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/11776
Collection: National Laboratory of Pattern Recognition, Robot Vision
Corresponding Author: Hao Zheng
Affiliation: National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Zheng Hao, Mingming Chen, Wenju Liu, et al. Improving Deep Neural Networks by Using Sparse Dropout Strategy[C], 2014.
Files in This Item:
File Name/Size: ChinaSIP-2014-1.pdf (146KB)
DocType: Conference Paper
Access: Open Access
License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Zheng Hao]'s Articles
[Mingming Chen]'s Articles
[Wenju Liu]'s Articles
Baidu academic
Similar articles in Baidu academic
[Zheng Hao]'s Articles
[Mingming Chen]'s Articles
[Wenju Liu]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Zheng Hao]'s Articles
[Mingming Chen]'s Articles
[Wenju Liu]'s Articles
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.