CASIA OpenIR

Browse/Search Results: 3 items total, showing 1-3

Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme (Journal Article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Views/Downloads: 163/0  |  Submitted: 2022/06/06
Keywords: Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; generalization performance; nontypicality sampling scheme; stochastic gradient descent (SGD)
Drill the Cork of Information Bottleneck by Inputting the Most Important Data (Journal Article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Views/Downloads: 187/0  |  Submitted: 2022/01/27
Keywords: Training; Signal to noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; Information bottleneck (IB) theory; machine learning; minibatch stochastic gradient descent (SGD); typicality sampling
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling (Journal Article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Views/Downloads: 222/0  |  Submitted: 2021/01/06
Keywords: Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; Batch selection; machine learning; minibatch stochastic gradient descent (SGD); speed of convergence