CASIA OpenIR

Browse/Search Results: 9 records in total, showing items 1-9

Towards Better Generalization of Deep Neural Networks via Non-Typicality Sampling Scheme (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Pages: 11
Authors: Peng, Xinyu; Wang, Fei-Yue; Li, Li
Views/Downloads: 158/0 | Submitted: 2022/06/06
Keywords: Training; Estimation; Deep learning; Standards; Optimization; Noise measurement; Convergence; generalization performance; nontypicality sampling scheme; stochastic gradient descent (SGD)
A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Journal article)
IEEE/CAA Journal of Automatica Sinica, 2022, Volume: 9, Issue: 5, Pages: 812-833
Authors: Xinlei Yi; Shengjun Zhang; Tao Yang; Tianyou Chai; Karl Henrik Johansson
Adobe PDF (2533 KB) | Views/Downloads: 174/49 | Submitted: 2022/04/24
Keywords: Distributed nonconvex optimization; linear speedup; Polyak-Łojasiewicz (P-Ł) condition; primal-dual algorithm; stochastic gradient descent
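
For reference, primal-dual methods of this kind couple each agent's local SGD step with a dual variable that penalizes disagreement with network neighbors. The following is a minimal sketch of a generic primal-dual distributed SGD iteration on a ring network, not the algorithm of Yi et al.; the quadratic local costs, step sizes, noise level, and topology are illustrative assumptions.

    import numpy as np

    # Generic primal-dual distributed SGD sketch (NOT the paper's algorithm).
    # Agents jointly minimize sum_i f_i(x) with f_i(x) = 0.5*(x - a_i)^2,
    # so the consensus optimum is the mean of the a_i.
    rng = np.random.default_rng(0)
    n = 5                                   # number of agents
    a = rng.normal(size=n)                  # local data; optimum is a.mean()
    A = np.roll(np.eye(n), 1, 1) + np.roll(np.eye(n), -1, 1)  # ring adjacency
    L = np.diag(A.sum(1)) - A               # graph Laplacian
    x = rng.normal(size=n)                  # primal variable, one per agent
    v = np.zeros(n)                         # dual variables track disagreement
    eta, alpha, beta = 0.05, 1.0, 1.0       # assumed step sizes
    for _ in range(2000):
        g = (x - a) + 0.1 * rng.normal(size=n)  # noisy local gradients ("S" in SGD)
        Lx = L @ x                              # disagreement signal
        # both updates use the current x (right side evaluated before assignment)
        x, v = x - eta * (g + alpha * Lx + v), v + eta * beta * Lx
    print(x, a.mean())                      # agents hover near the consensus optimum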
FCM-RDpA: TSK fuzzy regression model construction using fuzzy C-means clustering, regularization, DropRule, and Powerball AdaBelief (Journal article)
INFORMATION SCIENCES, 2021, Volume: 574, Pages: 490-504
Authors: Shi, Zhenhua; Wu, Dongrui; Guo, Chenfeng; Zhao, Changming; Cui, Yuqi; Wang, Fei-Yue
Views/Downloads: 162/0 | Submitted: 2021/11/03
Keywords: TSK fuzzy system; Mini-batch gradient descent; DropRule; Powerball AdaBelief; Fuzzy c-means clustering
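
The fuzzy c-means step named in the keywords is standard and easy to sketch; the FCM-RDpA additions (regularization, DropRule, Powerball AdaBelief) are not reproduced here. A minimal sketch, assuming synthetic two-cluster data:

    import numpy as np

    # Standard fuzzy c-means: alternate between weighted center updates and
    # inverse-distance membership updates until (approximate) convergence.
    def fcm(X, c=2, m=2.0, iters=100, eps=1e-9):
        n = len(X)
        U = np.random.default_rng(0).dirichlet(np.ones(c), size=n)  # rows sum to 1
        for _ in range(iters):
            W = U ** m                                    # fuzzified memberships
            C = (W.T @ X) / W.sum(0)[:, None]             # weighted cluster centers
            d = np.linalg.norm(X[:, None] - C[None], axis=2) + eps  # distances
            U = 1.0 / (d ** (2 / (m - 1)))                # inverse-distance weights
            U /= U.sum(1, keepdims=True)                  # renormalize each row
        return C, U

    X = np.concatenate([np.random.default_rng(1).normal(0, 1, (50, 2)),
                        np.random.default_rng(2).normal(5, 1, (50, 2))])
    centers, memberships = fcm(X)
    print(centers)   # one center near (0, 0), one near (5, 5)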
Drill the Cork of Information Bottleneck by Inputting the Most Important Data (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, Pages: 13
Authors: Peng, Xinyu; Zhang, Jiawei; Wang, Fei-Yue; Li, Li
Views/Downloads: 178/0 | Submitted: 2022/01/27
Keywords: Training; Signal to noise ratio; Mutual information; Optimization; Convergence; Deep learning; Tools; Information bottleneck (IB) theory; machine learning; minibatch stochastic gradient descent (SGD); typicality sampling
Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (Journal article)
IEEE/CAA Journal of Automatica Sinica, 2021, Volume: 8, Issue: 2, Pages: 402-411
Authors: Xin Luo; Wen Qin; Ani Dong; Khaled Sedraoui; MengChu Zhou
Adobe PDF (5091 KB) | Views/Downloads: 152/53 | Submitted: 2021/04/09
Keywords: Big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD)
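
The momentum-incorporated SGD named in the title applies a velocity term to each latent-factor update. Below is a minimal serial sketch of momentum SGD for latent factor analysis on a sparse rating matrix, not the paper's parallel scheme; the synthetic ratings and hyperparameters are assumptions.

    import numpy as np

    # Momentum SGD for latent factor analysis: factor a sparse rating matrix
    # as R ~ P Q^T, updating only the factors touched by each observed rating.
    rng = np.random.default_rng(0)
    n_users, n_items, k = 100, 80, 8
    P, Q = 0.1 * rng.normal(size=(n_users, k)), 0.1 * rng.normal(size=(n_items, k))
    vP, vQ = np.zeros_like(P), np.zeros_like(Q)            # momentum buffers
    ratings = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
               for _ in range(2000)]                       # synthetic (user, item, rating)
    eta, lam, mu = 0.01, 0.02, 0.9                         # step, L2 weight, momentum
    for epoch in range(20):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                            # error on one rating
            gP, gQ = -e * Q[i] + lam * P[u], -e * P[u] + lam * Q[i]
            vP[u] = mu * vP[u] - eta * gP                  # momentum-smoothed steps
            vQ[i] = mu * vQ[i] - eta * gQ
            P[u] += vP[u]
            Q[i] += vQ[i]
    rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
    print(rmse)                                            # training RMSE after fitting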
Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Volume: 31, Issue: 11, Pages: 4649-4659
Authors: Peng, Xinyu; Li, Li; Wang, Fei-Yue
Views/Downloads: 214/0 | Submitted: 2021/01/06
Keywords: Training; Convergence; Approximation algorithms; Stochastic processes; Estimation; Optimization; Acceleration; Batch selection; machine learning; minibatch stochastic gradient descent (SGD); speed of convergence
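
Several entries above build on minibatch SGD with non-uniform batch selection. As a baseline, here is a minimal sketch of plain minibatch SGD on least squares; the typicality-based batch selection this paper proposes would replace the uniform sampling line, and is not reproduced here. Data are synthetic.

    import numpy as np

    # Plain minibatch SGD on least squares: each step draws a uniform batch
    # and follows the minibatch gradient of 0.5 * mean squared error.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.1 * rng.normal(size=1000)
    w = np.zeros(10)
    eta, batch = 0.05, 32
    for step in range(500):
        idx = rng.choice(len(X), batch, replace=False)   # uniform batch selection
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch              # minibatch gradient
        w -= eta * grad                                  # SGD update
    print(np.linalg.norm(w - w_true))                    # approaches 0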
Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence (Journal article)
IEEE TRANSACTIONS ON CYBERNETICS, 2020, Volume: 50, Issue: 2, Pages: 835-845
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 188/0 | Submitted: 2020/03/30
Keywords: Convergence; Convex functions; Machine learning; Optimization methods; Linear programming; Cybernetics; Individual convergence; mirror descent (MD) methods; regularized learning problems; stochastic gradient descent (SGD); stochastic optimization
Monotonic type-2 fuzzy neural network and its application to thermal comfort prediction (Journal article)
NEURAL COMPUTING & APPLICATIONS, 2013, Volume: 23, Issue: 7-8, Pages: 1987-1998
Authors: Li, Chengdong; Yi, Jianqiang; Wang, Ming; Zhang, Guiqing
Adobe PDF (737 KB) | Views/Downloads: 238/77 | Submitted: 2015/08/12
Keywords: Type-2 Fuzzy; Neural Network; Monotonicity; Constrained Least Squares Method; Gradient Descent Algorithm; Thermal Comfort
Automated recognition of quasars based on adaptive radial basis function neural networks (Journal article)
SPECTROSCOPY AND SPECTRAL ANALYSIS, 2006, Volume: 26, Issue: 2, Pages: 377-381
Authors: Zhao, MF; Luo, AL; Wu, FC; Hu, ZY
Views/Downloads: 183/0 | Submitted: 2015/11/06
Keywords: Galaxy; Quasar; Principal Component Analysis (PCA); Radial Basis Function Neural Networks; K-means Clustering; Gradient Descent
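
The pipeline suggested by these keywords (k-means to place RBF centers, gradient descent to fit the output weights) can be sketched compactly; the paper's adaptive scheme and PCA preprocessing are not reproduced. A minimal sketch on synthetic 1-D data:

    import numpy as np

    # RBF network: k-means places Gaussian centers, then gradient descent
    # fits the linear output weights to minimize 0.5 * mean squared error.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0])                                   # target function

    k = 10                                                # number of RBF centers
    C = X[rng.choice(len(X), k, replace=False)]           # k-means initialization
    for _ in range(50):
        assign = np.argmin(np.abs(X - C.T), axis=1)       # nearest center per point
        C = np.array([X[assign == j].mean(0) if np.any(assign == j) else C[j]
                      for j in range(k)])                 # recompute centers

    sigma = 0.5                                           # assumed common RBF width
    Phi = np.exp(-((X - C.T) ** 2) / (2 * sigma ** 2))    # Gaussian features (200, k)
    w = np.zeros(k)
    for _ in range(2000):                                 # gradient descent on weights
        err = Phi @ w - y
        w -= 0.1 * Phi.T @ err / len(X)
    print(np.sqrt(np.mean((Phi @ w - y) ** 2)))           # training RMSE shrinks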