CASIA OpenIR
(This search is based on the user's claimed publications)

Browse/search results: 5 records in total, showing 1-5

Momentum Acceleration in the Individual Convergence of Nonsmooth Convex Optimization With Constraints (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Vol. 33, No. 3, pp. 1107-1118
Authors: Tao, Wei; Wu, Gao-Wei; Tao, Qing
Views/Downloads: 186/0 | Submitted: 2022/06/10
Keywords: Heavy-ball (HB) methods; individual convergence; machine learning; momentum methods; nonsmooth optimization; sparsity
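
As background for the "Heavy-ball (HB) methods" and "momentum methods" keywords above, the classical Polyak heavy-ball update for minimizing a differentiable f is sketched below in its textbook form; the paper itself treats the constrained, nonsmooth case (subgradients plus projection), so this is context, not the authors' exact scheme:

    x_{t+1} = x_t - \alpha_t \nabla f(x_t) + \beta_t (x_t - x_{t-1})

Here \alpha_t > 0 is the step size and \beta_t \in [0, 1) weights the momentum term x_t - x_{t-1}.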
Classifying the tracing difficulty of 3D neuron image blocks based on deep learning (Journal article)
Brain Informatics, 2021, Vol. 8, No. 1
Authors: Yang, Bin; Huang, Jiajin; Wu, Gaowei; Yang, Jian
Views/Downloads: 165/0 | Submitted: 2021/12/28
Keywords: Deep learning; Tracing difficulty classification; Residual neural network; Fully connected neural network; Long short-term memory network
Exploring highly reliable substructures in auto-reconstructions of a neuron (Journal article)
Brain Informatics, 2021, Vol. 8, No. 1
Authors: He, Yishan; Huang, Jiajin; Wu, Gaowei; Yang, Jian
Views/Downloads: 141/0 | Submitted: 2021/11/03
Keywords: Neuronal morphology; Reconstruction; Local alignment; Motif
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, Vol. 31, No. 7, pp. 2557-2568
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 225/0 | Submitted: 2020/08/03
Keywords: Convergence; Extrapolation; Optimization; Acceleration; Machine learning; Task analysis; Machine learning algorithms; Individual convergence; Nesterov's extrapolation; nonsmooth optimization; sparsity
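
The "Nesterov's extrapolation" keyword refers to the standard two-step accelerated scheme, shown here in its smooth textbook form (the paper analyzes what this extrapolation buys for the individual, i.e. last, iterate in the nonsmooth setting):

    y_t = x_t + \beta_t (x_t - x_{t-1}),
    x_{t+1} = y_t - \alpha_t \nabla f(y_t)

The defining feature is that the gradient is evaluated at the extrapolated point y_t rather than at x_t.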
Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence (Journal article)
IEEE TRANSACTIONS ON CYBERNETICS, 2020, Vol. 50, No. 2, pp. 835-845
Authors: Tao, Wei; Pan, Zhisong; Wu, Gaowei; Tao, Qing
Views/Downloads: 198/0 | Submitted: 2020/03/30
Keywords: Convergence; Convex functions; Machine learning; Optimization methods; Linear programming; Cybernetics; Individual convergence; mirror descent (MD) methods; regularized learning problems; stochastic gradient descent (SGD); stochastic optimization
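
"Individual convergence", which recurs across these records, contrasts with the classical guarantee for the averaged iterate: for SGD on nonsmooth convex problems the optimal rate is traditionally proved for

    \bar{x}_T = \frac{1}{T} \sum_{t=1}^{T} x_t

rather than for the final iterate x_T itself. As the title indicates, primal averaging changes where the (sub)gradient is evaluated so that the last iterate attains the optimal rate; the equation above is standard background, not the paper's specific step.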