Stochastic Learning via Optimizing the Variational Inequalities
Tao, Qing (1); Gao, Qian-Kun (1); Chu, De-Jun (1); Wu, Gao-Wei (2)
Date Issued: 2014-10-01
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume: 25  Issue: 10  Pages: 1769-1778
Article Type: Article
Abstract: A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use the variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the difficult concept of regret in online learning and directly analyze stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI-criterion to measure the convergence of stochastic algorithms. While the rate of convergence for any iterative algorithm solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge-loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as state-of-the-art stochastic learning algorithms, while its O(1/t) VI-convergence rate tightly characterizes the real learning speed.
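To make the abstract's recipe concrete: casting the regularized problem as a VI means seeking w* in the feasible set Ω with ⟨F(w*), w - w*⟩ ≥ 0 for all w ∈ Ω, where F collects the (sub)gradient information, and SADMM attacks the induced problem with one stochastic subgradient per step. The following Python sketch shows a generic linearized stochastic ADMM iteration for the l1-regularized hinge loss; it is an illustration only, not the paper's exact SADMM, and the penalty rho, the O(1/√t) step-size schedule, and all names are assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def sadmm_hinge_l1(X, y, lam=0.01, rho=1.0, n_iters=1000, seed=0):
    # Hypothetical linearized stochastic ADMM for
    #   min_w (1/n) sum_i max(0, 1 - y_i <x_i, w>) + lam * ||w||_1,
    # using the consensus split w = z, where z carries the l1 term.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)       # primal weights (hinge-loss block)
    z = np.zeros(d)       # split copy of w (l1 block)
    u = np.zeros(d)       # scaled dual variable for w - z = 0
    w_avg = np.zeros(d)   # averaged iterate, the usual output
    for t in range(1, n_iters + 1):
        i = rng.integers(n)                    # draw one sample uniformly
        margin = y[i] * X[i].dot(w)
        g = -y[i] * X[i] if margin < 1.0 else np.zeros(d)  # hinge subgradient
        eta = 1.0 / np.sqrt(t)                 # assumed step-size schedule
        # w-step: closed form of the linearized proximal subproblem
        #   argmin_w <g, w> + (rho/2)||w - z + u||^2 + ||w - w_t||^2/(2*eta)
        w = (w / eta + rho * (z - u) - g) / (rho + 1.0 / eta)
        # z-step: prox of (lam/rho)*||.||_1, i.e., soft-thresholding
        z = soft_threshold(w + u, lam / rho)
        # dual ascent on the consensus constraint w = z
        u = u + w - z
        w_avg += (w - w_avg) / t               # running average
    return w_avg
```

The z-step is where sparsity enters: the l1 term is handled exactly through soft-thresholding rather than through subgradients, which is the usual motivation for an ADMM splitting of this problem.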
Keywords: Alternating direction method of multipliers (ADMM); Machine learning; Online learning; Optimization; Regret; Stochastic learning; Variational inequality (VI)
WOS Headings: Science & Technology; Technology
WOS Keywords: CONVEX-OPTIMIZATION; CONVERGENCE
Indexed By: SCI
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000343704900003
Times Cited (WOS): 5
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/8018
Collection: Digital Content Technology and Services Research Center - Auditory Models and Cognitive Computing
Affiliations:
1. New Star Inst Appl Technol, Hefei 230031, Peoples R China
2. Chinese Acad Sci, Inst Automat, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714
Tao, Qing, Gao, Qian-Kun, Chu, De-Jun, et al. Stochastic Learning via Optimizing the Variational Inequalities[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, 25(10): 1769-1778.
APA: Tao, Qing, Gao, Qian-Kun, Chu, De-Jun, & Wu, Gao-Wei. (2014). Stochastic Learning via Optimizing the Variational Inequalities. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 25(10), 1769-1778.
MLA: Tao, Qing, et al. "Stochastic Learning via Optimizing the Variational Inequalities". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 25.10 (2014): 1769-1778.
Files in This Item:
No files associated with this item.