Stochastic Learning via Optimizing the Variational Inequalities
Tao, Qing1; Gao, Qian-Kun1; Chu, De-Jun1; Wu, Gao-Wei2
Source Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Date Issued: 2014-10-01
Volume: 25  Issue: 10  Pages: 1769-1778
Subtype: Article
Abstract: A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed by solving the induced optimization problems. However, there is a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI criterion to measure the convergence of stochastic algorithms. While the rate of convergence of any iterative algorithm for solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge loss problems without strong convexity or smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as state-of-the-art stochastic learning algorithms, but its O(1/t) VI-convergence rate is capable of tightly characterizing the real learning speed.
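For concreteness, a VI formulation of a learning problem asks for a point w* in the feasible set Ω such that ⟨F(w*), w − w*⟩ ≥ 0 for all w ∈ Ω, where F is an operator built from the (sub)gradients of the objective. The minimal Python sketch below shows one generic way a linearized stochastic ADMM step can be applied to the l1-regularized hinge loss via the splitting w − z = 0. The step-size schedule, iterate averaging, and exact update rules here are illustrative assumptions, not taken from the paper; the precise SADMM updates and the VI-convergence analysis appear in the full text.

    import numpy as np

    def soft_threshold(v, tau):
        # Elementwise prox operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def sadmm_l1_hinge(X, y, lam=0.01, rho=1.0, T=1000, seed=0):
        # Illustrative stochastic (linearized) ADMM for
        #   min_w (1/n) sum_i max(0, 1 - y_i <x_i, w>) + lam * ||w||_1
        # with the splitting w - z = 0; these updates are a generic
        # textbook variant, not necessarily the paper's exact SADMM.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w, z, u = np.zeros(d), np.zeros(d), np.zeros(d)  # primal, split, dual
        w_avg = np.zeros(d)
        for t in range(1, T + 1):
            i = rng.integers(n)                       # draw one sample uniformly
            xi, yi = X[i], y[i]
            # Hinge-loss subgradient at w for the sampled point.
            g = -yi * xi if yi * xi.dot(w) < 1.0 else np.zeros(d)
            eta = 1.0 / np.sqrt(t)                    # assumed step-size schedule
            # w-step: linearized loss + augmented Lagrangian, closed form.
            w = (w + eta * (rho * z - u - g)) / (1.0 + eta * rho)
            # z-step: soft-thresholding handles the l1 term exactly.
            z = soft_threshold(w + u / rho, lam / rho)
            # Dual ascent on the constraint w - z = 0.
            u = u + rho * (w - z)
            w_avg += (w - w_avg) / t                  # running average of iterates
        return w_avg

    # Tiny smoke test on synthetic two-class data (hypothetical setup).
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(1.0, 1.0, (50, 5)), rng.normal(-1.0, 1.0, (50, 5))])
    y = np.concatenate([np.ones(50), -np.ones(50)])
    w_hat = sadmm_l1_hinge(X, y)

The z-step handles the nonsmooth l1 term exactly through soft-thresholding, which is the usual reason ADMM-style splittings are attractive for composite problems like this one.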
Keyword: Alternating Direction Method of Multipliers (ADMM); Machine Learning; Online Learning; Optimization; Regret; Stochastic Learning; Variational Inequality (VI)
WOS Headings: Science & Technology; Technology
WOS Keyword: CONVEX-OPTIMIZATION; CONVERGENCE
Indexed By: SCI
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000343704900003
Citation Statistics
Cited Times (WOS): 5
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/8018
Collection: Digital Content Technology and Services Research Center, Auditory Models and Cognitive Computing
Affiliation1.New Star Inst Appl Technol, Hefei 230031, Peoples R China
2.Chinese Acad Sci, Inst Automat, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714: Tao, Qing, Gao, Qian-Kun, Chu, De-Jun, et al. Stochastic Learning via Optimizing the Variational Inequalities[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, 25(10): 1769-1778.
APA: Tao, Qing, Gao, Qian-Kun, Chu, De-Jun, & Wu, Gao-Wei. (2014). Stochastic Learning via Optimizing the Variational Inequalities. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 25(10), 1769-1778.
MLA: Tao, Qing, et al. "Stochastic Learning via Optimizing the Variational Inequalities." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 25.10 (2014): 1769-1778.