Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
Source Publication: IEEE TRANSACTIONS ON CYBERNETICS
Date Issued: 2015-07-01
Volume: 45  Issue: 7  Pages: 1372-1385
Subtype: Article
Abstract: The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted into a constrained optimal control problem by appropriately selecting the value function for the nominal system. Distinct from the actor-critic dual networks typically employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control that is often indispensable in RL, no special requirement is imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
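For context, the conversion the abstract describes (robust control recast as constrained optimal control for the nominal system) is typically built on a nonquadratic value function that encodes the input bound. The LaTeX sketch below shows the standard constrained-input formulation used in this ADP literature, assuming dynamics \dot{x} = f(x) + g(x)u, a saturation bound \lambda, and weighting terms Q(x) and R; these symbols illustrate the general setup, not necessarily the paper's exact choices.

    % Minimal sketch of the standard constrained-input optimal control
    % setup assumed here (hyperbolic-tangent saturation); the paper's
    % exact value function may differ.
    \begin{align}
      % Nominal dynamics with a hard bound on each input channel:
      &\dot{x} = f(x) + g(x)u, \qquad |u_i| \le \lambda, \\
      % Nonquadratic integrand that keeps the minimizing control
      % within the bound:
      &V(x_0) = \int_0^{\infty} \Big( Q(x)
        + 2\int_0^{u} \lambda \tanh^{-1}\!\big(\nu/\lambda\big)^{\top} R \,\mathrm{d}\nu \Big)\,\mathrm{d}t, \\
      % Stationarity of the Hamiltonian yields a control that satisfies
      % the constraint by construction:
      &u^{*}(x) = -\lambda \tanh\!\Big( \tfrac{1}{2\lambda}\, R^{-1}\, g(x)^{\top}\, \nabla V^{*}(x) \Big).
    \end{align}

Because tanh maps into (-1, 1), the resulting control respects the bound \lambda automatically, which is why this class of methods needs no separate projection or clipping step.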
Keyword: Approximate Dynamic Programming (ADP) ; Neural Networks (NNs) ; Neuro-Dynamic Programming ; Nonlinear Systems ; Optimal Control ; Reinforcement Learning (RL) ; Robust Control
WOS Headings: Science & Technology ; Technology
WOS Keyword: DYNAMIC-PROGRAMMING ALGORITHM ; ADAPTIVE OPTIMAL-CONTROL ; TRACKING CONTROL ; ARCHITECTURE ; NETWORKS
Indexed By: SCI
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence ; Computer Science, Cybernetics
WOS ID: WOS:000356386300013
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/7917
Collection: State Key Laboratory of Management and Control for Complex Systems / Parallel Control
Affiliation: Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Liu, Derong, Yang, Xiong, Wang, Ding, et al. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints[J]. IEEE TRANSACTIONS ON CYBERNETICS, 2015, 45(7): 1372-1385.
APA: Liu, Derong, Yang, Xiong, Wang, Ding, & Wei, Qinglai. (2015). Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints. IEEE TRANSACTIONS ON CYBERNETICS, 45(7), 1372-1385.
MLA: Liu, Derong, et al. "Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints." IEEE TRANSACTIONS ON CYBERNETICS 45.7 (2015): 1372-1385.
Files in This Item:
File Name: Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.pdf (1179KB)
Format: Adobe PDF  DocType: Journal Article (published version)  Access: Open Access  License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.