Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay
Luo, Biao1; Yang, Yin2; Liu, Derong3
Journal: IEEE TRANSACTIONS ON CYBERNETICS
ISSN: 2168-2267
Publication Date: 2018-12-01
Volume: 48, Issue: 12, Pages: 3337-3348
Corresponding Author: Luo, Biao (biao.luo@hotmail.com)
Abstract: In this paper, the data-based optimal output regulation problem of discrete-time systems is investigated. An off-policy adaptive Q-learning (QL) method is developed by using real system data, without requiring knowledge of the system dynamics or the mathematical model of the utility function. By introducing the Q-function, an off-policy adaptive QL algorithm is developed to learn the optimal Q-function. An adaptive parameter alpha(i) in the policy evaluation is used to achieve a tradeoff between the current and future Q-functions. The convergence of the adaptive QL algorithm is proved and the influence of the adaptive parameter is analyzed. To realize the adaptive QL algorithm with real system data, an actor-critic neural network (NN) structure is developed. A least-squares scheme and a batch gradient descent method are developed to update the critic and actor NN weights, respectively. The experience replay technique is employed in the learning process, which leads to simple and convenient implementation of the adaptive QL method. Finally, the effectiveness of the developed adaptive QL method is verified through numerical simulations.
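The core mechanism the abstract describes, off-policy Q-learning driven by a buffer of stored transitions (experience replay), can be illustrated with a minimal sketch. This is a generic tabular illustration on a hypothetical five-state chain MDP, not the paper's actual adaptive QL method (which uses actor-critic NNs, a least-squares critic update, and the adaptive parameter alpha(i)); all names and the toy environment here are assumptions for illustration only.

```python
import random
import numpy as np

# Hypothetical toy MDP: states 0..4, actions {0: left, 1: right};
# transitioning into state 4 yields reward 1, all other steps reward 0.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    """Deterministic chain dynamics (illustrative, not from the paper)."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = np.zeros((N_STATES, N_ACTIONS))
replay = []                      # experience-replay buffer of (s, a, r, s2)
gamma, lr, eps = 0.9, 0.5, 0.2   # discount, learning rate, exploration rate
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy behavior policy (off-policy: updates use max over actions).
        a = rng.randrange(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        replay.append((s, a, r, s2))
        # Replay a small random batch of stored transitions each step.
        for bs, ba, br, bs2 in rng.sample(replay, min(8, len(replay))):
            target = br + gamma * Q[bs2].max() * (bs2 != GOAL)
            Q[bs, ba] += lr * (target - Q[bs, ba])
        s = s2

# Greedy policy recovered from the learned Q-function.
print([int(Q[s].argmax()) for s in range(GOAL)])
```

Because the updates use the max over actions rather than the action the behavior policy actually took, transitions gathered under any exploratory policy can be reused from the buffer, which is what makes replay straightforward here, mirroring the convenience the abstract attributes to combining off-policy learning with experience replay.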
Keywords: Data-based; experience replay; neural networks (NNs); off-policy; optimal control; Q-learning (QL)
DOI: 10.1109/TCYB.2018.2821369
Keywords [WOS]: DISCRETE-TIME-SYSTEMS; H-INFINITY CONTROL; SPATIALLY DISTRIBUTED PROCESSES; UNCERTAIN NONLINEAR-SYSTEMS; BARRIER LYAPUNOV FUNCTIONS; POLICY ITERATION; CONTROL DESIGN; CONTROLLER-DESIGN; UNKNOWN DYNAMICS; TRACKING CONTROL
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China [61503377]; National Natural Science Foundation of China [61533017]; National Natural Science Foundation of China [U1501251]; Qatar National Research Fund under National Priority Research Project [NPRP9-466-1-103]
Funding Organization: National Natural Science Foundation of China; Qatar National Research Fund under National Priority Research Project
WOS Research Area: Automation & Control Systems; Computer Science
WOS Subject: Automation & Control Systems; Computer Science, Artificial Intelligence; Computer Science, Cybernetics
WOS ID: WOS:000450613100007
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics: Cited Times (WOS): 1
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/22603
Collection: State Key Laboratory of Management and Control for Complex Systems - Parallel Control
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
2. Hamad Bin Khalifa Univ, Coll Sci & Engn, Doha, Qatar
3. Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Guangdong, Peoples R China
Recommended Citation:
GB/T 7714: Luo, Biao, Yang, Yin, Liu, Derong. Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay[J]. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48(12): 3337-3348.
APA: Luo, Biao, Yang, Yin, & Liu, Derong. (2018). Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay. IEEE TRANSACTIONS ON CYBERNETICS, 48(12), 3337-3348.
MLA: Luo, Biao, et al. "Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay." IEEE TRANSACTIONS ON CYBERNETICS 48.12 (2018): 3337-3348.
Files in This Item: No related files.

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.