Knowledge Commons of Institute of Automation, CAS
Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay
Luo, Biao1; Yang, Yin2; Liu, Derong3
Journal | IEEE TRANSACTIONS ON CYBERNETICS |
ISSN | 2168-2267 |
Date Issued | 2018-12-01 |
卷号 | Volume | 48 |
Issue | 12 |
Pages | 3337-3348 |
Corresponding Author | Luo, Biao (biao.luo@hotmail.com) |
Abstract | In this paper, the data-based optimal output regulation problem of discrete-time systems is investigated. An off-policy adaptive Q-learning (QL) method is developed by using real system data, without requiring knowledge of the system dynamics or a mathematical model of the utility function. By introducing the Q-function, an off-policy adaptive QL algorithm is developed to learn the optimal Q-function. An adaptive parameter alpha(i) in the policy evaluation is used to achieve a tradeoff between the current and future Q-functions. The convergence of the adaptive QL algorithm is proved and the influence of the adaptive parameter is analyzed. To realize the adaptive QL algorithm with real system data, an actor-critic neural network (NN) structure is developed. A least-squares scheme and a batch gradient descent method are developed to update the critic and actor NN weights, respectively. The experience replay technique is employed in the learning process, which leads to simple and convenient implementation of the adaptive QL method. Finally, the effectiveness of the developed adaptive QL method is verified through numerical simulations. |
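The paper's method (actor-critic NNs, the adaptive parameter alpha(i)) is more general than what a short snippet can show. As an illustration only, here is a minimal sketch of the core idea described in the abstract, data-based off-policy Q-learning that reuses a fixed replay buffer of transitions, for the linear-quadratic special case, where the critic's least-squares update appears directly and the "actor" reduces to a feedback gain matrix. The function name `qlearn_lqr` and all implementation details are hypothetical and not taken from the paper.

```python
import numpy as np

def qlearn_lqr(A, B, Q, R, iters=20, n_samples=200, seed=0):
    """Off-policy Q-learning for the LQR special case, reusing one replay buffer."""
    rng = np.random.default_rng(seed)
    n, m = B.shape
    p = n + m
    iu = np.triu_indices(p)

    def phi(z):
        # Quadratic basis: upper-triangular entries of z z^T with off-diagonals
        # doubled, so phi(z) @ theta == z^T H z when theta is the upper triangle of H.
        M = np.outer(z, z)
        return (2.0 * M - np.diag(np.diag(M)))[iu]

    # Collect off-policy transitions once, with exploratory random inputs.
    # This buffer is replayed at every policy-iteration step (experience replay).
    buffer = []
    x = rng.standard_normal(n)
    for _ in range(n_samples):
        u = rng.standard_normal(m)
        r = x @ Q @ x + u @ R @ u            # utility (stage cost), measured from data
        x_next = A @ x + B @ u
        buffer.append((x, u, r, x_next))
        x = x_next if np.linalg.norm(x_next) < 10 else rng.standard_normal(n)

    K = np.zeros((m, n))                     # initial policy u = -K x
    for _ in range(iters):
        # Policy evaluation (critic): least squares on Q(x,u) = r + Q(x', -K x').
        Phi = np.array([phi(np.concatenate([x0, u0]))
                        - phi(np.concatenate([x1, -K @ x1]))
                        for (x0, u0, r0, x1) in buffer])
        tgt = np.array([r0 for (_, _, r0, _) in buffer])
        theta, *_ = np.linalg.lstsq(Phi, tgt, rcond=None)
        Hu = np.zeros((p, p))
        Hu[iu] = theta
        H = Hu + Hu.T - np.diag(np.diag(Hu))  # symmetric Q-function kernel
        # Policy improvement (actor): minimize Q over u -> u = -Huu^{-1} Hux x.
        K = np.linalg.solve(H[n:, n:], H[n:, :n])
    return K
```

Because the Bellman equation for the current policy is linear in the Q-function parameters theta, the same stored transitions can be reused at every iteration, which is the practical appeal of combining off-policy learning with experience replay; no model of A, B, or the utility function enters the update.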
Keywords | Data-based ; experience replay ; neural networks (NNs) ; off-policy ; optimal control ; Q-learning (QL) |
DOI | 10.1109/TCYB.2018.2821369 |
Keywords [WOS] | DISCRETE-TIME-SYSTEMS ; H-INFINITY CONTROL ; SPATIALLY DISTRIBUTED PROCESSES ; UNCERTAIN NONLINEAR-SYSTEMS ; BARRIER LYAPUNOV FUNCTIONS ; POLICY ITERATION ; CONTROL DESIGN ; CONTROLLER-DESIGN ; UNKNOWN DYNAMICS ; TRACKING CONTROL |
Indexed By | SCI |
Language | English |
Funding Project | Qatar National Research Fund under National Priority Research Project [NPRP9-466-1-103] ; National Natural Science Foundation of China [U1501251] ; National Natural Science Foundation of China [61533017] ; National Natural Science Foundation of China [61503377] |
Funding Organization | National Natural Science Foundation of China ; Qatar National Research Fund under National Priority Research Project |
WOS Research Area | Automation & Control Systems ; Computer Science |
WOS Subject | Automation & Control Systems ; Computer Science, Artificial Intelligence ; Computer Science, Cybernetics |
WOS ID | WOS:000450613100007 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Citation Statistics | |
Document Type | Journal Article |
Identifier | http://ir.ia.ac.cn/handle/173211/22603 |
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Complex System Intelligence Mechanism and Parallel Control Team |
Affiliations | 1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China ; 2. Hamad Bin Khalifa Univ, Coll Sci & Engn, Doha, Qatar ; 3. Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Guangdong, Peoples R China |
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences |
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation, GB/T 7714 | Luo, Biao, Yang, Yin, Liu, Derong. Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay[J]. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48(12): 3337-3348. |
APA | Luo, Biao, Yang, Yin, & Liu, Derong. (2018). Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay. IEEE TRANSACTIONS ON CYBERNETICS, 48(12), 3337-3348. |
MLA | Luo, Biao, et al. "Adaptive Q-Learning for Data-Based Optimal Output Regulation With Experience Replay". IEEE TRANSACTIONS ON CYBERNETICS 48.12 (2018): 3337-3348. |
Files in This Item | There are no files associated with this item. |
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.