Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework
Majid Mazouchi; Subramanya Nageshrao; Hamidreza Modares
Published in: IEEE/CAA Journal of Automatica Sinica
ISSN: 2329-9266
Year: 2022
Volume: 9, Issue: 3, Pages: 466-481
Abstract: In this paper, a data-driven conflict-aware safe reinforcement learning (CAS-RL) algorithm is presented for control of autonomous systems. Existing safe RL results with pre-defined performance functions and safe sets can only provide safety and performance guarantees for a single environment or circumstance. By contrast, the presented CAS-RL algorithm provides safety and performance guarantees across a variety of circumstances that the system might encounter. This is achieved by utilizing a bilevel learning control architecture: a higher meta-cognitive layer leverages a data-driven receding-horizon attentional controller (RHAC) to adapt relative attention to the system's different safety and performance requirements, and a lower-layer RL controller designs control actuation signals for the system. The RHAC makes its meta decisions based on the reaction curve of the lower-layer RL controller using a meta-model or knowledge. More specifically, it leverages a prediction meta-model (PMM) which spans the space of all future meta trajectories using a given finite number of past meta trajectories. RHAC adapts the system's aspiration towards performance metrics (e.g., performance weights) as well as safety boundaries to resolve conflicts that arise as mission scenarios develop. This guarantees safety and feasibility (i.e., performance boundedness) of the lower-layer RL-based control solution. It is shown that the interplay between the RHAC and the lower-layer RL controller is a bilevel optimization problem in which the leader (RHAC) operates at a lower rate than the follower (RL-based controller), and its solution guarantees feasibility and safety of the control solution. The effectiveness of the proposed framework is verified through a simulation example.
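To make the bilevel leader/follower structure described in the abstract concrete, the following is a minimal, illustrative Python sketch: a slow meta layer adapts a performance weight and a safety bound at a lower rate, while a fast lower-layer controller acts on the plant at every step. The scalar plant, the state-feedback stand-in for the RL policy, and the peak-based predictor are hypothetical placeholders for illustration only, not the paper's RHAC or PMM equations.

```python
import numpy as np

# Minimal sketch of the bilevel loop: a slow meta-cognitive layer (the leader)
# adjusts a performance weight and a safety bound, while a fast lower-layer
# controller (the follower) acts on the system. All dynamics, control laws,
# and update rules below are illustrative placeholders.

np.random.seed(0)

# Hypothetical scalar plant x_{k+1} = a*x_k + b*u_k + disturbance.
a, b = 0.95, 0.1
x = 1.0

# Meta variables the leader adapts: performance weight q and safety bound c.
q, c = 1.0, 2.0

META_PERIOD = 10          # leader updates once every META_PERIOD steps (lower rate)
history = []              # recent data standing in for past "meta trajectories"

for k in range(100):
    # Follower: a simple weighted state-feedback law standing in for the
    # lower-layer RL policy; the weight q shapes how aggressively it regulates.
    u = -q * b * x / (q * b * b + 0.1)
    x = a * x + b * u + 0.01 * np.random.randn()
    history.append(abs(x))

    # Leader: at the slower rate, predict the near-future constraint margin from
    # recent data (a crude stand-in for a prediction meta-model) and adapt the
    # performance aspiration (q) and the safety boundary (c) to resolve conflicts.
    if (k + 1) % META_PERIOD == 0:
        predicted_peak = max(history[-META_PERIOD:])
        if predicted_peak > 0.9 * c:          # conflict looming: bound nearly reached
            q *= 1.2                          # demand tighter regulation
            c = max(c, 1.1 * predicted_peak)  # and/or relax the soft boundary
        else:
            q *= 0.95                         # ease the performance aspiration

print(f"final state {x:.3f}, weight {q:.2f}, bound {c:.2f}")
```

The point of the sketch is only the timing and roles: the follower acts every step, the leader intervenes every META_PERIOD steps and only through the meta variables (q, c) that reshape the follower's problem.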
Keywords: Optimal control; receding-horizon attentional controller (RHAC); reinforcement learning (RL)
DOI: 10.1109/JAS.2021.1004353
Citation statistics
Times cited (Web of Science): 11
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/47208
Collection: Academic Journals_IEEE/CAA Journal of Automatica Sinica
Recommended citation:
GB/T 7714: Majid Mazouchi, Subramanya Nageshrao, Hamidreza Modares. Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(3): 466-481.
APA: Majid Mazouchi, Subramanya Nageshrao, & Hamidreza Modares. (2022). Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework. IEEE/CAA Journal of Automatica Sinica, 9(3), 466-481.
MLA: Majid Mazouchi, et al. "Conflict-Aware Safe Reinforcement Learning: A Meta-Cognitive Learning Framework". IEEE/CAA Journal of Automatica Sinica 9.3 (2022): 466-481.
Files in this item:
File: JAS-2021-0641.pdf (11662 KB); Document type: Journal article; Version: Published version; Access: Open access; License: CC BY-NC-SA