Learning State-Specific Action Masks for Reinforcement Learning
Wang ZY(王梓薏)1,2; Li XR(李欣然)1,2; Sun LY(孙罗洋)1,2; Zhang HF(张海峰)1,2,3; Liu HL(刘华林)4; Jun Wang5
Published in: Algorithms
Publication date: 2024-01
Volume 17, Issue 2, Page 60
Abstract

Efficient yet sufficient exploration remains a critical challenge in reinforcement learning (RL), especially for Markov Decision Processes (MDPs) with vast action spaces. Previous approaches have commonly involved projecting the original action space into a latent space or employing environmental action masks to reduce the action possibilities. Nevertheless, these methods often lack interpretability or rely on expert knowledge. In this study, we introduce a novel method for automatically reducing the action space in environments with discrete action spaces while preserving interpretability. The proposed approach learns state-specific masks with a dual purpose: (1) eliminating actions with minimal influence on the MDP and (2) aggregating actions with identical behavioral consequences within the MDP. Specifically, we introduce a novel concept called Bisimulation Metrics on Actions by States (BMAS) to quantify the behavioral consequences of actions within the MDP and design a dedicated mask model to ensure their binary nature. Crucially, we present a practical learning procedure for training the mask model, leveraging transition data collected by any RL policy. Our method is designed to be plug-and-play and adaptable to all RL policies; to validate its effectiveness, we integrate it into two prominent RL algorithms, DQN and PPO. Experimental results obtained from Maze, Atari, and μRTS2 reveal a substantial acceleration in the RL learning process and noteworthy performance improvements facilitated by the introduced approach.
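The core masking idea described in the abstract, applying a learned state-specific binary mask so that only surviving actions are considered during action selection, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and example arrays are hypothetical.

```python
import numpy as np

def masked_greedy_action(q_values, mask):
    """Pick the greedy action among unmasked actions.

    q_values: (n_actions,) array of Q-values for the current state.
    mask: (n_actions,) binary array; 1 = action kept, 0 = action pruned
          by the state-specific mask model.
    """
    # Masked-out actions get -inf so argmax can never select them.
    masked_q = np.where(mask.astype(bool), q_values, -np.inf)
    return int(np.argmax(masked_q))

# Hypothetical example: 5 actions; the mask prunes actions 1 and 3.
q = np.array([0.2, 0.9, 0.5, 1.0, 0.1])
m = np.array([1, 0, 1, 0, 1])
print(masked_greedy_action(q, m))  # prints 2: action 3 has the highest Q but is masked out
```

In a PPO-style integration the same mask would instead be applied to the policy logits before the softmax, which shrinks the effective action space without changing the network architecture.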

Keywords: reinforcement learning; exploration efficiency; space reduction
Indexed by: EI
Sub-direction classification (seven major directions): Theory and Methods of Decision Intelligence
State Key Laboratory planned direction: Multi-Agent Decision-Making
Associated dataset requiring deposit:
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/58507
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Group Decision Intelligence Team
Corresponding author: Zhang HF(张海峰)
Author affiliations:
1.Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2.School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
3.Nanjing Artificial Intelligence Research of IA, Jiangning District, Nanjing 211135, China
4.Key Laboratory of Oil & Gas Business Chain Optimization, Petrochina Planning and Engineering Institute, CNPC, Beijing 100083, China
5.Computer Science, University College London, London WC1E 6BT, UK
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Wang ZY, Li XR, Sun LY, et al. Learning State-Specific Action Masks for Reinforcement Learning[J]. Algorithms, 2024, 17(2): 60.
APA: Wang ZY, Li XR, Sun LY, Zhang HF, Liu HL, & Jun Wang. (2024). Learning State-Specific Action Masks for Reinforcement Learning. Algorithms, 17(2), 60.
MLA: Wang ZY, et al. "Learning State-Specific Action Masks for Reinforcement Learning". Algorithms 17.2 (2024): 60.
Files in this item:
algorithms-17-00060-mask.pdf (2976 KB) | Journal article | Author accepted manuscript | Open access | CC BY-NC-SA
File name: algorithms-17-00060-mask.pdf
Format: Adobe PDF
 

Unless otherwise specified, all content in this repository is protected by copyright, with all rights reserved.