POPO: Pessimistic Offline Policy Optimization
He Q(何强)1,2; Hou XW(侯新文)1; Liu Y(刘禹)1
2022-04
Conference: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference dates: 23-27 May 2022
Conference location: Singapore, Singapore
Publisher: IEEE
Abstract

Offline reinforcement learning (RL) aims to optimize a policy from large pre-recorded datasets without interacting with the environment. This setting promises to turn diverse, static datasets into policies without costly and risky active exploration. However, commonly used off-policy deep RL methods perform poorly when applied to arbitrary offline datasets. In this work, we show that value-based deep RL algorithms suffer from an estimation gap in the offline setting. To eliminate this gap, we propose a novel offline RL algorithm, Pessimistic Offline Policy Optimization (POPO), which learns a pessimistic value function. To demonstrate its effectiveness, we evaluate POPO on datasets of varying quality and find that it performs surprisingly well, scales to tasks with high-dimensional state and action spaces, and matches or outperforms the tested state-of-the-art offline RL algorithms on benchmark tasks.
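To make the "pessimistic value function" idea concrete, the sketch below shows one generic way to build a lower-bound Bellman target from two critic estimates, in the spirit of clipped double-Q learning. This is only an illustrative sketch under assumed conventions, not the authors' POPO implementation; the function name pessimistic_target and the weight kappa are hypothetical.

import torch

def pessimistic_target(q1_next, q2_next, reward, done, gamma=0.99, kappa=0.75):
    # NOTE: illustrative sketch of a pessimistic bootstrap target,
    # not the POPO algorithm itself. All names here are hypothetical.
    # Weight the smaller of two next-state Q estimates more heavily,
    # so uncertain (out-of-distribution) actions receive lower targets.
    q_min = torch.min(q1_next, q2_next)
    q_max = torch.max(q1_next, q2_next)
    q_pess = kappa * q_min + (1.0 - kappa) * q_max
    # Standard Bellman backup using the pessimistic bootstrap value.
    return reward + gamma * (1.0 - done) * q_pess

With kappa = 1 this reduces to the fully pessimistic minimum over the two critics; smaller values trade pessimism against underestimation. In an offline actor-critic loop, the critics would be regressed toward this target and the policy trained against the resulting pessimistic value estimate.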

Keywords: reinforcement learning; offline optimization; out-of-distribution
DOI: 10.1109/ICASSP43922.2022.9747886
Indexed by: EI
Language: English
Citation statistics
Times cited (WOS): 1
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/48891
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Brain-Machine Fusion and Cognitive Assessment
Corresponding author: Hou XW (侯新文)
Author affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
He Q, Hou XW, Liu Y. POPO: Pessimistic Offline Policy Optimization[C]. IEEE, 2022.
Files in this item:
POPO_Pessimistic_Offline_Policy_Optimization.pdf (1200 KB), Adobe PDF; conference paper; open access; license: CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.