Curiosity-Driven Exploration for Off-Policy Reinforcement Learning Methods
Li, Boyao [1,2]; Lu, Tao [1]; Li, Jiayi [1,2]; Lu, Ning [1,2]; Cai, Yinghao [1]; Wang, Shuo [1,2]
Date: 2019-12
Conference: IEEE International Conference on Robotics and Biomimetics
Conference Dates: 2019.12.06-2019.12.08
Conference Venue: Dali, China
Abstract

Deep reinforcement learning (DRL) has achieved remarkable results in many high-dimensional continuous control tasks. However, the RL agent still explores the environment randomly, which leads to low exploration efficiency and poor learning performance, especially in robotic manipulation tasks with sparse rewards. To address this problem, we introduce a simplified Intrinsic Curiosity Module (S-ICM) into off-policy RL methods that encourages the agent to pursue novel and surprising states, thereby improving its exploration ability. The method can be combined with any off-policy RL algorithm. We evaluate our approach on three challenging robotic manipulation tasks provided by OpenAI Gym, combining it with Deep Deterministic Policy Gradient (DDPG) both with and without Hindsight Experience Replay (HER). The empirical results show that the proposed method significantly outperforms the vanilla RL algorithms in both sample efficiency and learning performance.
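The abstract does not spell out the S-ICM architecture, so the following is only a minimal sketch of a curiosity bonus in the general style of the Intrinsic Curiosity Module (Pathak et al., 2017): a learned forward model predicts the next state, and its prediction error is added to the sparse environment reward. The ForwardModel class, hidden layer size, and eta scale are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Learned dynamics model: predicts s_{t+1} from (s_t, a_t).
    Hypothetical stand-in for the paper's S-ICM; sizes are arbitrary."""
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def intrinsic_reward(model, state, action, next_state, eta=0.1):
    """Curiosity bonus: the forward model's prediction error.
    Novel transitions are poorly predicted, so they earn a larger bonus."""
    with torch.no_grad():
        predicted = model(state, action)
    return eta * (predicted - next_state).pow(2).mean(dim=-1)

# In an off-policy loop (e.g. DDPG, with or without HER), the bonus is
# added to the sparse environment reward before the transition is stored
# in the replay buffer, and the forward model is trained on the same
# batches with an MSE loss:
#   r_total = r_env + intrinsic_reward(fwd, s, a, s_next)
#   loss = F.mse_loss(fwd(s, a), s_next)
```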

Indexed By: EI
Language: English
Research Direction: Intelligent Robotics
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/40235
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems, Intelligent Robot Systems Research
Corresponding Author: Lu, Tao
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Li, Boyao, Lu, Tao, Li, Jiayi, et al. Curiosity-Driven Exploration for Off-Policy Reinforcement Learning Methods[C], 2019.
File: robio_final_IEEE.pdf (2877 KB), Conference Paper, Open Access, License: CC BY-NC-SA