Knowledge Commons of Institute of Automation, CAS
Curiosity-Driven Exploration for Off-Policy Reinforcement Learning Methods
Li, Boyao1,2; Lu, Tao1; Li, Jiayi1,2; Lu, Ning1,2; Cai, Yinghao1; Wang, Shuo1,2
2019-12
Conference Name | IEEE International Conference on Robotics and Biomimetics
Conference Date | 2019.12.06-2019.12.08
Conference Venue | Dali, China
Abstract | Deep reinforcement learning (DRL) has achieved remarkable results in many high-dimensional continuous control tasks. However, the RL agent still explores the environment randomly, resulting in low exploration efficiency and poor learning performance, especially in robotic manipulation tasks with sparse rewards. To address this problem, we introduce a simplified Intrinsic Curiosity Module (S-ICM) into off-policy RL methods to encourage the agent to pursue novel and surprising states, improving its exploration competence. The method can be combined with an arbitrary off-policy RL algorithm. We evaluate our approach on three challenging robotic manipulation tasks provided by OpenAI Gym, combining our method with Deep Deterministic Policy Gradient (DDPG) with and without Hindsight Experience Replay (HER). The empirical results show that the proposed method significantly outperforms the vanilla RL algorithms in both sample efficiency and learning performance. (A minimal sketch of the curiosity-bonus idea appears after the citation below.)
Indexed By | EI
Language | English
Sub-direction Classification | Intelligent Robotics
Document Type | Conference Paper
Identifier | http://ir.ia.ac.cn/handle/173211/40235
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems, Intelligent Robot Systems Research
Corresponding Author | Lu, Tao
Affiliation | 1. Institute of Automation, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Li, Boyao, Lu, Tao, Li, Jiayi, et al. Curiosity-Driven Exploration for Off-Policy Reinforcement Learning Methods[C], 2019.
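The abstract describes the method only at a high level: a curiosity module rewards the agent for visiting novel, surprising states on top of the sparse task reward. Below is a minimal sketch of that general ICM-style idea, assuming a learned forward dynamics model whose per-transition prediction error serves as the intrinsic bonus. It is not the paper's exact S-ICM; the network architecture, the reward scale `eta`, and the choice to predict raw next states rather than learned features are all assumptions for illustration.

```python
# Minimal sketch of an ICM-style curiosity bonus for off-policy RL.
# Illustrative only: network sizes, the scale `eta`, and predicting raw
# next states (instead of learned features) are assumptions, not the
# paper's exact S-ICM.
import torch
import torch.nn as nn


class ForwardModel(nn.Module):
    """Predicts the next state from (state, action); its prediction
    error on a transition serves as the intrinsic curiosity reward."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


def curiosity_reward(model: ForwardModel,
                     state: torch.Tensor,
                     action: torch.Tensor,
                     next_state: torch.Tensor,
                     eta: float = 0.1) -> torch.Tensor:
    """Intrinsic reward = scaled prediction error of the forward model."""
    with torch.no_grad():
        pred = model(state, action)
        return eta * (pred - next_state).pow(2).mean(dim=-1)


if __name__ == "__main__":
    state_dim, action_dim = 10, 4
    model = ForwardModel(state_dim, action_dim)
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fake replay-buffer batch for illustration.
    s = torch.randn(32, state_dim)
    a = torch.randn(32, action_dim)
    s_next = torch.randn(32, state_dim)
    r_extrinsic = torch.zeros(32)  # sparse task reward

    # Augment the sparse reward with the curiosity bonus before the
    # critic update of any off-policy learner (e.g., DDPG).
    r_total = r_extrinsic + curiosity_reward(model, s, a, s_next)

    # Train the forward model on the same batch so the bonus shrinks
    # for familiar transitions and stays high for novel ones.
    loss = (model(s, a) - s_next).pow(2).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
    print(r_total.shape, float(loss))
```

In an actual DDPG or DDPG+HER setup, the bonus would be applied to each minibatch sampled from the replay buffer before computing critic targets, with the forward model trained alongside the policy, so that well-predicted (familiar) regions of the state space gradually lose their bonus.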
Files in This Item:
File Name/Size | Document Type | Open Access Type | License
robio_final_IEEE.pdf (2877KB) | Conference Paper | Open Access | CC BY-NC-SA