Mixing Update Q-value for Deep Reinforcement Learning
Li Zhunan ¹,²; Hou Xinwen ¹
2019-09
Conference Name: International Joint Conference on Neural Networks (IJCNN)
Pages: 1-6
Conference Dates: July 14-19, 2019
Conference Location: Budapest, Hungary
Proceedings Editor / Conference Organizer: IEEE
Publisher: IEEE
Abstract

Value-based reinforcement learning methods, such as deep Q-learning, are known to overestimate action values, which can lead to suboptimal policies. This problem also persists in actor-critic algorithms. In this paper, we propose a novel mechanism to minimize its effect on both the critic and the actor. Our mechanism builds on Double Q-learning by mixing the update action value from the minimum and maximum of a pair of critics to limit overestimation. We then propose a specific adaptation to the Twin Delayed Deep Deterministic policy gradient algorithm (TD3) and show that the resulting algorithm not only reduces the observed overestimation, as hypothesized, but also achieves much better performance on several tasks.

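To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of how such a mixed update target might be computed. It assumes a mixing coefficient `beta` that blends the minimum and maximum of a pair of target critics; `beta`, the network sizes, and the function names are illustrative assumptions, not the paper's exact implementation (standard TD3 uses only the minimum).

```python
# A minimal sketch of a mixed-critic target in the spirit of the abstract.
# The mixing coefficient `beta`, network architecture, and names are
# illustrative assumptions, not the authors' exact method.
import torch
import torch.nn as nn

class Critic(nn.Module):
    """A small Q-network taking a state-action pair."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def mixed_td_target(q1_target, q2_target, next_state, next_action,
                    reward, not_done, gamma=0.99, beta=0.75):
    """Blend the minimum and maximum of the twin target critics,
    instead of taking only the minimum as clipped Double Q-learning
    (TD3) does."""
    with torch.no_grad():
        q1 = q1_target(next_state, next_action)
        q2 = q2_target(next_state, next_action)
        q_min = torch.min(q1, q2)   # pessimistic estimate
        q_max = torch.max(q1, q2)   # optimistic estimate
        q_mix = beta * q_min + (1.0 - beta) * q_max
        return reward + not_done * gamma * q_mix
```

Under this reading, `beta = 1` recovers TD3's clipped Double Q-learning target, while smaller values trade some of that pessimism for optimism; both critics would then be regressed toward the shared mixed target, as in TD3.
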
Subject Category: Engineering
DOI: 10.1109/IJCNN.2019.8852397
Indexed By: EI
Language: English
Sub-direction Classification (Seven Major Directions): Reinforcement and Evolutionary Learning
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/39160
Collection: Complex Systems Cognition and Decision Laboratory, Intelligent Systems and Engineering
Corresponding Author: Hou Xinwen
Author Affiliations:
1. Center for Research on Intelligent System and Engineering, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
First Author's Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author's Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Li Zhunan, Hou Xinwen. Mixing Update Q-value for Deep Reinforcement Learning[C]//International Joint Conference on Neural Networks (IJCNN). IEEE, 2019: 1-6.
Files in This Item:
PID5846947.pdf (468 KB, Adobe PDF) · Conference Paper · Open Access · License: CC BY-NC-SA