CASIA OpenIR
(Results are based on the user's claimed publications)

Browse/Search Results: 8 items in total, showing 1-8

Advantage Constrained Proximal Policy Optimization in Multi-Agent Reinforcement Learning  Conference Paper
, Queensland, June 2023
Authors:  Li WF (李伟凡);  Zhu YH (朱圆恒);  Zhao DB (赵冬斌)
Adobe PDF (4104 KB)  |  Views/Downloads: 224/73  |  Submitted: 2023/06/29
Keywords: multi-agent; reinforcement learning; policy gradient
Enhanced Rolling Horizon Evolution Algorithm With Opponent Model Learning: Results for the Fighting Game AI Competition  Journal Article
IEEE TRANSACTIONS ON GAMES, 2023, Volume 5, Issue 1, Pages 5-15
Authors:  Zhentao Tang;  Yuanheng Zhu;  Dongbin Zhao;  Simon M. Lucas
Adobe PDF (7686 KB)  |  Views/Downloads: 260/65  |  Submitted: 2021/07/05
Keywords: Rolling horizon evolution; opponent model; reinforcement learning; supervised learning; fighting game
A Review of Computational Intelligence for StarCraft AI  Conference Paper
, Bangalore, India, 18-21 Nov. 2018
Authors:  Tang, Zhentao;  Shao, Kun;  Zhu, Yuanheng;  Li, Dong;  Zhao, Dongbin;  Huang, Tingwen
Adobe PDF (131 KB)  |  Views/Downloads: 495/226  |  Submitted: 2019/04/25
StarCraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning  Journal Article
IEEE Transactions on Emerging Topics in Computational Intelligence, 2019, Volume 3, Issue 1, Pages 73-84
Authors:  Kun Shao;  Yuanheng Zhu;  Dongbin Zhao
Adobe PDF (4125 KB)  |  Views/Downloads: 342/132  |  Submitted: 2019/04/22
Keywords: Reinforcement Learning; Transfer Learning; Curriculum Learning; Neural Network; Game AI
Cooperative Reinforcement Learning for Multiple Units Combat in StarCraft  Conference Paper
, Honolulu, Hawaii, USA, Nov. 27 - Dec. 1, 2017
Authors:  Shao K (邵坤);  Zhu YH (朱圆恒);  Zhao DB (赵冬斌)
Adobe PDF (1378 KB)  |  Views/Downloads: 540/266  |  Submitted: 2017/09/20
Online reinforcement learning for continuous-state systems  Book Chapter
In: Frontiers of Intelligent Control and Information Processing, Singapore: World Scientific, 2014
Authors:  Yuanheng Zhu;  Zhao DB (赵冬斌)
Adobe PDF (24150 KB)  |  Views/Downloads: 249/27  |  Submitted: 2017/09/13
MEC-A Near-Optimal Online Reinforcement Learning Algorithm for Continuous Deterministic Systems  Journal Article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, Volume 26, Issue 2, Pages 346-356
Authors:  Zhao, Dongbin;  Zhu, Yuanheng
Adobe PDF (2156 KB)  |  Views/Downloads: 271/110  |  Submitted: 2015/09/18
Keywords: Efficient Exploration; Probably Approximately Correct (PAC); Reinforcement Learning (RL); State Aggregation
Near-Optimal Online Reinforcement Learning for Continuous-State Systems (连续状态系统的近似最优在线强化学习)  Thesis
, Institute of Automation, Chinese Academy of Sciences: University of Chinese Academy of Sciences, 2015
Authors:  Zhu Yuanheng (朱圆恒)
Adobe PDF (2679 KB)  |  Views/Downloads: 508/0  |  Submitted: 2015/09/02
Keywords: Reinforcement Learning; Optimal Control; Approximate Policy Iteration; Probably Approximately Correct; Continuous-State System; Convergence; Online Learning; Kd-tree