CASIA OpenIR
(This search is based on the user's claimed works)

Browse/Search Results: 12 items in total, showing items 1-10

FM3Q: Factorized Multi-Agent MiniMax Q-Learning for Two-Team Zero-Sum Markov Game (Journal article)
IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, Pages: 1-13
Authors: Guangzheng Hu; Yuanheng Zhu; Haoran Li; Dongbin Zhao
Adobe PDF (2144 KB)  |  Views/Downloads: 53/11  |  Submitted: 2024/06/05
Keywords: Games; Q-learning; Task analysis; Optimization; Convergence; Training; Nash equilibrium; Multi-agent reinforcement learning; minimax-Q learning; two-team zero-sum Markov games
Empirical Policy Optimization for n-Player Markov Games (Journal article)
IEEE Transactions on Cybernetics, 2022, DOI: 10.1109/TCYB.2022.3179775
Authors: Yuanheng Zhu; Weifan Li; Mengchen Zhao; Jianye Hao; Dongbin Zhao
Adobe PDF (1739 KB)  |  Views/Downloads: 117/48  |  Submitted: 2023/04/26
Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, Volume: 33, Issue: 3, Pages: 1228-1241
Authors: Zhu, Yuanheng; Zhao, Dongbin
Adobe PDF (2838 KB)  |  Views/Downloads: 260/16  |  Submitted: 2022/06/10
Keywords: Games; Nash equilibrium; Mathematical model; Markov processes; Convergence; Dynamic programming; Training; Deep reinforcement learning (DRL); generalized policy iteration (GPI); Markov game (MG); Q network; zero sum
StarCraft Micromanagement With Reinforcement Learning and Curriculum Transfer Learning (Journal article)
IEEE Transactions on Emerging Topics in Computational Intelligence, 2019, Volume: 3, Issue: 1, Pages: 73-84
Authors: Kun Shao; Yuanheng Zhu; Dongbin Zhao
Adobe PDF (4125 KB)  |  Views/Downloads: 368/138  |  Submitted: 2019/04/22
Keywords: Reinforcement Learning; Transfer Learning; Curriculum Learning; Neural Network; Game AI
Thermal Comfort Control Based on MEC Algorithm for HVAC System (Conference paper)
Killarney, Ireland, 12-17 July 2015
Authors: Li, Dong; Zhao, Dongbin; Zhu, Yuanheng; Xia, Zhongpu
Adobe PDF (895 KB)  |  Views/Downloads: 206/83  |  Submitted: 2017/12/28
A data-based online reinforcement learning algorithm with high-efficient exploration (Conference paper)
Orlando, FL, USA, December 2014
Authors: Yuanheng Zhu; Zhao DB (赵冬斌)
Adobe PDF (407 KB)  |  Views/Downloads: 223/85  |  Submitted: 2017/09/13
Online reinforcement learning for continuous-state systems (Book chapter)
From: Frontiers of Intelligent Control and Information Processing, Singapore: World Scientific, 2014
Authors: Yuanheng Zhu; Zhao DB (赵冬斌)
Adobe PDF (24150 KB)  |  Views/Downloads: 288/38  |  Submitted: 2017/09/13
Convergence Proof of Approximate Policy Iteration for Undiscounted Optimal Control of Discrete-Time Systems (Journal article)
COGNITIVE COMPUTATION, 2015, Volume: 7, Issue: 6, Pages: 763-771
Authors: Zhu, Yuanheng; Zhao, Dongbin; He, Haibo; Ji, Junhong
Adobe PDF (809 KB)  |  Views/Downloads: 279/44  |  Submitted: 2016/01/18
Keywords: Approximate Policy Iteration; Approximation Error; Optimal Control; Fuzzy Approximator
A data-based online reinforcement learning algorithm satisfying probably approximately correct principle (Journal article)
NEURAL COMPUTING & APPLICATIONS, 2015, Volume: 26, Issue: 4, Pages: 775-787
Authors: Zhu, Yuanheng; Zhao, Dongbin
Adobe PDF (1331 KB)  |  Views/Downloads: 286/64  |  Submitted: 2015/09/21
Keywords: Reinforcement Learning; Probably Approximately Correct; Kd-tree
MEC-A Near-Optimal Online Reinforcement Learning Algorithm for Continuous Deterministic Systems (Journal article)
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, Volume: 26, Issue: 2, Pages: 346-356
Authors: Zhao, Dongbin; Zhu, Yuanheng
Adobe PDF (2156 KB)  |  Views/Downloads: 302/118  |  Submitted: 2015/09/18
Keywords: Efficient Exploration; Probably Approximately Correct (PAC); Reinforcement Learning (RL); State Aggregation