Knowledge Commons of Institute of Automation, CAS
Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games
Author | Zhu, Yuanheng1,2
Journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN | 2162-237X |
Publication Date | 2022-03-01
Volume | 33; Issue: 3; Pages: 1228-1241
Abstract | The Nash equilibrium is an important concept in game theory. It characterizes the least exploitability of one player against any opponent. We combine game theory, dynamic programming, and recent deep reinforcement learning (DRL) techniques to learn the Nash equilibrium policy online for two-player zero-sum Markov games (TZMGs). The problem is first formulated as a Bellman minimax equation, and generalized policy iteration (GPI) provides a double-loop iterative way to find the equilibrium. Then, neural networks are introduced to approximate Q functions for large-scale problems. An online minimax Q network learning algorithm is proposed to train the network with observations. Experience replay, dueling network, and double Q-learning are applied to improve the learning process. The contributions are twofold: 1) DRL techniques are combined with GPI to find the TZMG Nash equilibrium for the first time, and 2) the convergence of the online learning algorithm with a lookup table and experience replay is proven; the proof is useful not only for TZMGs but also instructive for single-agent Markov decision problems. Experiments on different examples validate the effectiveness of the proposed algorithm on TZMG problems.
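The Bellman minimax formulation in the abstract can be illustrated with a small tabular sketch. The snippet below is a minimal, hypothetical example of a minimax Q-learning backup for a two-player zero-sum Markov game: the stage matrix game at the successor state is solved by linear programming to obtain the minimax value, which then drives a temporal-difference update of a lookup-table Q(s, a, b). The problem sizes, learning rate, and helper names (minimax_value, minimax_q_update) are illustrative assumptions, not the paper's setup; the paper itself learns Q with neural networks, experience replay, a dueling architecture, and double Q-learning rather than this lookup table.

```python
# Minimal sketch of a tabular Bellman minimax backup for a two-player
# zero-sum Markov game. All sizes and hyperparameters below are assumed
# for illustration only.
import numpy as np
from scipy.optimize import linprog

n_states, n_a, n_b = 5, 3, 3         # assumed state/action space sizes
gamma, alpha = 0.95, 0.1             # discount factor and learning rate
Q = np.zeros((n_states, n_a, n_b))   # lookup-table Q(s, a, b)


def minimax_value(Q_s):
    """Solve the stage matrix game max_pi min_b sum_a pi(a) * Q_s[a, b] by LP."""
    n_a, n_b = Q_s.shape
    # Decision variables: [pi(1), ..., pi(n_a), v]; maximize v == minimize -v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action b: v - sum_a pi(a) * Q_s[a, b] <= 0.
    A_ub = np.hstack([-Q_s.T, np.ones((n_b, 1))])
    b_ub = np.zeros(n_b)
    # Mixed strategy pi must sum to one.
    A_eq = np.ones((1, n_a + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]  # pi in [0, 1], v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    pi, v = res.x[:n_a], res.x[-1]
    return pi, v


def minimax_q_update(s, a, b, r, s_next):
    """One online backup: Q(s,a,b) += alpha * (r + gamma * V(s') - Q(s,a,b))."""
    _, v_next = minimax_value(Q[s_next])
    Q[s, a, b] += alpha * (r + gamma * v_next - Q[s, a, b])
```

Roughly, the linear-program solve plays the role of the improvement (matrix-game) step in the GPI view, while repeated temporal-difference backups approximate the evaluation step; the paper replaces the lookup table with a Q network trained from replayed observations.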
Keywords | Games; Nash equilibrium; Mathematical model; Markov processes; Convergence; Dynamic programming; Training; Deep reinforcement learning (DRL); generalized policy iteration (GPI); Markov game (MG); Q network; zero sum
DOI | 10.1109/TNNLS.2020.3041469 |
Keywords [WOS] | NONLINEAR-SYSTEMS; GO; ALGORITHM; LEVEL
Indexed By | SCI
Language | English
Funding Project | National Key Research and Development Program of China [2018AAA0102404]; National Key Research and Development Program of China [2018AAA0101005]
Funder | National Key Research and Development Program of China
WOS Research Area | Computer Science; Engineering
WOS Subject | Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Record ID | WOS:000766269100030
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions) | Reinforcement and Evolutionary Learning
State Key Laboratory Planning Direction | Intelligent Gaming and Opponent Modeling
Associated Dataset Requiring Deposit | No
Document Type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/48235
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
Corresponding Author | Zhao, Dongbin
Affiliation | 1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Zhu, Yuanheng, Zhao, Dongbin. Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(3): 1228-1241.
APA | Zhu, Yuanheng, & Zhao, Dongbin. (2022). Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(3), 1228-1241.
MLA | Zhu, Yuanheng, et al. "Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.3 (2022): 1228-1241.
Files in This Item:
File Name/Size | Document Type | Version | Open Access | License
Online_Minimax_Q_Net(2838KB) | Journal article | Author's accepted manuscript | Open access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.