Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games
Authors: Zhu, Yuanheng [1,2]; Zhao, Dongbin [1,2]
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X
Publication Date: 2022-03-01
Volume: 33, Issue: 3, Pages: 1228-1241
Abstract

The Nash equilibrium is an important concept in game theory. It describes a policy profile under which each player is least exploitable by any opponent. We combine game theory, dynamic programming, and recent deep reinforcement learning (DRL) techniques to learn the Nash equilibrium policy of two-player zero-sum Markov games (TZMGs) online. The problem is first formulated as a Bellman minimax equation, and generalized policy iteration (GPI) provides a double-loop iterative way to find the equilibrium. Neural networks are then introduced to approximate the Q functions for large-scale problems. An online minimax Q network learning algorithm is proposed to train the network from observations. Experience replay, the dueling network architecture, and double Q-learning are applied to improve the learning process. The contributions are twofold: 1) DRL techniques are combined with GPI to find the TZMG Nash equilibrium for the first time, and 2) the convergence of the online learning algorithm with a lookup table and experience replay is proven, and the proof is not only useful for TZMGs but also instructive for single-agent Markov decision problems. Experiments on different examples validate the effectiveness of the proposed algorithm on TZMG problems.
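For orientation, the Bellman minimax equation referenced above takes the following standard form for TZMGs (written here in conventional minimax-Q notation as a sketch; the paper's exact symbols may differ). Here a is the protagonist's action, o the opponent's action, and Delta(A) the set of mixed strategies over the action set A:

    V^*(s) = \max_{\pi \in \Delta(A)} \min_{o \in O} \sum_{a \in A} \pi(a) \, Q^*(s, a, o),
    Q^*(s, a, o) = r(s, a, o) + \gamma \sum_{s'} p(s' \mid s, a, o) \, V^*(s').

GPI then alternates the two loops described in the abstract: an inner loop that evaluates Q under the current strategies, and an outer loop that re-solves the state-wise matrix game above to improve them.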
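As a concrete illustration of the lookup-table case whose convergence the abstract mentions, below is a minimal Python sketch of Littman-style minimax Q-learning. It is an assumption-laden sketch, not the paper's implementation: the names solve_matrix_game and minimax_q_update are invented for illustration, and the state-wise matrix game is solved as a linear program with SciPy.

    import numpy as np
    from scipy.optimize import linprog

    def solve_matrix_game(M):
        # Return (pi, v) with v = max_pi min_o pi^T M[:, o], solved as an LP.
        n_a, n_o = M.shape
        c = np.zeros(n_a + 1)
        c[-1] = -1.0                                  # linprog minimizes, so minimize -v
        A_ub = np.hstack([-M.T, np.ones((n_o, 1))])   # v - pi^T M[:, o] <= 0 for each o
        b_ub = np.zeros(n_o)
        A_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])  # probabilities sum to 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * n_a + [(None, None)]   # pi >= 0, game value v is free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:n_a], res.x[-1]

    def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.99):
        # Q has shape (n_states, n_actions, n_opponent_actions).
        # The TD target bootstraps on the minimax value of the next state.
        _, v_next = solve_matrix_game(Q[s_next])
        Q[s, a, o] += alpha * (r + gamma * v_next - Q[s, a, o])

In the deep variant the abstract describes, the table Q is replaced by a Q network trained toward the same minimax target, with experience replay, a dueling architecture, and double Q-learning stabilizing the updates.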

Keywords: Games; Nash equilibrium; Mathematical model; Markov processes; Convergence; Dynamic programming; Training; Deep reinforcement learning (DRL); Generalized policy iteration (GPI); Markov game (MG); Q network; Zero sum
DOI: 10.1109/TNNLS.2020.3041469
Keywords [WOS]: NONLINEAR-SYSTEMS; GO; ALGORITHM; LEVEL
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2018AAA0102404]; National Key Research and Development Program of China [2018AAA0101005]
Funder: National Key Research and Development Program of China
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS ID: WOS:000766269100030
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions): Reinforcement and Evolutionary Learning
State Key Laboratory Planned Research Direction: Intelligent Game and Opponent Modeling
Associated Dataset Requiring Deposit:
Citation Statistics
Times Cited (WOS): 26
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/48235
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
Corresponding Author: Zhao, Dongbin
Affiliations:
1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation:
GB/T 7714: Zhu, Yuanheng, Zhao, Dongbin. Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(3): 1228-1241.
APA: Zhu, Yuanheng, & Zhao, Dongbin. (2022). Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(3), 1228-1241.
MLA: Zhu, Yuanheng, et al. "Online Minimax Q Network Learning for Two-Player Zero-Sum Markov Games." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.3 (2022): 1228-1241.
Files in This Item:
File Name/Size: Online_Minimax_Q_Network_Learning_for_Two-Player_Zero-Sum_Markov_Games.pdf (2838 KB, Adobe PDF)
Document Type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA