A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data
Authors: Meng Linghui (1,2); Zhang Xi (1,2); Xing Dengpeng (1,2); Xu Bo (1,2)
Date: 2024-04
Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024
Conference Dates: 2024-04-14 to 2024-04-19
Conference Location: Seoul, Korea
Abstract

Offline multi-agent reinforcement learning (MARL) under the pre-training paradigm, in which a large quantity of trajectories is used for offline pre-training followed by online deployment, has recently gained popularity. While performing well on various tasks, conventional pre-trained decision-making models based on imitation learning typically require many expert trajectories or demonstrations, which limits the development of pre-trained policies in the multi-agent case. To address this problem, we propose a new setting in which a multi-agent policy is pre-trained offline on suboptimal (non-expert) data and then deployed online with the expectation of high rewards. For this practical setting, we propose YANHUI, a simple yet effective framework inspired by contrastive learning that uses a well-designed reward-contrast function to learn multi-agent policy representations from a dataset spanning multiple reward levels rather than expert trajectories alone. Furthermore, we enrich multi-agent policy pre-training with a mixture-of-experts architecture to represent the policy dynamically. With the same quantity of offline StarCraft Multi-Agent Challenge data, YANHUI achieves significant improvements over offline MARL baselines. In particular, our method remains competitive with earlier state-of-the-art approaches even when only 10% of the expert data used by other baselines is available and the rest is replaced with poor data.
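The abstract names two ingredients: a reward-contrast objective for representation learning over mixed-quality trajectories, and a mixture-of-experts policy representation. As a rough illustration only, below is a minimal PyTorch sketch of both ideas. Every name here (MoEPolicyEncoder, reward_contrast_loss, the temperature and threshold parameters, the toy dimensions) is hypothetical and not taken from the paper; consult the full PDF above for the actual YANHUI objective.

```python
# Hypothetical sketch of the two ingredients named in the abstract:
# a mixture-of-experts encoder and a reward-level contrastive loss.
# Names and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEPolicyEncoder(nn.Module):
    """Encode observations with a softly gated mixture of expert MLPs."""

    def __init__(self, obs_dim: int, hidden: int = 128, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(obs_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)            # (B, E)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, H)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)     # (B, H)


def reward_contrast_loss(z: torch.Tensor, returns: torch.Tensor,
                         temperature: float = 0.1,
                         threshold: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss pulling together embeddings of trajectory
    segments with similar returns and pushing apart dissimilar ones."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature                            # (B, B)
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    # Positive pairs: returns within `threshold` of each other.
    pos = (returns.unsqueeze(0) - returns.unsqueeze(1)).abs() < threshold
    pos = pos & ~eye
    # Log-softmax over all non-self pairs, then average the
    # log-likelihood of each anchor's positives.
    sim = sim.masked_fill(eye, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    pos_count = pos.sum(dim=1)
    valid = pos_count > 0  # skip anchors that have no positive pair
    return (per_anchor[valid] / pos_count[valid]).mean()


# Toy usage: 32 random "trajectory" features with mixed reward levels.
encoder = MoEPolicyEncoder(obs_dim=16)
obs = torch.randn(32, 16)
returns = torch.rand(32)  # normalized episode returns in [0, 1]
loss = reward_contrast_loss(encoder(obs), returns)
loss.backward()
```

The contrast here is driven purely by return similarity, which is one plausible reading of "reward contrast": trajectories at the same reward level become positives regardless of quality, so even poor data contributes signal to the representation.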

Sub-direction Classification (Seven Major Directions): Multi-Agent Systems
State Key Laboratory Planning Direction: Multi-Agent Decision-Making
Associated Dataset Requiring Deposit:
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57331
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Auditory Models and Cognitive Computing
Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Meng Linghui, Zhang Xi, Xing Dengpeng, et al. A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data[C], 2024.
Files in This Item:
File: yanhui_full_paper.pdf (964 KB)
Document Type: Conference Paper
Access: Open Access
License: CC BY-NC-SA