Knowledge Commons of Institute of Automation,CAS
A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data
Meng Linghui 1,2
2024-04
Conference Name | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2024 |
Conference Dates | 2024.4.14–2024.4.19 |
Conference Location | Seoul, Korea |
Abstract | Offline multi-agent reinforcement learning (MARL) with a pre-training paradigm, which uses a large quantity of trajectories for offline pre-training followed by online deployment, has recently become popular. While performing well on various tasks, conventional pre-trained decision-making models based on imitation learning typically require many expert trajectories or demonstrations, which limits the development of pre-trained policies in the multi-agent case. To address this problem, we propose a new setting in which a multi-agent policy is pre-trained offline using suboptimal (non-expert) data and then tested online with the expectation of high rewards. For this practical setting, inspired by contrastive learning, we propose YANHUI, a simple yet effective framework that utilizes a well-designed reward contrast function to learn multi-agent policy representations from a dataset containing data at various reward levels rather than only expert trajectories. Furthermore, we enrich the multi-agent policy pre-training with a mixture-of-experts to represent the policy dynamically. With the same quantity of offline StarCraft Multi-Agent Challenge data, YANHUI achieves significant improvements over offline MARL baselines. In particular, our method remains competitive with earlier state-of-the-art approaches even when using only 10% of the expert data used by other baselines, with the remainder replaced by poor data. |
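The record does not specify YANHUI's reward contrast function. As an illustration only, a generic reward-level contrastive loss over trajectory embeddings, in which trajectories with similar episode returns are treated as positive pairs, might be sketched as follows; the function name, the return-difference threshold, and the InfoNCE-style formulation are all assumptions, not the paper's method:

```python
import math

def reward_contrastive_loss(embeddings, returns, temperature=0.1, threshold=1.0):
    """InfoNCE-style loss over trajectory embeddings: pairs whose episode
    returns differ by less than `threshold` are treated as positives.
    Hypothetical formulation for illustration, not the paper's exact function."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    n = len(returns)
    # temperature-scaled cosine-similarity matrix between trajectory embeddings
    sim = [[cos(embeddings[i], embeddings[j]) / temperature for j in range(n)]
           for i in range(n)]
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and abs(returns[i] - returns[j]) < threshold]
        if not positives:
            continue
        # log-sum-exp over all other trajectories (the contrast set)
        denom = math.log(sum(math.exp(sim[i][j]) for j in range(n) if j != i))
        for j in positives:
            total += denom - sim[i][j]  # negative log-softmax of each positive
            count += 1
    return total / max(count, 1)
```

Under this sketch, the loss is small when trajectories with similar returns already have similar embeddings, and large when the embedding space ignores reward level, which is the behavior a reward contrast objective is meant to induce.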
Subdirection Classification (Seven Major Directions) | Multi-Agent Systems |
State Key Laboratory Planning Direction | Multi-Agent Decision-Making |
Associated Dataset Requiring Deposit | No |
Document Type | Conference Paper |
Item Identifier | http://ir.ia.ac.cn/handle/173211/57331 |
Collection | Laboratory of Cognition and Decision Intelligence for Complex Systems_Auditory Models and Cognitive Computing |
Author Affiliations | 1. Institute of Automation, Chinese Academy of Sciences 2. School of Artificial Intelligence, University of Chinese Academy of Sciences |
First Author's Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Meng Linghui, Zhang Xi, Xing Dengpeng, et al. A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data[C], 2024. |
Files in This Item |
File Name/Size | Document Type | Version Type | Access Type | License |
yanhui_full_paper.pd (964KB) | Conference Paper | | Open Access | CC BY-NC-SA |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.