PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction
Bai FS(白丰硕)1,2; Zhang HM(张鸿铭)3; Tao TY(陶天阳)4; Wu ZH(武志亨)1,2; Wang YN(王燕娜)2; Xu B(徐博)2
2023-06
Conference Name: the AAAI Conference on Artificial Intelligence
Source Publication: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 37
Issue: 6
Pages: 6728-6736
Conference Date: 2023.02.07 - 2023.02.14
Conference Place: Washington, DC, United States
Country: United States
Author of Source: Brian Williams; Sara Bernardini; Yiling Chen; Jennifer Neville
Publication Place: United States
Contribution Rank: 1
Abstract

Multi-task deep reinforcement learning (DRL) ambitiously aims to train a general agent that masters multiple tasks simultaneously. However, the varying learning speeds of different tasks, compounded by negative gradient interference, make policy learning inefficient. In this work, we propose PiCor, an efficient multi-task DRL framework that splits learning into a policy optimization phase and a policy correction phase. The policy optimization phase improves the policy with any DRL algorithm on the sampled single task, without considering other tasks. The policy correction phase first constructs a performance constraint set with adaptive weight adjustment. The intermediate policy learned in the first phase is then constrained to this set, which controls the negative interference and balances the learning speeds across tasks. Empirically, we demonstrate that PiCor outperforms previous methods and significantly improves sample efficiency on simulated robotic manipulation and continuous control tasks. We additionally show that adaptive weight adjustment can further improve data efficiency and performance.
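
The abstract describes a two-phase training loop: per-task policy optimization followed by a policy correction step onto a performance constraint set with adaptive weights. Below is a minimal, self-contained Python/NumPy sketch of that structure only, not the authors' implementation: the quadratic stand-in tasks, the finite-difference update, and the helper names (make_task, policy_optimization, adaptive_weights, policy_correction) are hypothetical, and the projection onto the constraint set is crudely approximated by interpolating back toward the previous policy.

import numpy as np

rng = np.random.default_rng(0)

def make_task(dim):
    # Hypothetical stand-in task: a quadratic "return" over a shared policy parameter vector.
    target = rng.normal(size=dim)
    return lambda theta: -float(np.sum((theta - target) ** 2))

def policy_optimization(theta, task, lr=0.1, eps=1e-4):
    # Phase 1: improve the policy on the sampled task alone (stand-in for "any DRL algorithm"),
    # here a single finite-difference gradient-ascent step.
    grad = np.array([(task(theta + eps * e) - task(theta - eps * e)) / (2 * eps)
                     for e in np.eye(len(theta))])
    return theta + lr * grad

def adaptive_weights(returns):
    # Up-weight tasks that lag behind -- a crude proxy for balancing learning speeds.
    lag = returns.max() - returns
    w = lag + 1e-6
    return w / w.sum()

def policy_correction(theta_mid, theta_old, tasks, weights, tol=1e-3):
    # Phase 2: pull the intermediate policy back toward the previous one until the weighted
    # performance drop across tasks is small -- a rough substitute for projecting onto the
    # performance constraint set described in the abstract.
    for _ in range(50):
        drops = np.array([max(0.0, t(theta_old) - t(theta_mid)) for t in tasks])
        if float(weights @ drops) <= tol:
            break
        theta_mid = 0.9 * theta_mid + 0.1 * theta_old
    return theta_mid

dim, n_tasks = 8, 4
tasks = [make_task(dim) for _ in range(n_tasks)]
theta = np.zeros(dim)
for step in range(300):
    k = rng.integers(n_tasks)                              # sample a single task
    theta_mid = policy_optimization(theta, tasks[k])       # policy optimization phase
    w = adaptive_weights(np.array([t(theta) for t in tasks]))
    theta = policy_correction(theta_mid, theta, tasks, w)  # policy correction phase

print("per-task returns:", [round(t(theta), 3) for t in tasks])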

Keyword: Reinforcement Learning Algorithms; Transfer, Domain Adaptation, Multi-Task Learning
Subject Area: Computer Science and Technology; Artificial Intelligence; Other Disciplines of Artificial Intelligence
MOST Discipline Catalogue: Engineering
DOI: https://doi.org/10.1609/aaai.v37i6.25825
URL: View original text
Indexed By: EI
Language: English
WOS Research Area: Machine Learning
WOS Subject: Artificial Intelligence
Is Representative Paper:
Sub-direction Classification: Fundamental Theory of Artificial Intelligence
Planning Direction of the National Key Laboratory: Fundamental and Frontier Theory of Artificial Intelligence
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/52322
Collection: 复杂系统认知与决策实验室_听觉模型与认知计算
Corresponding Author: Bai FS(白丰硕)
Affiliation:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences (CASIA)
3. University of Alberta
4. Université Paris-Saclay
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Bai FS, Zhang HM, Tao TY, et al. PiCor: Multi-Task Deep Reinforcement Learning with Policy Correction[C]//Brian Williams, Sara Bernardini, Yiling Chen, Jennifer Neville. Proceedings of the AAAI Conference on Artificial Intelligence. United States, 2023: 6728-6736.
Files in This Item:
File Name/Size: PiCor final.pdf (1663 KB) | DocType: Conference Paper | Access: Open Access | License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.