Guided Policy Search for Sequential Multitask Learning
Xiong, Fangzhou [1,2]; Sun, Biao [3]; Yang, Xu [1,2]; Qiao, Hong [1,2,3,4,5]; Huang, Kaizhu [6]; Hussain, Amir [7]; Liu, Zhiyong [1,2,4,5]
Source Publication: IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS
ISSN: 2168-2216
Year: 2019
Volume: 49   Issue: 1   Pages: 216-226
Abstract

Policy search in reinforcement learning (RL) is a practical approach that interacts directly with environments in parameter space, but it often faces the dual difficulties of local optima and real-time sample collection. Guided policy search (GPS) is a promising algorithm that handles the sample-collection challenge with trajectory-centric methods and also provides asymptotic local convergence guarantees. In its current form, however, GPS cannot operate in sequential multitask learning scenarios, because its batch-style training requires all training samples to be provided at the start of learning. This hinders its adaptation to real-time applications, where training samples or tasks can arrive randomly. In this paper, GPS is reformulated by adapting elastic weight consolidation (EWC), a recently proposed lifelong-learning method. Specifically, Fisher information is incorporated to impart knowledge from previously learned tasks. The proposed algorithm, termed sequential multitask learning GPS, operates in sequential multitask learning settings and ensures continuous policy learning without catastrophic forgetting. Pendulum and robotic manipulation experiments demonstrate the new algorithm's efficacy in learning control policies from sequentially arriving training samples, with performance comparable to the traditional batch-based GPS algorithm. The proposed algorithm is therefore posited as a new benchmark for the real-time RL and robotics research community.
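The EWC mechanism the abstract describes can be illustrated with a short sketch. The following Python/PyTorch code shows the generic EWC idea only (a diagonal Fisher estimate weighting a quadratic penalty that anchors parameters important to earlier tasks); it is not the authors' implementation, and all names (`fisher_diagonal`, `ewc_penalty`, `anchor`, `lam`, `batches`) are hypothetical.

```python
import torch
import torch.nn as nn

def fisher_diagonal(model: nn.Module, loss_fn, batches):
    """Estimate the diagonal Fisher information as the mean squared
    gradient of the task loss over a set of (input, target) batches."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(batches), 1) for n, f in fisher.items()}

def ewc_penalty(model: nn.Module, fisher, anchor, lam: float):
    """0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2, where theta*
    ("anchor") are the parameters learned on the previous task."""
    total = torch.zeros(())
    for n, p in model.named_parameters():
        total = total + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return 0.5 * lam * total

# Hypothetical usage when a new task arrives in the sequence:
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = fisher_diagonal(model, loss_fn, old_task_batches)
#   loss   = new_task_loss + ewc_penalty(model, fisher, anchor, lam=100.0)
```

In the paper's setting such a penalty would be added to the supervised objective of GPS's policy-optimization step, so that parameters with high Fisher values for earlier tasks resist change while the rest remain free to adapt to the new task; the sketch above shows only the generic regularizer under those assumptions.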

Keywords: Elastic weight consolidation (EWC); guided policy search (GPS); reinforcement learning (RL); sequential multitask learning
DOI: 10.1109/TSMC.2018.2800040
Indexed By: SCI
Language: English
Funding Project: U.K. Engineering and Physical Sciences Research Council [EP/M026981/1]; Strategic Priority Research Program of the Chinese Academy of Science [XDB02080003]; MOST [2015BAK35B01]; MOST [2015BAK35B00]; NSFC [61473236]; NSFC [61702516]; NSFC [91648205]; NSFC [61627808]; NSFC [61210009]; NSFC [61503383]; NSFC [61375005]; NSFC [U1613213]; Key Program Special Fund in XJTLU [KSF-A-01]; Suzhou Science and Technology Program [SZS201613]; Suzhou Science and Technology Program [SYG201712]; Guangdong Science and Technology Department [2016B090910001]; National Key Research and Development Plan of China [2016YFC0300801]; National Key Research and Development Plan of China [2017YFB1300202]
WOS Research Area: Automation & Control Systems; Computer Science
WOS Subject: Automation & Control Systems; Computer Science, Cybernetics
WOS ID: WOS:000454241100019
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics: Cited 10 times (WOS)
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/25642
Collection: State Key Laboratory of Management and Control for Complex Systems_Robot Theory and Application
Corresponding Author: Liu, Zhiyong
Affiliation:
1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Comp & Control, Beijing 100049, Peoples R China
3. Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
4. CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China
5. Chinese Acad Sci, Cloud Comp Ctr, Dongguan 523808, Peoples R China
6. Xian Jiaotong Liverpool Univ, Dept Elect & Elect Engn, Suzhou 215123, Peoples R China
7. Univ Stirling, Sch Nat Sci, Div Comp Sci & Maths, Stirling FK9 4LA, Scotland
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Xiong, Fangzhou, Sun, Biao, Yang, Xu, et al. Guided Policy Search for Sequential Multitask Learning[J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2019, 49(1): 216-226.
APA: Xiong, Fangzhou, Sun, Biao, Yang, Xu, Qiao, Hong, Huang, Kaizhu, ... & Liu, Zhiyong. (2019). Guided Policy Search for Sequential Multitask Learning. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 49(1), 216-226.
MLA: Xiong, Fangzhou, et al. "Guided Policy Search for Sequential Multitask Learning". IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS 49.1 (2019): 216-226.
Files in This Item:
File Name: Guided Policy Search for Sequential Multitask Learning.pdf (1059 KB)
Format: Adobe PDF
DocType: Journal article
Version: Author accepted manuscript
Access: Open access
License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.