Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning
Qiu JY(邱俊彦)1,2; Haidong Zhang2; Yiping Yang2
Source Publication: Connection Science
Year: 2023
Volume: 35, Issue: 1, Pages: 2174078
Abstract

Formulating dialogue policy as a reinforcement learning (RL) task enables a dialogue system to act optimally by interacting with humans. However, typical RL-based methods suffer from challenges such as sparse and delayed rewards. Moreover, with user goals unavailable in real scenarios, the reward estimator cannot generate rewards that reflect action validity and task completion. These issues can significantly slow down and degrade policy learning. In this paper, we present a novel scheduled knowledge distillation framework for dialogue policy learning, which trains a compact student reward estimator by distilling prior knowledge of user goals from a large teacher model. To further improve the stability of dialogue policy learning, we propose to leverage self-paced learning to arrange a meaningful training order for the student reward estimator. Comprehensive experiments on the Microsoft Dialogue Challenge and MultiWOZ datasets indicate that our approach significantly accelerates learning, improving the task-completion success rate by 0.47%–9.01% over several strong baselines.
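The abstract names two ingredients without implementation detail: distilling a teacher's reward estimates into a compact student, and self-paced learning that orders training samples from easy to hard. A minimal illustrative sketch of how these could combine is shown below; the squared-error distillation loss, the binary self-paced weighting, and all function names are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Binary self-paced regime: admit only samples whose current
    # loss is below the pace threshold lam ("easy" samples first).
    return (losses < lam).astype(float)

def distill_step(teacher_r, student_r, lam):
    # Per-sample distillation loss: squared error between the
    # student's reward estimate and the teacher's.
    losses = (student_r - teacher_r) ** 2
    w = self_paced_weights(losses, lam)
    # Average over the currently admitted samples only.
    denom = max(w.sum(), 1.0)
    return float((w * losses).sum() / denom), w
```

In a training loop, lam would be increased each epoch so that progressively harder samples enter the distillation objective, which is one common way to realize a "scheduled" curriculum.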

Keywords: reinforcement learning; dialogue policy learning; curriculum learning; knowledge distillation
Indexed By: SCI
Language: English
IS Representative Paper
Sub-direction Classification: Natural Language Processing
Planning Direction of the National Key Laboratory: Other
Paper associated data
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/56657
Collection: Research Center for Integrated Information Systems > Visual Perception Fusion and Its Applications
Corresponding Author: Qiu JY(邱俊彦)
Affiliation:
1. University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Qiu JY, Haidong Zhang, Yiping Yang. Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning[J]. Connection Science, 2023, 35(1): 2174078.
APA: Qiu JY, Haidong Zhang, & Yiping Yang. (2023). Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning. Connection Science, 35(1), 2174078.
MLA: Qiu JY, et al. "Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning". Connection Science 35.1 (2023): 2174078.
Files in This Item:
File Name/Size DocType Version Access License
SKD.pdf (831KB) | Journal article | Author-accepted manuscript | Open access | CC BY-NC-SA
Related Services
Google Scholar
Similar articles in Google Scholar
[Qiu JY(邱俊彦)]'s Articles
[Haidong Zhang]'s Articles
[Yiping Yang]'s Articles
Baidu academic
Similar articles in Baidu academic
[Qiu JY(邱俊彦)]'s Articles
[Haidong Zhang]'s Articles
[Yiping Yang]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Qiu JY(邱俊彦)]'s Articles
[Haidong Zhang]'s Articles
[Yiping Yang]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.