Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning
Qiu JY(邱俊彦)1,2; Haidong Zhang2; Yiping Yang2
Journal: Connection Science
Year: 2023
Volume 35, Issue 1, Article 2174078
Abstract

Formulating dialogue policy as a reinforcement learning (RL) task enables a dialogue system to act optimally by interacting with humans. However, typical RL-based methods normally suffer from challenges such as sparse and delayed rewards. Besides, with user goals unavailable in real scenarios, the reward estimator is unable to generate rewards that reflect action validity and task completion. These issues may significantly slow down and degrade policy learning. In this paper, we present a novel scheduled knowledge distillation framework for dialogue policy learning, which trains a compact student reward estimator by distilling prior knowledge of user goals from a large teacher model. To further improve the stability of dialogue policy learning, we propose to leverage self-paced learning to arrange a meaningful training order for the student reward estimator. Comprehensive experiments on the Microsoft Dialogue Challenge and MultiWOZ datasets indicate that our approach significantly accelerates learning, and the task-completion success rate improves by 0.47%∼9.01% compared with several strong baselines.
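The core idea described above can be illustrated with a minimal sketch. Note this is not the paper's implementation: the teacher/student models, the hard-threshold self-paced curriculum, and all hyperparameters below are illustrative assumptions; the teacher (which has access to user-goal knowledge) is simulated by a fixed linear map, and the student is a small linear reward estimator distilled toward the teacher's outputs, training on "easy" samples first.

```python
# Illustrative sketch only: teacher/student forms, the hard-threshold
# self-paced scheme, and all hyperparameters are assumptions, not the
# paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: dialogue state-action features and dense teacher rewards.
# The teacher's privileged user-goal knowledge is simulated by w_teacher.
X = rng.normal(size=(200, 8))
w_teacher = rng.normal(size=8)
teacher_reward = X @ w_teacher

w_student = np.zeros(8)            # compact student reward estimator
lr = 0.01                          # gradient step size
lam, lam_growth = 1.0, 1.3         # self-paced threshold and its growth rate

for epoch in range(30):
    pred = X @ w_student
    per_sample_loss = (pred - teacher_reward) ** 2
    # Self-paced selection: distill only from "easy" samples whose loss is
    # below the current threshold; raising the threshold each epoch lets
    # harder samples enter the curriculum later.
    easy = per_sample_loss < lam
    if easy.any():
        grad = 2 * X[easy].T @ (pred[easy] - teacher_reward[easy]) / easy.sum()
        w_student -= lr * grad
    lam *= lam_growth

final_loss = np.mean((X @ w_student - teacher_reward) ** 2)
```

After training, `final_loss` is well below the student's initial distillation loss, i.e. the student has absorbed part of the teacher's reward signal despite never seeing the (simulated) user goal directly.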

Keywords: reinforcement learning; dialogue policy learning; curriculum learning; knowledge distillation
Indexed by: SCI
Language: English
Sub-direction classification: Natural Language Processing
State Key Laboratory planning direction: Other
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/56657
Collection: Research Center for Integrated Information Systems, Visual Perception Fusion and Its Applications
Corresponding author: Qiu JY(邱俊彦)
Affiliations:
1. University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Qiu JY, Haidong Zhang, Yiping Yang. Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning[J]. Connection Science, 2023, 35(1): 2174078.
APA: Qiu JY, Haidong Zhang, & Yiping Yang. (2023). Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning. Connection Science, 35(1), 2174078.
MLA: Qiu JY, et al. "Reward Estimation with Scheduled Knowledge Distillation for Dialogue Policy Learning". Connection Science 35.1 (2023): 2174078.
File: SKD.pdf (831 KB), journal article, author accepted manuscript, open access, license CC BY-NC-SA