Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning
Xiang, Lu1,2; Zhao, Yang1,2; Zhu, Junnan1,2; Zhou, Yu1,2,3; Zong, Chengqing1,2
Journal: KNOWLEDGE-BASED SYSTEMS
ISSN: 0950-7051
Publication date: 2023-01-10
Volume: 259, Pages: 14
Corresponding author: Zhou, Yu (yzhou@nlpr.ia.ac.cn)
Abstract: Dialogue state tracking (DST), a crucial component of the task-oriented dialogue system (TOD), is designed to track the user's goal. Existing DST models mainly focus on monolingual dialogue input, failing to meet the growing need for a TOD to provide multilingual services. Therefore, this paper proposes a novel Zero-shot Language Extension scenario for DST, extending monolingual DST to multilingual DST without extra high-cost dialogue data annotation. In this scenario, the multilingual DST needs only a single shared model to handle multilingual input and generate a unified dialogue state. This setting makes deploying a complete multilingual TOD easy, since the model can be reused by the downstream components of an existing monolingual TOD. Specifically, we achieve the language extension by multi-auxiliary-tasks fine-tuning of multilingual pre-trained models, where five relevant auxiliary tasks are jointly designed: monolingual DST, cross-lingual DST, forward word translation, utterance recovery, and semantic similarity. The extended multilingual DST model can be enhanced through joint optimization with all the auxiliary tasks by capturing multilingual context understanding and cross-lingual alignment characteristics. Comprehensive experiments on the Multilingual WOZ dataset (English -> German and English -> Italian) and the cross-lingual MultiWOZ dataset (English -> Chinese and Chinese -> English) demonstrate the effectiveness and superiority of the proposed method. © 2022 Elsevier B.V. All rights reserved.
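The abstract describes joint optimization over five auxiliary tasks. As a minimal sketch of that idea, the snippet below combines per-task losses into one training objective; the task names are taken from the abstract, but the function name, uniform default weights, and overall structure are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch: combining the five auxiliary-task losses named in the
# abstract into a single joint objective. Weighting scheme is assumed
# (uniform by default), not taken from the paper.

AUX_TASKS = (
    "monolingual_dst",
    "cross_lingual_dst",
    "forward_word_translation",
    "utterance_recovery",
    "semantic_similarity",
)

def joint_loss(task_losses, weights=None):
    """Weighted sum of per-task losses.

    task_losses: dict mapping each auxiliary task name to a scalar loss.
    weights: optional dict of per-task weights; defaults to uniform 1.0.
    """
    if weights is None:
        weights = {task: 1.0 for task in AUX_TASKS}
    missing = set(AUX_TASKS) - set(task_losses)
    if missing:
        raise ValueError(f"missing losses for: {sorted(missing)}")
    return sum(weights[t] * task_losses[t] for t in AUX_TASKS)
```

In an actual fine-tuning loop, each scalar would come from the corresponding task head of the shared multilingual pre-trained model, and the combined value would be backpropagated once per step.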
Keywords: Dialogue state tracking; Zero-shot language extension; Multilingual DST; Pre-trained models; Multi-auxiliary-tasks fine-tuning
DOI: 10.1016/j.knosys.2022.110015
Indexed by: SCI
Language: English
Funding project: National Key R&D Program of China [2020AAA0108600]
Funder: National Key R&D Program of China
WOS research area: Computer Science
WOS category: Computer Science, Artificial Intelligence
WOS ID: WOS:000883002100001
Publisher: ELSEVIER
Citation statistics
Times cited: 2 [WOS]
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/51249
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
3. Zhongke Fanyu Technol Co Ltd, Fanyu AI Lab, Beijing, Peoples R China
First author's affiliation: National Laboratory of Pattern Recognition
Corresponding author's affiliation: National Laboratory of Pattern Recognition
Recommended citation:
GB/T 7714: Xiang, Lu, Zhao, Yang, Zhu, Junnan, et al. Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning[J]. KNOWLEDGE-BASED SYSTEMS, 2023, 259: 14.
APA: Xiang, Lu, Zhao, Yang, Zhu, Junnan, Zhou, Yu, & Zong, Chengqing. (2023). Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning. KNOWLEDGE-BASED SYSTEMS, 259, 14.
MLA: Xiang, Lu, et al. "Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning". KNOWLEDGE-BASED SYSTEMS 259 (2023): 14.
Files in this item:
No files associated with this item.
Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.