Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering
Xiang Li 1,2; Shizhu HE 1,2; Fangyu Lei 1,2; Jun Yang 3; Tianhuang Su 3; Kang Liu 1,2,4; Jun Zhao 1,2
2024-08-11
Conference: The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Conference Dates: 2024.08.11-2024.08.16
Conference Location: Bangkok, Thailand
Abstract

Large Language Models (LLMs) can teach small language models (SLMs) to solve complex reasoning tasks (e.g., mathematical question answering) through Chain-of-Thought Distillation (CoTD). Specifically, CoTD fine-tunes SLMs on rationales generated by LLMs such as ChatGPT. However, CoTD has certain limitations that make it unsuitable for knowledge-intensive multi-hop question answering: 1) SLMs have a very limited capacity for memorizing required knowledge compared to LLMs. 2) SLMs do not possess the same powerful integrated abilities in question understanding and knowledge reasoning as LLMs. To address these limitations, we introduce Decompose-and-Response Distillation (D&R Distillation), which distills two student models, a Decomposer and a Responser, separately. The two models solve a knowledge-intensive multi-hop question through an interactive process of asking and answering subquestions. Our method offers two advantages: 1) SLMs can access external knowledge when addressing subquestions, which provides more comprehensive knowledge for multi-hop questions. 2) By working with simpler subquestions instead of complex CoT reasoning, SLMs face reduced task complexity and require less training data. Experimental results on three knowledge-intensive multi-hop question answering datasets demonstrate that D&R Distillation surpasses previous CoTD methods, even with substantially less training data.
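The abstract describes an interactive loop in which the Decomposer proposes subquestions and the Responser answers them with retrieved knowledge until a final answer emerges. The sketch below illustrates one plausible form of that loop; the function names, the "Final answer:" stop convention, and the hop limit are assumptions for illustration, not the paper's actual implementation.

```python
# A minimal sketch of the Decomposer/Responser interaction loop described in the
# abstract. All names (dr_inference, retrieve), the "Final answer:" stop
# convention, and the max_hops cap are illustrative assumptions, not details
# taken from the paper.
from typing import Callable, List, Tuple

def dr_inference(
    question: str,
    decomposer: Callable[[str, List[Tuple[str, str]]], str],  # distilled SLM: proposes subquestions
    responser: Callable[[str, str], str],                     # distilled SLM: answers subquestions
    retrieve: Callable[[str], str],                           # external knowledge source (e.g., a retriever)
    max_hops: int = 5,
) -> str:
    """Answer a multi-hop question by iteratively asking and answering subquestions."""
    history: List[Tuple[str, str]] = []  # (subquestion, answer) pairs produced so far
    for _ in range(max_hops):
        # The Decomposer proposes the next step from the question and the history so far.
        step = decomposer(question, history)
        # Assumed convention: the Decomposer signals completion with a final-answer prefix.
        if step.startswith("Final answer:"):
            return step.removeprefix("Final answer:").strip()
        # The Responser answers the simpler subquestion using retrieved external
        # knowledge rather than knowledge memorized in the SLM's parameters.
        evidence = retrieve(step)
        history.append((step, responser(step, evidence)))
    # Fallback: return the last intermediate answer if no final answer was emitted.
    return history[-1][1] if history else ""
```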

Seven Major Directions (Sub-direction Classification): Natural Language Processing
State Key Laboratory Planning Direction: Speech and Language Processing
Paper-Associated Dataset Requiring Deposit:
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57446
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems
Corresponding Author: Shizhu HE
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. Guangdong OPPO Mobile Telecommunications Corp., Ltd.
4. Shanghai Artificial Intelligence Laboratory
Recommended Citation (GB/T 7714):
Xiang Li, Shizhu HE, Fangyu Lei, et al. Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering[C], 2024.
Files in This Item:
File Name/Size: D_R_Distillation.pdf (873KB)
Document Type: Conference Paper
Access: Open Access
License: CC BY-NC-SA