MoDE-CoTD: Chain-of-Thought Distillation for Complex Reasoning Tasks with Mixture of Decoupled LoRA-Experts
Xiang Li1,2; Shizhu He1,2; Jiayu Wu1,2; Zhao Yang1,2; Yao Xu1,2; Yang Jun3; Haifeng Liu3; Kang Liu1,2; Jun Zhao1,2
2024-05-20
Conference Name: The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Conference Date: 2024.5.20 - 2024.5.25
Conference Place: Torino (Italy)
Publisher: ELRA and ICCL
Abstract

Chain-of-Thought Distillation (CoTD) aims to distill the Chain-of-Thought (CoT) reasoning ability of large language models (LLMs) into much smaller student models. The core of CoTD is to use a large teacher model to generate rationales and fine-tune smaller student models on them. However, current CoTD methods have the following limitations: 1) Student models are distilled separately on specific reasoning tasks and lack a collaboration mechanism, which prevents reasoning performance from being enhanced through collaboration among various reasoning tasks. 2) Updating the student model's parameters severely harms its CoT reasoning ability on unseen reasoning tasks not included in the distillation process. In this work, we introduce a novel CoT distillation method, MoDE-CoTD, which decouples CoT reasoning abilities from the student model by distilling them into multiple LoRA-Experts while freezing the student model's parameters. The LoRA-Experts are then combined and adapted to handle both seen and unseen reasoning tasks, enabling collaboration among diverse reasoning tasks to further enhance CoT reasoning performance. Experimental results on 14 datasets (including 4 unseen datasets) demonstrate the strength of MoDE-CoTD, with an average accuracy gain of 6.3% on seen datasets and 7.8% on unseen datasets.
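To make the mechanism concrete, the sketch below shows one way a mixture of decoupled LoRA experts can be realized in PyTorch: the student's base projection is frozen, each LoRA expert contributes a low-rank update, and a learned router mixes the experts' outputs per token. This is a minimal illustration under our own assumptions, not the paper's implementation; the names LoRAExpert, MoDELinear, num_experts, and the rank r are hypothetical.

import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """One low-rank adapter: delta(x) = (x A^T) B^T * (alpha / r)."""
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # up-projection, zero-init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.A.T) @ self.B.T * self.scale

class MoDELinear(nn.Module):
    """Frozen base projection plus a softmax-routed mixture of LoRA experts."""
    def __init__(self, base: nn.Linear, num_experts: int = 4, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # the student's weights stay frozen
            p.requires_grad = False
        self.experts = nn.ModuleList(
            LoRAExpert(base.in_features, base.out_features, r)
            for _ in range(num_experts)
        )
        self.router = nn.Linear(base.in_features, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.softmax(self.router(x), dim=-1)                # (..., E)
        deltas = torch.stack([e(x) for e in self.experts], dim=-1)  # (..., d_out, E)
        return self.base(x) + (deltas * gate.unsqueeze(-2)).sum(dim=-1)

# Usage: wrap one projection of the distilled student model.
layer = MoDELinear(nn.Linear(768, 768), num_experts=4)
out = layer(torch.randn(2, 16, 768))  # (batch, seq_len, hidden)
print(out.shape)                      # torch.Size([2, 16, 768])

Because only the adapters and router carry trainable parameters, each reasoning task distills into its own small expert, and experts can later be combined for unseen tasks without touching the frozen student, which matches the decoupling described in the abstract.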

Sub-direction Classification: Natural Language Processing
State Key Laboratory Planning Direction: Speech and Language Processing
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/57448
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems
Corresponding Author: Shizhu He
Affiliation:
1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. Guangdong OPPO Mobile Telecommunications Corp., Ltd.
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Xiang Li, Shizhu He, Jiayu Wu, et al. MoDE-CoTD: Chain-of-Thought Distillation for Complex Reasoning Tasks with Mixture of Decoupled LoRA-Experts[C]. ELRA and ICCL, 2024.
Files in This Item:
File Name/Size: 3178_Paper.pdf (1062 KB)
DocType: Conference Paper
Access: Open Access
License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Xiang Li]'s Articles
[Shizhu He]'s Articles
[Jiayu Wu]'s Articles
Baidu academic
Similar articles in Baidu academic
[Xiang Li]'s Articles
[Shizhu He]'s Articles
[Jiayu Wu]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Xiang Li]'s Articles
[Shizhu He]'s Articles
[Jiayu Wu]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.