Knowledge Commons of Institute of Automation, CAS
Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models
Author | Ma, Chengcheng (1,2)
Journal | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
ISSN | 1051-8215
Publication Date | 2023-09-01
Volume | 33
Issue | 9
Pages | 4616-4629
Abstract | Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization capability in downstream vision tasks with appropriate text prompts. Instead of designing prompts manually, Context Optimization (CoOp) has been recently proposed to learn continuous prompts using task-specific training data. Despite the performance improvements on downstream tasks, several studies have reported that CoOp suffers from the overfitting issue in two aspects: (i) the test accuracy on base classes first improves and then worsens during training; (ii) the test accuracy on novel classes keeps decreasing. However, no existing study explains or mitigates these overfitting problems. In this study, we first explore the cause of overfitting by analyzing the gradient flow. Comparative experiments reveal that CoOp favors generalizable and spurious features in the early and later training stages, respectively, leading to the non-overfitting and overfitting phenomena. Given those observations, we propose Subspace Prompt Tuning (Sub PT) to project the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors during the entire training process, successfully eliminating the overfitting problem. In addition, we equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization ability of the learned prompts onto novel categories beyond the training set, without requiring any image training data. Extensive experiments on 11 classification datasets demonstrate that Sub PT+NFL consistently boosts the performance of CoOp and outperforms the state-of-the-art CoCoOp approach. Experiments on more challenging vision downstream tasks, including open-vocabulary object detection and zero-shot semantic segmentation, also verify the effectiveness of the proposed method. Codes can be found at https://tinyurl.com/mpe64f89.
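The core mechanism the abstract describes (projecting later-stage gradients onto a low-rank subspace spanned by early-stage gradient directions) can be sketched as follows. This is an illustrative toy sketch with NumPy, not the authors' implementation: the gradient dimensions, the number of recorded early steps, and the rank `k` are all made-up values, and `early_grads` stands in for whatever gradient history Sub PT actually records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical early-stage gradient flow: each row is one flattened
# prompt gradient recorded during the early training steps.
early_grads = rng.standard_normal((50, 16))  # 50 steps, 16-dim prompt

# Span a low-rank subspace with the top-k right singular vectors,
# i.e. the leading eigenvectors of the early gradient covariance.
k = 4
_, _, vt = np.linalg.svd(early_grads, full_matrices=False)
basis = vt[:k]  # shape (k, 16); rows are orthonormal

def project(grad, basis):
    """Project a gradient onto the subspace spanned by the rows of `basis`."""
    return basis.T @ (basis @ grad)

# During the rest of training, every back-propagated gradient would be
# replaced by its projection before the optimizer step.
g = rng.standard_normal(16)       # a later-stage gradient
g_proj = project(g, basis)

# Projection is idempotent: the projected gradient already lies in the subspace.
assert np.allclose(project(g_proj, basis), g_proj)
```

The intuition, per the abstract, is that directions dominant in the early gradient flow correspond to generalizable features, so restricting all subsequent updates to that subspace suppresses the spurious-feature directions that emerge later in training.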
Keywords | Vision-language model; prompt tuning; over-fitting; subspace learning; gradient projection
DOI | 10.1109/TCSVT.2023.3245584
Indexed By | SCI
Language | English
Funding Project | National Science Foundation of China [U20B2070]; National Science Foundation of China [61832016]; Beijing Natural Science Foundation [L221013]
Funding Organization | National Science Foundation of China; Beijing Natural Science Foundation
WOS Research Area | Engineering
WOS Subject | Engineering, Electrical & Electronic
WOS Accession Number | WOS:001063316800016
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions) | Multimodal Intelligence
State Key Laboratory Planned Research Direction | Multimodal Collaborative Cognition
Dataset associated with this paper to be deposited | No
Document Type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/53117
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author | Dong, Weiming
Affiliation | 1. Chinese Academy of Sciences, Institute of Automation, National Lab Pattern Recognition, Beijing 100190, China; 2. University of Chinese Academy of Sciences, School of Artificial Intelligence, Beijing 100049, China; 3. Alibaba DAMO Academy, Hangzhou 310024, China; 4. Huawei, Shenzhen 518129, China
Recommended Citation (GB/T 7714) | Ma, Chengcheng, Liu, Yang, Deng, Jiankang, et al. Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(9): 4616-4629.
APA | Ma, Chengcheng, Liu, Yang, Deng, Jiankang, Xie, Lingxi, Dong, Weiming, & Xu, Changsheng. (2023). Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(9), 4616-4629.
MLA | Ma, Chengcheng, et al. "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.9 (2023): 4616-4629.
Files in This Item:
File Name/Size | Document Type | Version | Access | License
Understanding_and_Mi(1644KB) | Journal article | Author accepted manuscript | Open access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.