Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models
Ma, Chengcheng1,2; Liu, Yang3; Deng, Jiankang4; Xie, Lingxi4; Dong, Weiming1; Xu, Changsheng1
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
ISSN: 1051-8215
Publication Date: 2023-09-01
Volume: 33, Issue: 9, Pages: 4616-4629
Abstract

Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization capability on downstream vision tasks given appropriate text prompts. Instead of designing prompts manually, Context Optimization (CoOp) has recently been proposed to learn continuous prompts from task-specific training data. Despite the performance improvements on downstream tasks, several studies have reported that CoOp suffers from overfitting in two respects: (i) test accuracy on base classes first improves and then worsens during training; (ii) test accuracy on novel classes keeps decreasing. However, no existing study has explained or mitigated these overfitting problems. In this study, we first explore the cause of overfitting by analyzing the gradient flow. Comparative experiments reveal that CoOp favors generalizable features in the early training stage and spurious features in the later stage, leading to the non-overfitting and overfitting phenomena, respectively. Given these observations, we propose Subspace Prompt Tuning (SubPT), which projects the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors throughout the entire training process, successfully eliminating the overfitting problem. In addition, we equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization of the learned prompts to novel categories beyond the training set, without requiring any image training data. Extensive experiments on 11 classification datasets demonstrate that SubPT+NFL consistently boosts the performance of CoOp and outperforms the state-of-the-art CoCoOp approach. Experiments on more challenging downstream vision tasks, including open-vocabulary object detection and zero-shot semantic segmentation, also verify the effectiveness of the proposed method. Code can be found at https://tinyurl.com/mpe64f89.
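The gradient-projection idea behind SubPT can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code: it records the gradients of the learnable prompt vectors during a number of early training steps, builds a low-rank basis from their top singular vectors, and thereafter projects every new gradient onto that subspace before the optimizer step. All names and hyperparameters here (SubspaceGradProjector, rank, collect_steps, ctx) are assumptions for illustration.

import torch

# Sketch: low-rank gradient projection for prompt tuning (assumed setup:
# a learnable prompt tensor `ctx`; `rank` retained eigen-directions).
class SubspaceGradProjector:
    def __init__(self, rank):
        self.rank = rank
        self.grads = []    # flattened early-stage gradients
        self.basis = None  # (rank, num_params) matrix with orthonormal rows

    def collect(self, grad):
        # Store a flattened copy of an early-stage gradient.
        self.grads.append(grad.detach().flatten().clone())

    def build_basis(self):
        # The top right-singular vectors of the stacked gradient matrix
        # approximate the leading eigenvectors of the early gradient flow.
        G = torch.stack(self.grads)                    # (steps, num_params)
        _, _, Vh = torch.linalg.svd(G, full_matrices=False)
        self.basis = Vh[: self.rank]                   # (rank, num_params)

    def project(self, grad):
        # g <- B^T (B g): keep only the component inside span(basis).
        flat = grad.flatten()
        return (self.basis.T @ (self.basis @ flat)).view_as(grad)

# Illustrative use inside a standard training loop:
# projector = SubspaceGradProjector(rank=4)
# loss.backward()
# if step < collect_steps:                  # early stage: record gradients
#     projector.collect(ctx.grad)
#     if step == collect_steps - 1:
#         projector.build_basis()
# else:                                     # later stage: project gradients
#     ctx.grad = projector.project(ctx.grad)
# optimizer.step()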

Keywords: Vision-language model; prompt tuning; over-fitting; subspace learning; gradient projection
DOI: 10.1109/TCSVT.2023.3245584
Indexed By: SCI
Language: English
Funding Project: National Science Foundation of China [U20B2070]; National Science Foundation of China [61832016]; Beijing Natural Science Foundation [L221013]
Funder: National Science Foundation of China; Beijing Natural Science Foundation
WOS Research Area: Engineering
WOS Category: Engineering, Electrical & Electronic
WOS Accession Number: WOS:001063316800016
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions): Multimodal Intelligence
State Key Laboratory Planned Research Direction: Multimodal Collaborative Cognition
Associated Dataset Requiring Deposit:
Citation Statistics
Times Cited (WOS): 1
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/53117
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author: Dong, Weiming
Affiliations: 1. Chinese Academy of Sciences, Institute of Automation, National Laboratory of Pattern Recognition, Beijing 100190, China
2. University of Chinese Academy of Sciences, School of Artificial Intelligence, Beijing 100049, China
3. Alibaba DAMO Academy, Hangzhou 310024, China
4. Huawei, Shenzhen 518129, China
Recommended Citation
GB/T 7714
Ma, Chengcheng, Liu, Yang, Deng, Jiankang, et al. Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(9): 4616-4629.
APA Ma, Chengcheng, Liu, Yang, Deng, Jiankang, Xie, Lingxi, Dong, Weiming, & Xu, Changsheng. (2023). Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(9), 4616-4629.
MLA Ma, Chengcheng, et al. "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.9 (2023): 4616-4629.
Files in This Item:
File Name/Size: Understanding_and_Mitigating_Overfitting_in_Prompt_Tuning_for_Vision-Language_Models.pdf (1644KB)
Document Type: Journal Article
Version: Author's Accepted Manuscript
Open Access Type: Open Access
License: CC BY-NC-SA
File Name: Understanding_and_Mitigating_Overfitting_in_Prompt_Tuning_for_Vision-Language_Models.pdf
Format: Adobe PDF