Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning
Li Yang (1,2); Yan Xu (3); Chunfeng Yuan (1); Wei Liu (1); Bing Li (1); Weiming Hu (1,2,4)
2022-06
Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
Conference date: 2022-06
Conference location: New Orleans, Louisiana
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Abstract

Visual grounding is a task to locate the target indicated by a natural language expression. Existing methods extend the generic object detection framework to this problem. They base the visual grounding on the features from pre-generated proposals or anchors, and fuse these features with the text embeddings to locate the target mentioned by the text. However, modeling the visual features from these predefined locations may fail to fully exploit the visual context and attribute information in the text query, which limits their performance. In this paper, we propose a transformer-based framework for accurate visual grounding by establishing text-conditioned discriminative features and performing multi-stage cross-modal reasoning. Specifically, we develop a visual-linguistic verification module to focus the visual features on regions relevant to the textual descriptions while suppressing the unrelated areas. A language-guided feature encoder is also devised to aggregate the visual contexts of the target object to improve the object's distinctiveness. To retrieve the target from the encoded visual features, we further propose a multi-stage cross-modal decoder to iteratively speculate on the correlations between the image and text for accurate target localization. Extensive experiments on five widely used datasets validate the efficacy of our proposed components and demonstrate state-of-the-art performance.
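The abstract describes the visual-linguistic verification module only at a high level. As a rough illustration, a minimal sketch of such a text-conditioned verification step is given below, assuming a PyTorch-style implementation; the class name, projection layers, and scoring rule are illustrative assumptions and do not reproduce the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualLinguisticVerification(nn.Module):
    """Illustrative sketch: score each visual token by its relevance to the
    text query, then use the score to emphasize related regions while
    suppressing unrelated ones."""

    def __init__(self, vis_dim: int, txt_dim: int, embed_dim: int = 256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, embed_dim)  # visual features -> joint space
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # word embeddings -> joint space

    def forward(self, vis_tokens, txt_tokens):
        # vis_tokens: (B, Nv, vis_dim) flattened image features
        # txt_tokens: (B, Nt, txt_dim) word-level text embeddings
        v = F.normalize(self.vis_proj(vis_tokens), dim=-1)
        t = F.normalize(self.txt_proj(txt_tokens), dim=-1)
        sim = torch.bmm(v, t.transpose(1, 2))        # (B, Nv, Nt) token-word similarity
        score = sim.max(dim=-1).values.sigmoid()     # (B, Nv) per-token verification score
        score = score.unsqueeze(-1)                  # (B, Nv, 1)
        return vis_tokens * score, score             # modulated features, scores


# Usage sketch with dummy tensors (shapes only, no pretrained backbones)
vlv = VisualLinguisticVerification(vis_dim=256, txt_dim=768)
vis = torch.randn(2, 400, 256)   # e.g., a 20x20 feature map flattened to 400 tokens
txt = torch.randn(2, 12, 768)    # e.g., 12 word embeddings from a text encoder
modulated, scores = vlv(vis, txt)
print(modulated.shape, scores.shape)  # (2, 400, 256), (2, 400, 1)

In the paper's pipeline, such modulated features would then feed the language-guided context encoder and the multi-stage cross-modal decoder; the sketch above only shows the gating idea, not those later stages.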

Seven major directions — sub-direction classification: Object detection, tracking, and recognition
State Key Laboratory planning direction classification: Multimodal collaborative cognition
Paper-associated dataset requiring deposit: (not specified)
Document type: Conference paper
Item identifier: http://ir.ia.ac.cn/handle/173211/52140
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Video Content Security
Corresponding author: Chunfeng Yuan
Author affiliations:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. The Chinese University of Hong Kong
4. CAS Center for Excellence in Brain Science and Intelligence Technology
First author's affiliation: National Laboratory of Pattern Recognition
Corresponding author's affiliation: National Laboratory of Pattern Recognition
Recommended citation (GB/T 7714):
Li Yang, Yan Xu, Chunfeng Yuan, et al. Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning[C]. Institute of Electrical and Electronics Engineers (IEEE), 2022.
Files in this item:
File name / size: Yang_Improving_Visual_Grounding_With_Visual-Linguistic_Verification_and_Iterative_Reasoning_CVPR_2022_paper.pdf (2060 KB)
Document type: Conference paper
Format: Adobe PDF
Access type: Open Access
License: CC BY-NC-SA
