Cross-Modality Synergy Network for Referring Expression Comprehension and Segmentation
Li, Qianzhong1,2; Zhang, Yujia1; Sun, Shiying1; Wu, Jinting1,2; Zhao, Xiaoguang1; Tan, Min1
Journal: Neurocomputing
ISSN: 0925-2312
Publication date: 2022-01-07
Volume: 467, Issue: /, Pages: 99-114
Abstract

Referring expression comprehension and segmentation aim to locate and segment a referred instance in an image according to a natural language expression. However, existing methods tend to ignore the interaction between visual and language modalities for visual feature learning, and establishing a synergy between the visual and language modalities remains a considerable challenge. To tackle the above problems, we propose a novel end-to-end framework, Cross-Modality Synergy Network (CMS-Net), to address the two tasks jointly. In this work, we propose an attention-aware representation learning module to learn modal representations for both images and expressions. A language self-attention submodule is proposed in this module to learn expression representations by leveraging the intra-modality relations, and a language-guided channel-spatial attention submodule is introduced to obtain the language aware visual representations under language guidance, which helps the model pay more attention to the referent-relevant regions in the images and relieve background interference. Then, we design a cross-modality synergy module to establish the inter-modality relations for modality fusion. Specifically, a language-visual similarity is obtained at each position of the visual feature map, and the synergy is achieved between the two modalities in both semantic and spatial dimensions. Furthermore, we propose a multi-scale feature fusion module with a selective strategy to aggregate the important information from multi-scale features, yielding target results. We conduct extensive experiments on four challenging benchmarks, and our framework achieves significant performance gains over state-of-the-art methods.
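To make the two attention ideas named in the abstract more concrete, the following is a minimal PyTorch sketch of (a) language-guided channel-spatial attention over a visual feature map and (b) a per-position language-visual similarity of the kind used for cross-modality synergy. All module names, tensor shapes, and gating choices here (LanguageGuidedAttention, channel_gate, spatial_gate, language_visual_similarity) are illustrative assumptions and do not reproduce the authors' CMS-Net implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageGuidedAttention(nn.Module):
    """Re-weights a visual feature map along channel and spatial dimensions
    under language guidance, emphasizing referent-relevant regions."""
    def __init__(self, vis_channels, lang_dim):
        super().__init__()
        # Channel attention predicted from the sentence embedding.
        self.channel_gate = nn.Linear(lang_dim, vis_channels)
        # Spatial attention predicted from visual features concatenated with tiled language features.
        self.spatial_gate = nn.Conv2d(vis_channels + lang_dim, 1, kernel_size=1)

    def forward(self, vis_feat, lang_feat):
        # vis_feat: (B, C, H, W) visual feature map; lang_feat: (B, D) sentence embedding.
        b, c, h, w = vis_feat.shape
        # Which visual channels the expression cares about.
        ch_attn = torch.sigmoid(self.channel_gate(lang_feat)).view(b, c, 1, 1)
        vis_feat = vis_feat * ch_attn
        # Tile the language vector over every spatial position and predict a spatial mask.
        lang_map = lang_feat.view(b, -1, 1, 1).expand(b, lang_feat.size(1), h, w)
        sp_attn = torch.sigmoid(self.spatial_gate(torch.cat([vis_feat, lang_map], dim=1)))
        return vis_feat * sp_attn  # language-aware visual representation

def language_visual_similarity(vis_feat, lang_feat):
    # Cosine similarity between the sentence embedding (B, C) and every spatial
    # position of the visual map (B, C, H, W), giving one score per position.
    b, c, h, w = vis_feat.shape
    vis = F.normalize(vis_feat.view(b, c, -1), dim=1)   # (B, C, H*W)
    lang = F.normalize(lang_feat, dim=1).unsqueeze(1)   # (B, 1, C)
    return torch.bmm(lang, vis).view(b, 1, h, w)        # (B, 1, H, W) similarity map

# Example usage with toy shapes:
if __name__ == "__main__":
    vis = torch.randn(2, 256, 20, 20)
    lang = torch.randn(2, 512)
    attn = LanguageGuidedAttention(vis_channels=256, lang_dim=512)
    fused = attn(vis, lang)                              # (2, 256, 20, 20)
    sim = language_visual_similarity(fused, torch.randn(2, 256))
    print(fused.shape, sim.shape)

The sketch only shows the shape-level flow; in the paper the similarity map and the language-aware visual features are further combined across multiple scales before producing the box and mask outputs.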

Keywords: Referring expression comprehension; Referring expression segmentation; Cross-modality synergy; Attention mechanism
DOI: 10.1016/j.neucom.2021.09.066
Indexed by: SCI
Language: English
Funding projects: National Key Research and Development Project of China [2019YFB1310601]; National Key R&D Program of China [2017YFC0820203-03]; National Natural Science Foundation of China [62103410]
Funders: National Key Research and Development Project of China; National Key R&D Program of China; National Natural Science Foundation of China
WOS research area: Computer Science
WOS category: Computer Science, Artificial Intelligence
WOS accession number: WOS:000710121100009
Publisher: ELSEVIER
Subfield classification: Multimodal Intelligence
Citation statistics
Times cited (WOS): 11
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/46310
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Advanced Robotics
Corresponding author: Zhang, Yujia
Affiliations:
1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
First author's affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author's affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Li, Qianzhong, Zhang, Yujia, Sun, Shiying, et al. Cross-Modality Synergy Network for Referring Expression Comprehension and Segmentation[J]. Neurocomputing, 2022, 467: 99-114.
APA: Li, Qianzhong, Zhang, Yujia, Sun, Shiying, Wu, Jinting, Zhao, Xiaoguang, & Tan, Min. (2022). Cross-Modality Synergy Network for Referring Expression Comprehension and Segmentation. Neurocomputing, 467, 99-114.
MLA: Li, Qianzhong, et al. "Cross-Modality Synergy Network for Referring Expression Comprehension and Segmentation." Neurocomputing 467 (2022): 99-114.
Files in this item:
File name/size: Cross-modality synergy network for referring expression comprehension and segmentation.pdf (4555KB)
Format: Adobe PDF
Document type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA
 
