Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features
Du CD (杜长德)¹; Fu KC (付铠成)¹; Li JP (李劲鹏)²; He HG (何晖光)¹
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2023
Pages: 1-17
Abstract

Decoding human visual neural representations is a challenging task of great scientific significance for revealing vision-processing mechanisms and developing brain-like intelligent machines. Most existing methods have difficulty generalizing to novel categories for which no corresponding neural data are available for training. The two main reasons are 1) the under-exploitation of the multimodal semantic knowledge underlying the neural data and 2) the small amount of paired (stimulus-response) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual, and linguistic features via multimodal deep generative models. Specifically, we leverage the Mixture-of-Products-of-Experts formulation to infer a latent code that enables a coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency in the case of limited brain activity data, we exploit both intra- and inter-modality mutual information maximization regularization terms. In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate the visual and textual features obtained from extra categories. Finally, we construct three trimodal matching datasets, and the extensive experiments lead to some interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models using the combination of visual and linguistic features perform much better than those using either of them alone; 3) visual perception may be accompanied by linguistic influences to represent the semantics of visual stimuli.
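The Mixture-of-Products-of-Experts (MoPoE) formulation mentioned in the abstract combines unimodal Gaussian posteriors in two steps: each non-empty subset of modalities is fused with a closed-form product of Gaussian experts, and the joint posterior is a uniform mixture over those subset products. The sketch below is an illustrative NumPy reconstruction under the standard Gaussian-expert assumption; it is not the authors' code, and all function names are hypothetical.

```python
import itertools
import numpy as np

def product_of_experts(mus, logvars):
    # Product of Gaussian experts has a closed form:
    # precision-weighted mean and summed precision.
    precisions = [np.exp(-lv) for lv in logvars]
    prec_sum = sum(precisions)
    var = 1.0 / prec_sum
    mu = var * sum(p * m for p, m in zip(precisions, mus))
    return mu, np.log(var)

def mopoe_posterior(unimodal_params):
    """Mixture-of-Products-of-Experts over all non-empty modality subsets.

    unimodal_params: dict mapping a modality name (e.g. "brain",
    "visual", "text") to its (mu, logvar) posterior parameters.
    Returns the list of (mu, logvar) mixture components; the mixture
    weights are uniform, so they are left implicit.
    """
    names = list(unimodal_params)
    components = []
    for r in range(1, len(names) + 1):
        for subset in itertools.combinations(names, r):
            mus = [unimodal_params[n][0] for n in subset]
            lvs = [unimodal_params[n][1] for n in subset]
            components.append(product_of_experts(mus, lvs))
    return components
```

With three modalities this yields 2³ − 1 = 7 components; subsets that omit the brain modality are what let the model be trained on visual/textual features from extra categories that have no neural recordings.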

URL: View full text
Indexed by: SCI
Language: English
Representative paper:
Seven research directions — sub-direction: Brain-computer interface
State Key Laboratory planned direction: Cognitive mechanisms and brain-inspired learning
Associated dataset requiring deposit:
Document type: Journal article
Item identifier: http://ir.ia.ac.cn/handle/173211/51626
Collection: Laboratory of Brain Atlas and Brain-Inspired Intelligence — Neural Computation and Brain-Computer Interaction
Corresponding author: He HG (何晖光)
Author affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. Ningbo HwaMei Hospital, UCAS
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Du CD, Fu KC, Li JP, et al. Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023: 1-17.
APA: Du CD, Fu KC, Li JP, & He HG. (2023). Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-17.
MLA: Du CD, et al. "Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023): 1-17.
Files in this item:
TPAMI2023_Decoding_V (4669 KB) — Document type: Journal article; Version: Author-accepted manuscript; Access: Open access; License: CC BY-NC-SA
File name: TPAMI2023_Decoding_Visual_Neural_Representations_by_Multimodal_Learning_of_Brain-Visual-Linguistic_Features.pdf
Format: Adobe PDF

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.