Associative Multichannel Autoencoder for Multimodal Word Representation
Wang, Shaonan; Zhang, Jiajun; Zong, Chengqing
2018
Conference: EMNLP
Conference Date: 2018-11
Conference Venue: Brussels, Belgium
Abstract

In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs. Inspired by the reconstructive and associative nature of human memory, we propose a novel associative multichannel autoencoder (AMA). Our model first learns the associations between textual and perceptual modalities, so as to predict the missing perceptual information of concepts. Then the textual and predicted perceptual representations are fused through reconstructing their original and associated embeddings. Using a gating mechanism, our model assigns different weights to each modality according to the different concepts. Results on six benchmark concept similarity tests show that the proposed method significantly outperforms strong unimodal baselines and state-of-the-art multimodal models.
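The concept-dependent gating fusion described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the embedding dimension, the randomly initialized gate parameters `W_g` and `b_g`, and all variable names are assumptions, and in the paper the gate would be trained jointly with the autoencoder rather than fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical embedding dimension for both channels.
d = 8
text_emb = rng.normal(size=d)    # textual embedding of a concept
visual_emb = rng.normal(size=d)  # (predicted) perceptual embedding

# Gate parameters; random here, learned in the actual model.
W_g = rng.normal(size=(d, 2 * d))
b_g = np.zeros(d)

# Concept-dependent gate: a weight in (0, 1) per dimension,
# computed from both modalities of this particular concept.
g = sigmoid(W_g @ np.concatenate([text_emb, visual_emb]) + b_g)

# Fused multimodal representation: convex combination of the channels.
fused = g * text_emb + (1.0 - g) * visual_emb
```

Because the gate is a function of the concept's own embeddings, concrete concepts can lean on the perceptual channel while abstract ones lean on the textual channel.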

Language: English
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/23198
Collection: National Laboratory of Pattern Recognition, Natural Language Processing
Affiliation: Institute of Automation, Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Wang, Shaonan,Zhang, Jiajun,Zong, Chengqing. Associative Multichannel Autoencoder for Multimodal Word Representation[C],2018.
Files in This Item:
No files are associated with this item.
Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.