Knowledge-driven Egocentric Multimodal Activity Recognition
Authors: Huang, Yi (1,2,3); Yang, Xiaoshan (1,2,3); Gao, Junyu (1,2,3); Sang, Jitao (3,4,5); Xu, Changsheng (1,2,3)
Journal: ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
ISSN: 1551-6857
Publication Date: 2020-12-01
Volume: 16; Issue: 4; Pages: 21
Abstract

Recognizing activities from egocentric multimodal data collected by wearable cameras and sensors is gaining interest, as multimodal methods benefit from the complementarity of different modalities. However, since high-dimensional videos contain rich high-level semantic information while low-dimensional sensor signals describe simple motion patterns of the wearer, the large modality gap between the videos and the sensor signals raises a challenge for fusing the raw data. Moreover, the lack of large-scale egocentric multimodal datasets, due to the cost of data collection and annotation, poses another challenge for employing complex deep learning models. To jointly address these two challenges, we propose a knowledge-driven multimodal activity recognition framework that exploits external knowledge to fuse multimodal data and reduce the dependence on large-scale training samples. Specifically, we design a dual-GCLSTM (Graph Convolutional LSTM) and a multi-layer GCN (Graph Convolutional Network) to collectively model the relations among activities and intermediate objects. The dual-GCLSTM is designed to fuse temporal multimodal features with top-down relation-aware guidance. In addition, we apply a co-attention mechanism to adaptively attend to the features of different modalities at different timesteps. The multi-layer GCN aims to learn relation-aware classifiers of activity categories. Experimental results on three publicly available egocentric multimodal datasets show the effectiveness of the proposed model.
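To make the two mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' code: a per-timestep co-attention that adaptively weights video and sensor features, and a single graph-convolution layer of the kind stacked to learn relation-aware classifiers. All module names, dimensions, and fusion details are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionFusion(nn.Module):
    """Adaptively weights video vs. sensor features at every timestep."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each modality's feature vector

    def forward(self, video_feats, sensor_feats):
        # Both inputs: (batch, time, dim); assumes sensor features have already
        # been projected into the same dimension as the video features.
        stacked = torch.stack([video_feats, sensor_feats], dim=2)  # (B, T, 2, D)
        weights = F.softmax(self.score(stacked), dim=2)            # (B, T, 2, 1)
        return (weights * stacked).sum(dim=2)                      # (B, T, D)

class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats, adj_norm):
        # node_feats: (num_nodes, in_dim); adj_norm: (num_nodes, num_nodes),
        # a normalized adjacency encoding activity/object relations drawn
        # from the external knowledge graph.
        return F.relu(adj_norm @ self.proj(node_feats))

if __name__ == "__main__":
    fusion = CoAttentionFusion(dim=256)
    fused = fusion(torch.randn(8, 30, 256), torch.randn(8, 30, 256))
    print(fused.shape)  # torch.Size([8, 30, 256])

    gcn = GCNLayer(in_dim=300, out_dim=256)
    adj = torch.softmax(torch.randn(40, 40), dim=1)  # stand-in normalized adjacency
    classifiers = gcn(torch.randn(40, 300), adj)     # one vector per category node
    print(classifiers.shape)  # torch.Size([40, 256])
```

In the paper's framework the fused temporal features would feed the dual-GCLSTM and the stacked GCN outputs would serve as relation-aware classifiers over activity categories; this sketch only isolates the two building blocks.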

Keywords: Egocentric videos; wearable sensors; graph neural networks
DOI: 10.1145/3409332
WOS Keywords: 1ST-PERSON VISION; VIDEOS
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2018AAA0100604]; National Natural Science Foundation of China [61720106006]; National Natural Science Foundation of China [62072455]; National Natural Science Foundation of China [61702511]; National Natural Science Foundation of China [61751211]; National Natural Science Foundation of China [61620106003]; National Natural Science Foundation of China [61532009]; National Natural Science Foundation of China [U1836220]; National Natural Science Foundation of China [U1705262]; National Natural Science Foundation of China [61872424]; Key Research Program of Frontier Sciences of CAS [QYZDJSSWJSC039]; Research Program of National Laboratory of Pattern Recognition [Z-2018007]
Funder: National Key Research and Development Program of China; National Natural Science Foundation of China; Key Research Program of Frontier Sciences of CAS; Research Program of National Laboratory of Pattern Recognition
WOS Research Area: Computer Science
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods
WOS ID: WOS:000614096700017
Publisher: ASSOC COMPUTING MACHINERY
Sub-direction Classification (seven major research directions): Multimodal Intelligence
State Key Laboratory Planned Research Direction: Multimodal Collaborative Cognition
Has Associated Dataset Requiring Deposit:
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/42874
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Multimedia Computing
Corresponding Author: Huang, Yi
Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, 95 Zhongguancun East Rd, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, 95 Zhongguancun East Rd, Beijing, Peoples R China
3. Peng Cheng Lab, Shenzhen, Peoples R China
4. Beijing Jiaotong Univ, Sch Comp & Informat Technol, 506 Room 9 Teaching Bldg, Beijing, Peoples R China
5. Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, 506 Room 9 Teaching Bldg, Beijing, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714: Huang, Yi, Yang, Xiaoshan, Gao, Junyu, et al. Knowledge-driven Egocentric Multimodal Activity Recognition[J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2020, 16(4): 21.
APA: Huang, Yi, Yang, Xiaoshan, Gao, Junyu, Sang, Jitao, & Xu, Changsheng. (2020). Knowledge-driven Egocentric Multimodal Activity Recognition. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 16(4), 21.
MLA: Huang, Yi, et al. "Knowledge-driven Egocentric Multimodal Activity Recognition". ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 16.4 (2020): 21.
Files in This Item:
File Name: Knowledge-driven Egocentric Multimodal Activity Recognition.pdf (1875 KB)
Format: Adobe PDF
Document Type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA