Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data
Du Changde [1,2]; Du Changying [3]; Wang Hao [3]; Li Jinpeng [1,2]; Zheng Wei-Long [4]; Lu Bao-Liang [4]; He Huiguang [1,2,5]
2018
Conference Name: ACM Multimedia Conference
Conference Date: October 22–26, 2018
Conference Place: Seoul, Republic of Korea
Abstract

Emotion recognition faces three challenges. First, it is difficult to recognize a person's emotional state from a single modality alone. Second, manually annotating emotional data is expensive. Third, emotional data often suffer from missing modalities due to unforeseeable sensor malfunctions or configuration issues. In this paper, we address all three problems under a novel multi-view deep generative framework. Specifically, we model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework learns a joint deep representation from multiple modalities and simultaneously evaluates the importance of each modality. To address the scarcity of labeled data, we extend our multi-view model to the semi-supervised setting by casting semi-supervised classification as a specialized missing-data imputation task. To handle missing modalities, we further extend the semi-supervised multi-view model to incomplete data, treating a missing view as a latent variable that is integrated out during inference. In this way, the overall framework can exploit all available data (labeled and unlabeled, complete and incomplete) to improve its generalization ability. Experiments on two real multi-modal emotion datasets demonstrate the superiority of our framework.
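The shared-latent-space idea from the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the linear "encoders", the two modality names (`eeg`, `eye`), the dimensions, and the fixed mixture weights are all illustrative assumptions. It only shows how a Gaussian-mixture posterior over a shared latent variable combines modality-specific encoders, and how the mixture renormalises when a modality is missing.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, d_eeg, d_eye = 2, 4, 3

# Hypothetical toy "encoders": each modality m maps its observation x_m to
# the mean and diagonal variance of a Gaussian over the shared latent z.
def encode(x, W_mu, W_var):
    mu = W_mu @ x
    var = np.exp(W_var @ x)  # exp keeps the variance positive
    return mu, var

W = {
    "eeg": (rng.normal(size=(latent_dim, d_eeg)),
            rng.normal(scale=0.1, size=(latent_dim, d_eeg))),
    "eye": (rng.normal(size=(latent_dim, d_eye)),
            rng.normal(scale=0.1, size=(latent_dim, d_eye))),
}
x = {"eeg": rng.normal(size=d_eeg), "eye": rng.normal(size=d_eye)}

# Per-modality mixture weights (learned jointly in the paper; fixed here).
pi = {"eeg": 0.6, "eye": 0.4}

def sample_z(x, available, rng):
    """Sample from q(z|x) = sum_m pi_m N(z; mu_m, var_m) over observed views."""
    mods = list(available)
    w = np.array([pi[m] for m in mods])
    w /= w.sum()  # renormalise over the modalities actually observed
    m = rng.choice(mods, p=w)  # pick a mixture component by its weight
    mu, var = encode(x[m], *W[m])
    return mu + np.sqrt(var) * rng.normal(size=latent_dim)

# With a modality missing, the mixture simply renormalises over what remains,
# mirroring how the paper integrates out unobserved views during inference.
z_full = sample_z(x, ["eeg", "eye"], rng)
z_partial = sample_z(x, ["eeg"], rng)
print(z_full.shape, z_partial.shape)  # both (2,)
```

The key design point the sketch mirrors is that each modality contributes its own Gaussian component, so dropping a view changes only the mixture weights, not the model structure.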

Document Type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/23626
Collection: Research Center for Brain-Inspired Intelligence / Neural Computation and Brain-Computer Interaction
Corresponding Author: He Huiguang
Affiliation: 1. Research Center for Brain-Inspired Intelligence & NLPR, CASIA, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100049, China
3. 360 Search Lab, Beijing, China
4. Department of Computer Science and Engineering, SJTU, Shanghai, China
5. Center for Excellence in Brain Science and Intelligence Technology, CAS, Beijing, China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Du Changde,Du Changying,Wang Hao,et al. Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data[C],2018.
Files in This Item:
File Name/Size: ACMMM_2018_Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data.pdf (1217 KB)
DocType: Conference paper | Access: Open Access | License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Du Changde]'s Articles
[Du Changying]'s Articles
[Wang Hao]'s Articles
Baidu academic
Similar articles in Baidu academic
[Du Changde]'s Articles
[Du Changying]'s Articles
[Wang Hao]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Du Changde]'s Articles
[Du Changying]'s Articles
[Wang Hao]'s Articles
File name: ACMMM_2018_Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data.pdf
Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.