Proceedings of the 1st International on Multimodal Sentiment Analysis in Real-life Media Challenge and Workshop
Conference date
12-16 October, 2020
Venue
Seattle, United States
Abstract
Automatic perception and understanding of human emotion or
sentiment has a wide range of applications and has attracted increasing attention in recent years. The Multimodal Sentiment Analysis
in Real-life Media (MuSe) 2020 challenge provides a testbed for recognizing human emotion or sentiment from multiple modalities (audio,
video, and text) in the wild. In this paper, we present our
solutions to the MuSe-Wild sub-challenge of MuSe 2020. The goal
of this sub-challenge is to perform continuous emotion (arousal
and valence) prediction on a car review database, MuSe-CaR. To
this end, we first extract both handcrafted features and deep representations from multiple modalities. Then, we utilize the Long
Short-Term Memory (LSTM) recurrent neural network as well as
the self-attention mechanism to model the complex temporal dependencies in the sequence. The Concordance Correlation Coefficient
(CCC) loss is employed to guide the model to learn local variations
and the global trend of emotion simultaneously. Finally, two fusion
strategies, early fusion and late fusion, are adopted to further boost
the model’s performance by exploiting complementary information
from different modalities. Our proposed method achieves CCC of
0.4726 and 0.5996 for arousal and valence respectively on the test
set, which outperforms the baseline system with corresponding
CCC of 0.2834 and 0.2431.
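The CCC metric and loss mentioned above follow Lin's standard concordance correlation coefficient. As a minimal illustration (the function name `ccc_loss` and the NumPy implementation are ours, not taken from the paper), the loss 1 − CCC penalizes both poor correlation and mean/scale mismatch between predictions and gold annotations:

```python
import numpy as np

def ccc_loss(pred, gold):
    """Return 1 - CCC between a prediction sequence and gold annotations.

    CCC = 2*cov(pred, gold) / (var(pred) + var(gold) + (mean(pred) - mean(gold))^2)
    Perfect agreement gives loss 0; perfect negative agreement gives loss 2.
    """
    pred = np.asarray(pred, dtype=float)
    gold = np.asarray(gold, dtype=float)
    mu_p, mu_g = pred.mean(), gold.mean()
    var_p, var_g = pred.var(), gold.var()
    cov = ((pred - mu_p) * (gold - mu_g)).mean()
    ccc = 2.0 * cov / (var_p + var_g + (mu_p - mu_g) ** 2)
    return 1.0 - ccc
```

Because the mean and variance terms couple all time steps, minimizing this loss encourages the model to match both local variations and the global trend of the emotion trace, as the abstract notes.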
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China