Knowledge Commons of Institute of Automation,CAS
A novel transformer autoencoder for multi-modal emotion recognition with incomplete data
Cheng, Cheng1; Liu, Wenzhe2; Fan, Zhaoxin3; Feng, Lin1; Jia, Ziyu4
Journal | NEURAL NETWORKS
ISSN | 0893-6080 |
Publication Date | 2024-04-01
Volume | 172
Pages | 12
Corresponding Author | Feng, Lin (fenglin@dlut.edu.cn)
Abstract | Multi-modal signals have become essential data for emotion recognition since they can represent emotions more comprehensively. However, in real-world environments it is often impossible to acquire complete multi-modal signals, and missing modalities cause severe performance degradation in emotion recognition. This paper therefore represents the first attempt to use a transformer-based architecture to fill in modality-incomplete data from partially observed data for multi-modal emotion recognition (MER). Concretely, it proposes a novel unified model called the transformer autoencoder (TAE), comprising a modality-specific hybrid transformer encoder, an inter-modality transformer encoder, and a convolutional decoder. The modality-specific hybrid transformer encoder bridges a convolutional encoder and a transformer encoder, allowing it to learn both local and global context information within each modality. The inter-modality transformer encoder builds and aligns global cross-modal correlations and models long-range contextual information across modalities. The convolutional decoder decodes the encoded features to produce more precise recognition. In addition, a regularization term is introduced into the convolutional decoder to force it to fully leverage both complete and incomplete data for emotion recognition on missing data. Accuracies of 96.33%, 95.64%, and 92.69% are attained on the available data of the DEAP and SEED-IV datasets, and accuracies of 93.25%, 92.23%, and 81.76% are obtained on the missing data. In particular, the model achieves a 5.61% advantage with 70% of the data missing, demonstrating that it outperforms several state-of-the-art approaches to incomplete multi-modal learning.
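The three-stage pipeline described in the abstract (per-modality conv+transformer encoders, an inter-modality transformer, and a convolutional decoder) can be sketched as a minimal PyTorch toy model. This is an illustrative assumption of the data flow only, not the paper's actual configuration: all layer sizes, the two-modality setup, the channel counts, and the four-class output are made up for the example.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Hybrid encoder sketch: a conv front-end feeding a transformer encoder
    (mirrors the paper's modality-specific hybrid encoder at a toy scale)."""
    def __init__(self, in_ch, d_model=32, nhead=4):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, d_model, kernel_size=3, padding=1)  # local context
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.trans = nn.TransformerEncoder(layer, num_layers=1)          # global context

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)         # -> (batch, time, d_model)
        return self.trans(h)

class TAE(nn.Module):
    """Toy two-modality transformer autoencoder: per-modality encoders,
    a shared inter-modality transformer, and a conv decoder + classifier."""
    def __init__(self, ch_a, ch_b, d_model=32, n_classes=4):
        super().__init__()
        self.enc_a = ModalityEncoder(ch_a, d_model)
        self.enc_b = ModalityEncoder(ch_b, d_model)
        layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.cross = nn.TransformerEncoder(layer, num_layers=1)   # inter-modality encoder
        self.dec = nn.Conv1d(d_model, d_model, 3, padding=1)      # convolutional decoder
        self.cls = nn.Linear(d_model, n_classes)

    def forward(self, xa, xb):
        # concatenate both modalities' token sequences for cross-modal attention
        h = torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=1)
        h = self.cross(h)
        h = self.dec(h.transpose(1, 2)).transpose(1, 2)
        return self.cls(h.mean(dim=1))           # pooled logits: (batch, n_classes)

model = TAE(ch_a=8, ch_b=4)
xa = torch.randn(2, 8, 16)   # hypothetical EEG-like modality: 8 channels, 16 steps
xb = torch.randn(2, 4, 16)   # hypothetical peripheral-signal modality: 4 channels
logits = model(xa, xb)
print(tuple(logits.shape))   # (2, 4)
```

The paper's reconstruction of missing modalities and the decoder regularization term are omitted here; the sketch only shows how the encoder/cross-encoder/decoder stages compose.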
Keywords | Multi-modal signals; Emotion recognition; Incomplete data; Transformer autoencoder; Convolutional encoder
DOI | 10.1016/j.neunet.2024.106111 |
Indexed By | SCI
Language | English
Funding Project | Fundamental Research Funds for the Central Universities, China [DUT19RC(3)012]; National Natural Science Foundation of China [62306317]; China Postdoctoral Science Foundation, China [GZC20232992]; China Postdoctoral Science Foundation, China [2023M733738]
Funders | Fundamental Research Funds for the Central Universities, China; National Natural Science Foundation of China; China Postdoctoral Science Foundation, China
WOS Research Area | Computer Science; Neurosciences & Neurology
WOS Subject | Computer Science, Artificial Intelligence; Neurosciences
WOS ID | WOS:001163939200001
Publisher | PERGAMON-ELSEVIER SCIENCE LTD
Document Type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/55674
Collection | Laboratory of Brain Atlas and Brain-Inspired Intelligence
Affiliations | 1. Dalian Univ Technol, Dept Comp Sci & Technol, Dalian, Peoples R China; 2. Huzhou Univ, Sch Informat Engn, Huzhou, Peoples R China; 3. Renmin Univ China, Psyche AI Inc, Beijing, Peoples R China; 4. Univ Chinese Acad Sci, Inst Automat, Chinese Acad Sci, Brainnetome Ctr, Beijing, Peoples R China
Recommended Citation (GB/T 7714) | Cheng, Cheng, Liu, Wenzhe, Fan, Zhaoxin, et al. A novel transformer autoencoder for multi-modal emotion recognition with incomplete data[J]. NEURAL NETWORKS, 2024, 172: 12.
APA | Cheng, Cheng, Liu, Wenzhe, Fan, Zhaoxin, Feng, Lin, & Jia, Ziyu. (2024). A novel transformer autoencoder for multi-modal emotion recognition with incomplete data. NEURAL NETWORKS, 172, 12.
MLA | Cheng, Cheng, et al. "A novel transformer autoencoder for multi-modal emotion recognition with incomplete data". NEURAL NETWORKS 172 (2024): 12.
Files in This Item | There are no files associated with this item.
Unless otherwise indicated, all content in this repository is protected by copyright, with all rights reserved.