Integrative interaction of emotional speech in audio-visual modality
Dong, Haibin1; Li, Na1; Fan, Lingzhong2; Wei, Jianguo1; Xu, Junhai1
Journal: FRONTIERS IN NEUROSCIENCE
Date: 2022-11-11
Volume: 16  Pages: 13
Corresponding Author: Xu, Junhai (jhxu@tju.edu.cn)
Abstract: Emotional cues are expressed in many ways in daily life, and the emotional information we receive is often conveyed through multiple modalities. Successful social interaction requires combining multisensory cues to accurately determine the emotions of others. The integration mechanism of multimodal emotional information has been widely investigated: different brain activity measurement methods have been used to localize the brain regions involved in the audio-visual integration of emotional information, mainly the bilateral superior temporal regions. However, the methods adopted in these studies are relatively simple, and the study materials rarely contain speech information, so the integration mechanism of emotional speech in the human brain needs further examination. In this paper, a functional magnetic resonance imaging (fMRI) study with an event-related design was conducted to explore the audio-visual integration mechanism of emotional speech in the human brain, using dynamic facial expressions and emotional speech to express emotions of different valences. Representational similarity analysis (RSA) based on regions of interest (ROIs), whole-brain searchlight analysis, modality conjunction analysis, and supra-additive analysis were used to analyze and verify the roles of the relevant brain regions. In addition, a weighted RSA method was used to evaluate the contribution of each candidate model to the best-fitting model for each ROI. The results showed that only the left insula was detected by all methods, suggesting that it plays an important role in the audio-visual integration of emotional speech. Whole-brain searchlight, modality conjunction, and supra-additive analyses together revealed that the bilateral middle temporal gyrus (MTG), right inferior parietal lobule, and bilateral precuneus may also be involved in the audio-visual integration of emotional speech.
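The core of the RSA approach mentioned above is comparing a neural representational dissimilarity matrix (RDM), built from multi-voxel activity patterns, against candidate model RDMs. The following is a minimal illustrative sketch of that comparison, not the authors' actual pipeline; the toy data, function names, and the choice of correlation distance and Spearman rank correlation are assumptions for illustration.

```python
import numpy as np

def compute_rdm(patterns):
    """Condition x voxel activity patterns -> RDM (1 - Pearson correlation)."""
    return 1.0 - np.corrcoef(patterns)

def spearman_rdm_similarity(rdm_a, rdm_b):
    """Spearman rank correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)  # off-diagonal upper triangle only
    def ranks(x):
        # rank transform (no tie handling; fine for continuous toy data)
        return np.argsort(np.argsort(x)).astype(float)
    ra, rb = ranks(rdm_a[iu]), ranks(rdm_b[iu])
    return float(np.corrcoef(ra, rb)[0, 1])

# Toy data: 6 conditions (e.g., 3 valences x 2 modalities), 50 voxels each
rng = np.random.default_rng(0)
neural_patterns = rng.standard_normal((6, 50))
model_patterns = rng.standard_normal((6, 50))

neural_rdm = compute_rdm(neural_patterns)
model_rdm = compute_rdm(model_patterns)
print(spearman_rdm_similarity(neural_rdm, model_rdm))
```

In an ROI-based analysis this comparison would be run once per region; in a searchlight analysis it is repeated for a small sphere of voxels centered on every location in the brain.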
Keywords: audio-visual integration; emotional speech; fMRI; left insula; weighted RSA
DOI: 10.3389/fnins.2022.797277
Keywords [WOS]: SUPERIOR TEMPORAL SULCUS; HUMAN BRAIN; PERCEPTION; FACE; INFORMATION; EXPRESSIONS; ACTIVATION; PRECUNEUS; INSULA; VOICE
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China; China Postdoctoral Science Foundation; Project of Qinghai Science and Technology Program; [62176181]; [2020M680905]; [2022-ZJ-T05]
Funding Organization: National Natural Science Foundation of China; China Postdoctoral Science Foundation; Project of Qinghai Science and Technology Program
WOS Research Area: Neurosciences & Neurology
WOS Subject: Neurosciences
WOS ID: WOS:000890344700001
Publisher: FRONTIERS MEDIA SA
Citation Statistics: Cited 2 times [WOS]
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/51291
Collection: Laboratory of Brain Atlas and Brain-inspired Intelligence — Brainnetome Research
Affiliations:
1. Tianjin Univ, Coll Intelligence & Comp, Tianjin Key Lab Cognit Comp & Applicat, Tianjin, Peoples R China
2. Chinese Acad Sci, Inst Automat, Brainnetome Ctr, Beijing, Peoples R China
Recommended Citation:
GB/T 7714: Dong, Haibin, Li, Na, Fan, Lingzhong, et al. Integrative interaction of emotional speech in audio-visual modality[J]. FRONTIERS IN NEUROSCIENCE, 2022, 16: 13.
APA: Dong, Haibin, Li, Na, Fan, Lingzhong, Wei, Jianguo, & Xu, Junhai. (2022). Integrative interaction of emotional speech in audio-visual modality. FRONTIERS IN NEUROSCIENCE, 16, 13.
MLA: Dong, Haibin, et al. "Integrative interaction of emotional speech in audio-visual modality". FRONTIERS IN NEUROSCIENCE 16 (2022): 13.
Files in This Item: No files associated with this item.
Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.