Multi-source domain adaptation method for textual emotion classification using deep and broad learning
Peng, Sancheng1; Zeng, Rong2; Cao, Lihong1; Yang, Aimin3; Niu, Jianwei4; Zong, Chengqing5; Zhou, Guodong6
Source Publication: KNOWLEDGE-BASED SYSTEMS
ISSN: 0950-7051
Date Issued: 2023-01-25
Volume: 260  Pages: 9
Corresponding Author: Cao, Lihong (201610130@oamail.gdufs.edu.cn)
Abstract: Existing domain adaptation methods for classifying textual emotions tend to focus on a single source domain rather than multi-source domain adaptation, and the limited information and volume of a single source domain hamper emotion classification. To improve the performance of domain adaptation, we present a novel multi-source domain adaptation approach for emotion classification in this article, combining broad learning and deep learning. Specifically, we first design a model that uses BERT and Bi-LSTM to extract domain-invariant features from each source domain to the same target domain, which better captures contextual features. We then adopt broad learning to train multiple classifiers on the domain-invariant features, which handles multi-label classification more effectively. In addition, we design a co-training model to boost these classifiers. Finally, we carry out experiments on four datasets in comparison with baseline methods. The experimental results show that our proposed approach significantly outperforms the baselines for textual emotion classification.
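The abstract outlines a three-stage pipeline: a BERT + Bi-LSTM extractor for domain-invariant features, broad-learning classifiers trained on those features, and a co-training step that boosts the classifiers. For orientation only, the following is a minimal sketch of the first stage (a BERT + Bi-LSTM sentence encoder); it is not the authors' implementation, and the checkpoint name "bert-base-uncased", the hidden size of 256, and the final-state pooling are assumptions made for illustration. The domain-alignment, broad-learning, and co-training stages are omitted.

# Minimal illustrative sketch in PyTorch with Hugging Face transformers
# (model name, hidden size, and pooling are assumptions, not the paper's settings).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertBiLSTMEncoder(nn.Module):
    """Encodes a sentence with BERT, then runs a Bi-LSTM over the token states."""
    def __init__(self, bert_name="bert-base-uncased", lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,  # 768 for bert-base
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from BERT.
        token_states = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        # Bi-LSTM over the token sequence; concatenate the final forward and
        # backward hidden states as a fixed-size sentence feature.
        _, (h_n, _) = self.bilstm(token_states)
        return torch.cat([h_n[-2], h_n[-1]], dim=-1)  # shape: (batch, 2 * lstm_hidden)

if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    encoder = BertBiLSTMEncoder()
    batch = tokenizer(["I am so happy today!"], return_tensors="pt", padding=True)
    features = encoder(batch["input_ids"], batch["attention_mask"])
    print(features.shape)  # torch.Size([1, 512])

In the setting the abstract describes, features extracted for each source-target pair would then be fed to the broad-learning classifiers and refined by co-training.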
Keywords: Multi-domain; Emotion classification; BERT; Broad learning; Bi-LSTM
DOI: 10.1016/j.knosys.2022.110173
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China [61876205]; Ministry of Education of Humanities and Social Science project [20YJAZH118]
Funding Organization: National Natural Science Foundation of China; Ministry of Education of Humanities and Social Science project
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000905783200001
Publisher: ELSEVIER
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/51109
Collection: 复杂系统认知与决策实验室
Affiliation: 1. Guangdong Univ Foreign Studies, Lab Language Engn & Comp, Guangzhou 510006, Peoples R China
2.South China Normal Univ, Guangdong Prov Key Lab Nanophoton Funct Mat & Devi, Guangzhou 510006, Peoples R China
3.Lingnan Normal Univ, Sch Comp Sci & Intelligence Educ, Zhanjiang 524048, Peoples R China
4.Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
5.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
6.Soochow Univ, Sch Comp Sci & Technol, Suzhou, Peoples R China
Recommended Citation
GB/T 7714
Peng, Sancheng, Zeng, Rong, Cao, Lihong, et al. Multi-source domain adaptation method for textual emotion classification using deep and broad learning[J]. KNOWLEDGE-BASED SYSTEMS, 2023, 260: 9.
APA: Peng, Sancheng, Zeng, Rong, Cao, Lihong, Yang, Aimin, Niu, Jianwei, ... & Zhou, Guodong. (2023). Multi-source domain adaptation method for textual emotion classification using deep and broad learning. KNOWLEDGE-BASED SYSTEMS, 260, 9.
MLA: Peng, Sancheng, et al. "Multi-source domain adaptation method for textual emotion classification using deep and broad learning". KNOWLEDGE-BASED SYSTEMS 260 (2023): 9.
Files in This Item:
There are no files associated with this item.