Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation
Wang, Kexin1,2; Zhou, Yu1,2,3; Zhang, Jiajun1,2; Wang, Shaonan1,2; Zong, Chengqing1,2
Journal: ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING
ISSN: 2375-4699
Publication Date: 2020-05
Issue: 4; Pages: 19
Document Type: Journal Article
Abstract

Dependency-based graph convolutional networks (DepGCNs) are proven helpful for text representation in many natural language tasks. Almost all previous models are trained with the cross-entropy (CE) loss, which directly maximizes the posterior likelihood. However, CE loss does not properly account for the contribution of the dependency structure, so the performance gain from structure information can be narrow: the model may fail to learn to rely on it. To address this challenge, we propose the novel structurally comparative hinge (SCH) loss function for DepGCNs. SCH loss aims at enlarging the margin gained by structural representations over non-structural ones. From the perspective of information theory, this is equivalent to improving the conditional mutual information between the model decision and the structure information given the text. Our experimental results on both English and Chinese datasets show that substituting SCH loss for CE loss improves performance on various tasks, for both induced structures and structures from an external parser, without additional learnable parameters. Furthermore, the extent to which certain types of examples rely on the dependency structure can be measured directly by the learned margin, which yields better interpretability. In addition, through detailed analysis, we show that this structure margin is positively correlated with task performance and with the structure induction of DepGCNs, and that SCH loss helps the model focus more on the shortest dependency path between entities. We achieve new state-of-the-art results on the TACRED, IMDB, and Zh. Literature datasets, even compared with ensemble and BERT baselines.
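The core idea, a hinge on how much the gold label's score improves when the dependency structure is available, can be sketched in a few lines. Below is a minimal PyTorch sketch assuming the classifier can be run both with and without the dependency graph; the function name, the two-pass setup, and the margin hyperparameter are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def sch_loss(logits_with_struct, logits_without_struct, labels, margin=1.0):
    """Hinge on the margin gained by the structural representation
    over the non-structural one (illustrative sketch only)."""
    gold = labels.unsqueeze(-1)  # (batch, 1) indices of the gold class
    # Log-likelihood of the gold label with and without the dependency graph.
    ll_struct = F.log_softmax(logits_with_struct, dim=-1).gather(-1, gold).squeeze(-1)
    ll_plain = F.log_softmax(logits_without_struct, dim=-1).gather(-1, gold).squeeze(-1)
    # Penalize examples whose structural advantage falls below the target margin.
    return F.relu(margin - (ll_struct - ll_plain)).mean()
```

Under this reading, the quantity ll_struct - ll_plain is a per-example log-likelihood ratio, which is what connects enlarging the margin to raising the conditional mutual information between the model decision and the structure given the text.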

Keywords: Text representation; graph convolutional networks; loss function
DOI: 10.1145/3387633
Indexed By: SCI
Language: English
WOS ID: WOS:000582616900012
Sub-direction Classification: Natural Language Processing
Identifier: http://ir.ia.ac.cn/handle/173211/39115
Collection: National Laboratory of Pattern Recognition / Natural Language Processing
Corresponding Author: Wang, Kexin
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, CAS
2. University of Chinese Academy of Sciences, Beijing 100049, P. R. China
3. Beijing Fanyu Technology Co., Ltd.
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714:
Wang, Kexin, Zhou, Yu, Zhang, Jiajun, et al. Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation[J]. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2020(4): 19.
APA:
Wang, Kexin, Zhou, Yu, Zhang, Jiajun, Wang, Shaonan, & Zong, Chengqing. (2020). Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING(4), 19.
MLA:
Wang, Kexin, et al. "Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation." ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING 4 (2020): 19.
Files in This Item:
File Name: Structurally Comparative Hinge Loss for Dependency-Based Neural Text Representation.pdf (1553 KB)
Format: Adobe PDF
Document Type: Journal Article
Version: Author's Accepted Manuscript
Access: Open Access
License: CC BY-NC-SA