基于隐变量模型的自然语言处理解释方法研究 (Research on Explanation Methods for Natural Language Processing Based on Latent Variable Models)
Author: 江忠涛
Date: 2024-05-14
Pages: 164
Degree type: Doctoral
Chinese Abstract

In recent years, deep learning techniques represented by neural networks have achieved breakthrough progress in natural language processing. A neural network is essentially a complex nonlinear function whose parameters and representations carry no clear physical meaning. This black-box nature makes the decision-making process of neural networks hard to understand and their outputs hard to control, which greatly limits their application in real-world scenarios. Against this background, research on interpretability and explanation methods has emerged. Among these methods, explanation methods based on latent variable models have a solid theoretical foundation and can effectively model the unobservable, implicit explanations in the human decision-making process, and have therefore gradually received wide attention from the research community.

A latent variable model is a statistical model defined over observable variables and latent variables. In interpretability research, the observable variables are the task data, while the latent variables control the generation of these data and can therefore serve as their explanations. Within a latent variable model, the explanation generative distribution acts as the carrier of explanations: it describes the probability of an explanation given the input. Centered on this distribution, latent-variable-model-based explanation methods fall into two levels, application and theory. Application-level methods aim to learn an explanation generative model for a specific task and model; the challenge is to design a highly understandable latent-variable form of explanation that matches the task. Theory-level methods aim to study the theoretical properties of the explanation generative distribution directly; the challenges are to design a latent variable model structure that is both reasonable and amenable to analysis, and to analyze the properties of the explanation generative distribution even though it is unknown.

This thesis studies explanation methods at both of these levels for different tasks and scenarios in natural language processing. First, to address the application-level challenge, it focuses on two representative NLP tasks, natural language inference and sentiment analysis, and studies highly understandable latent-variable forms and the corresponding explanation methods:

A phrase-alignment post-hoc explanation method for natural language inference: Word and phrase alignments between a sentence pair are an important basis for decisions in natural language inference. However, existing work either adopts mismatched explanation forms such as word-level feature attribution, or models cross-sentence alignment with the poorly interpretable co-attention mechanism. To address this, within the latent variable model framework for the supervised learning scenario, this thesis defines the explanation latent variable as an alignment in the form of a binary mask matrix, and proposes a post-hoc explanation generation method that obtains alignment explanations of natural language inference models. The method is built on three optimization objectives: fidelity, simplicity, and continuity. Experimental results show that the explanations it generates are more faithful and more understandable. Using the proposed method, this thesis diagnoses three existing typical co-attention-based natural language inference models at the alignment level and finds that their performance mainly depends on their cross-sentence phrase alignment quality. Finally, based on the explanation results, it proposes an improvement strategy built on semantic role labeling, which substantially improves inference performance on adversarial datasets.

A semantic tree self-explanation method for sentiment classification: The key to sentiment analysis is sentiment composition, in which the sentiment of a constituent is determined by the semantics of its sub-constituents and by semantic composition rules. Naturally, sentiment composition is usually modeled with tree structures. However, existing tree structures do not characterize functional words and phrases and therefore cannot fully model and explain sentiment composition. To this end, this thesis proposes the semantic tree and a corresponding sentiment composition grammar to model and explain sentiment composition completely. Since the data contain no semantic tree annotations, within the latent variable model framework for the supervised learning scenario, this thesis defines the explanation latent variable as the semantic tree and proposes a sentiment composition module to learn a semantic tree parser. The module both predicts the polarity label of a given input and generates a semantic tree explanation for that prediction. Experiments show that adding the sentiment composition module clearly improves performance on sentiment classification and on cross-domain transfer. Moreover, the generated semantic tree explanations show that the model can explain complex sentiment composition phenomena well.


Second, to address the theory-level challenge, this thesis turns to large language models, the foundation models of natural language processing. It explains their in-context generation and learning abilities with latent variable models and, based on the explanation results, attempts to improve the performance and stability of in-context learning through calibration. Specifically:

In-context generation ability of language models and its topic generalization: In-context generation/learning is one of the most important emergent abilities of large language models. Given an in-context prompt containing a few examples, a large language model can implicitly learn the pattern of these examples and reuse it in the subsequent completion. This thesis studies the in-context generation ability of language models systematically. First, it proposes a latent variable model that conforms to real data to describe the pre-training distribution of the data and gives a reasonable formalization of the in-context generation phenomenon; the latent variables of this model are the mode and the outline of the text. It then proves theoretically that, if the language model approximates the pre-training distribution well, then given an in-context prompt, the explanation generative distribution converges in probability to the point-mass distribution on the true mode and topic of the in-context examples as the number of examples in the prompt grows. At a macro level, this property manifests as the in-context generation ability observed in experiments. Finally, the thesis examines how different data and model settings affect in-context generation and its topic generalization. The experiments show that data compositionality, the proportion of repeated topics, the number of model parameters, and the model's window size all play important roles in determining the topic generalization of in-context generation. These conclusions offer useful guidance for dataset construction and model architecture design. Notably, since in-context generation is the basis of in-context learning, these conclusions also transfer readily to in-context learning.

Stability analysis and calibration of in-context learning in language models: Existing studies have widely found that the in-context learning performance of large language models is unstable with respect to the prompt setting. Building on the latent-variable-model-based theoretical results of the previous work, this thesis systematically explains that this phenomenon arises because the language model's inference of the mode and task topic has high variance when only a few examples are available. Through qualitative and quantitative analysis, it further finds that the instability mainly manifests as label shift, that is, the in-context label marginal distribution of the language model is highly sensitive to the prompt setting. To mitigate this, the thesis proposes the generative calibration method, which uses Monte Carlo sampling to estimate the in-context label marginal distribution and then uses this estimate to calibrate the in-context prediction distribution. Experiments across different datasets, language models, numbers of shots, and prompt settings show that generative calibration generally outperforms vanilla in-context learning and existing calibration methods by a large margin, and is competitive with complex prompt optimization algorithms that rely on large-scale data. Moreover, generative calibration is robust to the selection and ordering of the examples in the in-context prompt.

English Abstract

Recently, deep learning techniques represented by neural networks have made breakthrough progress in the field of natural language processing (NLP). Essentially, a neural network is a complex nonlinear function whose parameters and representations have no clear physical meaning. This black-box characteristic not only hides the decision-making process of neural networks but also prevents effective control of their outputs, which greatly limits their use in practice. In this context, research on interpretability and explanation methods has emerged. Among these, explanation methods based on the latent variable model (LVM) have received growing attention, because the LVM has a solid theoretical foundation and can effectively model the unobservable latent explanations in the human decision-making process.

An LVM is a statistical model defined over observable variables and latent variables. In the context of interpretability, the observable variables are the task data, and the latent variables are the explanations that control their generation. In an LVM, the explanation generative distribution acts as the carrier of explanations: it describes the probability of explanations given the input. Centered on the explanation generative distribution, LVM-based explanation methods can be divided into two levels: application and theory. Application-level explanation methods aim to learn an explanation generative model for specific tasks and models; the challenge lies in designing a highly understandable latent-variable form that matches the target task. Theory-level explanation methods, in contrast, aim to study the theoretical properties of the explanation generative distribution directly; the challenges lie in designing a reasonable and analytically tractable LVM structure and in analyzing the properties of the explanation generative distribution even though it is unknown.
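As a minimal sketch of this setup (the notation below is assumed for illustration and is not taken verbatim from the thesis), let x denote the observed task data and z the latent explanation; the explanation generative distribution is then the posterior over z given x:

```latex
% Minimal LVM sketch; x = observed task data, z = latent explanation
% (notation assumed for illustration).
\begin{align}
  p_\theta(x) &= \sum_{z} p_\theta(z)\, p_\theta(x \mid z),
    && \text{marginal likelihood of the data} \\
  p_\theta(z \mid x) &= \frac{p_\theta(z)\, p_\theta(x \mid z)}{p_\theta(x)},
    && \text{explanation generative distribution}
\end{align}
```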

This thesis studies explanation methods at both levels for different NLP tasks and scenarios. First, to address the application-level challenge, it focuses on two representative NLP tasks, natural language inference (NLI) and sentiment analysis, and studies highly understandable latent-variable forms and the corresponding explanation methods:

Phrase alignment post-hoc explanation method for natural language inference: Word and phrase alignment between sentence pairs forms the decision-making basis of NLI. However, existing works either use mismatched explanation forms such as word-level feature attribution, or model cross-sentence alignment with the poorly interpretable co-attention mechanism. To address this, based on the LVM for the supervised learning scenario, this study defines the explanation latent variable as an alignment in the form of a binary mask matrix, and proposes a post-hoc explanation method to obtain alignment explanations of NLI models. The method is built on three optimization objectives: fidelity, simplicity, and continuity. Experimental results demonstrate that the proposed explanations are more faithful and more understandable. Using the proposed method, this study diagnoses three typical co-attention-based NLI models at the alignment level and finds that their performance mainly depends on their cross-sentence phrase alignment quality. Finally, this study proposes an improvement strategy based on semantic role labeling according to the explanation results; this strategy significantly improves the performance of NLI models on adversarial datasets.
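A minimal sketch of how such an objective might look, assuming a relaxed (real-valued) alignment mask and illustrative penalty forms and weights; none of the names or formulations below are taken from the thesis:

```python
import torch

def alignment_explanation_loss(model_logits_full, model_logits_masked,
                               mask, lambda_sparse=0.01, lambda_cont=0.01):
    """Illustrative relaxed objective for an alignment mask (n_premise x n_hypothesis).

    - fidelity: the NLI model's prediction restricted to the aligned pairs
      should match its original prediction (KL divergence);
    - simplicity: the mask should be sparse (L1 penalty);
    - continuity: adjacent cells should agree so aligned units form phrases
      (total-variation penalty along both axes).
    Names, penalty forms, and weights are assumptions for illustration only.
    """
    fidelity = torch.nn.functional.kl_div(
        torch.log_softmax(model_logits_masked, dim=-1),
        torch.softmax(model_logits_full, dim=-1),
        reduction="batchmean",
    )
    simplicity = mask.abs().mean()
    continuity = (mask[1:, :] - mask[:-1, :]).abs().mean() + \
                 (mask[:, 1:] - mask[:, :-1]).abs().mean()
    return fidelity + lambda_sparse * simplicity + lambda_cont * continuity
```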

Semantic tree self-explanation method for sentiment classification: Sentiment composition is the key to sentiment analysis: the sentiment of a constituent is determined by the semantics of its sub-constituents and by semantic composition rules. Naturally, sentiment composition is often modeled with tree structures. However, existing tree structures lack a characterization of functional words and phrases and hence cannot model and explain sentiment composition completely. To this end, this study proposes the semantic tree and a corresponding sentiment composition grammar to model and explain sentiment composition completely. Since the data contain no semantic tree annotations, based on the LVM for the supervised learning scenario, this study defines the explanation latent variable as the semantic tree and proposes a sentiment composition module to learn a semantic tree parser without annotations. The sentiment composition module not only predicts the sentiment label for a given input but also generates a semantic tree explanation for that prediction. Experimental results show that adding the sentiment composition module significantly improves classification accuracy in both the standard and cross-domain settings. Furthermore, the generated semantic tree explanations show that the model explains complex sentiment composition phenomena well.
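As a toy illustration of tree-structured sentiment composition (the rules below, such as a negator flipping its sibling's polarity, are illustrative assumptions rather than the thesis' sentiment composition grammar):

```python
# Toy sentiment composition over a binary tree: leaves carry a polarity score
# in [-1, 1]; an internal node combines its children with a composition rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    word: Optional[str] = None          # set on leaves
    score: float = 0.0                  # leaf polarity in [-1, 1]
    negator: bool = False               # functional leaf that flips polarity
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def compose(node: Node) -> float:
    """Return the polarity of a constituent from its sub-constituents."""
    if node.left is None and node.right is None:
        return node.score
    l, r = node.left, node.right
    assert l is not None and r is not None, "toy example assumes binary nodes"
    # Functional rule: a negator child flips the sibling's polarity.
    if l.negator:
        return -compose(r)
    if r.negator:
        return -compose(l)
    # Default rule: average the children's polarities.
    return 0.5 * (compose(l) + compose(r))

# "not good" comes out negative even though "good" alone is positive.
tree = Node(left=Node(word="not", negator=True), right=Node(word="good", score=0.8))
print(compose(tree))  # -0.8
```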

Secondly, to address the theory-level challenge, this study focuses on large language models, the foundation models of NLP, and uses the LVM to explain their in-context generation and learning abilities. Based on the explanation results, it then attempts to improve the performance and stability of in-context learning through calibration. Concretely:

On the in-context generation ability of language models and its topic generalization: In-context generation/learning is one of the most important emergent abilities of large language models. Given an in-context prompt concatenating a few examples, the language model can implicitly infer the pattern of these examples and reuse it in the subsequent completion. This study systematically investigates the in-context generation ability of language models. First, it proposes an LVM that conforms to real data to describe the pre-training distribution, and reasonably formalizes the in-context generation phenomenon; the latent variables of this model are the mode and the outline of the text. Next, it proves theoretically that if the language model approximates the pre-training distribution well, then given an in-context prompt, the explanation generative distribution converges in probability to the point-mass distribution on the true mode and topic of the in-context examples as the number of examples increases. At a macro level, this property manifests as the in-context generation ability observed in experiments. Finally, this study explores how different data and model settings affect in-context generation and the corresponding topic generalization ability. Experimental results suggest that data compositionality, the proportion of repeated topics, the number of model parameters, and the model's window size all play important roles in determining the topic generalization of in-context generation. These conclusions provide insights for both dataset construction and model design. Note that in-context generation is the basis of in-context learning, so the conclusions transfer readily to in-context learning.
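A schematic way to state this concentration property, assuming the in-context examples are drawn i.i.d. given a shared latent θ = (mode, topic) and that θ is identifiable (the notation is assumed for illustration, not quoted from the thesis):

```latex
% theta = (mode, topic); x_1,...,x_k = in-context examples; theta* = their true latent.
\begin{align}
  p(\theta \mid x_{1:k}) &\propto p(\theta) \prod_{i=1}^{k} p(x_i \mid \theta), \\
  p(\theta \mid x_{1:k}) &\xrightarrow{\;p\;} \delta_{\theta^{*}}
    \quad \text{as } k \to \infty .
\end{align}
```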

On the stability analysis and calibration of in-context learning for language models: Existing studies have widely found that the performance of in-context learning is unstable with respect to the prompt setting. Building on the LVM-based theoretical results of the previous work, this study systematically explains that this phenomenon originates from the high variance of the language model's inference of the mode and topic when only a few examples are available. Meanwhile, through qualitative and quantitative analysis, this study also finds that the instability mainly manifests as label shift, that is, the in-context label marginal distribution is very sensitive to the prompt setting. To alleviate this, this study proposes the generative calibration method, which employs Monte Carlo sampling to estimate the in-context label marginal distribution and uses this estimate to calibrate the in-context prediction distribution. Experimental results across different datasets, language models, numbers of shots, and prompt settings show that generative calibration generally outperforms in-context learning and existing calibration methods by a large margin, and is competitive with complex prompt optimization algorithms based on large-scale data. Furthermore, generative calibration is robust to the selection and ordering of examples in the in-context prompt.
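A minimal sketch of this calibration scheme, with hypothetical helper functions standing in for the actual language model calls (`label_distribution` and `sample_input_from_lm` are assumptions, not an existing API):

```python
import numpy as np

def generative_calibration(label_distribution, sample_input_from_lm,
                           prompt, test_input, num_samples=20):
    """Illustrative sketch: calibrate in-context predictions with a Monte Carlo
    estimate of the in-context label marginal.

    label_distribution(prompt, x)  -> np.ndarray over labels, p(y | x, prompt)
    sample_input_from_lm(prompt)   -> one input text sampled from the LM given the prompt
    Both helpers are hypothetical stand-ins for LM calls.
    """
    # 1. Estimate the label marginal p(y | prompt) by sampling inputs from the LM
    #    and averaging their conditional label distributions.
    marginal = np.mean(
        [label_distribution(prompt, sample_input_from_lm(prompt))
         for _ in range(num_samples)],
        axis=0,
    )
    # 2. Calibrate the prediction for the real test input by dividing out the
    #    estimated marginal, then renormalizing.
    scores = label_distribution(prompt, test_input) / np.clip(marginal, 1e-8, None)
    return scores / scores.sum()
```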

Keywords: explanation methods; natural language processing; latent variable model
Language: Chinese
Sub-direction classification (seven major directions): Natural Language Processing
State Key Laboratory planned research direction: Speech and Language Processing
Dataset associated with the thesis to be deposited:
Document type: Dissertation
Item identifier: http://ir.ia.ac.cn/handle/173211/57252
Collection: Graduates / Doctoral Dissertations
Recommended citation (GB/T 7714):
江忠涛. 基于隐变量模型的自然语言处理解释方法研究[D], 2024.
Files in this item
File name (size) | Document type | Version | Access | License
基于隐变量模型的自然语言处理解释方法研究 (3157 KB) | Dissertation | | Restricted access | CC BY-NC-SA