Optimization of Word Embeddings and Its Application in Word Alignment (词向量的优化及其在词对齐中的应用)
Hu Wenpeng (胡文鹏)
2017-05-26
Degree type: Master of Engineering
Chinese Abstract
Natural language pervades every aspect of people's lives and is the most important carrier of information in human communication. Automatic analysis of language text with natural language processing technology has become an urgent need of the intelligent era. Words are the most basic object of study in natural language processing, and their representation is a core problem. As a low-dimensional, dense word representation, word embeddings can capture more latent features and more complete semantic information of words, so research on word embeddings has important theoretical and practical value. Word alignment aims to identify word correspondences between semantically equivalent sentence pairs and is widely used in machine translation, cross-lingual information retrieval, and other fields; obtaining high-quality word alignments quickly is therefore especially important. This thesis focuses on the optimization of word embeddings and their application in word alignment. The main contributions are summarized as follows:
 
(1) A context-sensitive dynamic word embedding representation method
 
Most current word embedding methods train one vector per word and therefore cannot represent polysemous words. Existing work addresses this by learning a separate embedding for each sense of a word. This thesis instead proposes a bias-based approach to polysemous word representation: a filter is trained for each word to quantify the influence of context on that word, and the direction of the word's semantic bias under a given context is computed dynamically, so that the meaning a polysemous word expresses in different contexts can be composed. Experiments show that this method effectively improves embedding quality and surpasses the previous best results on standard benchmark datasets.
 
(2) A word alignment model based on pattern generation
 
Deep neural networks have achieved good results in word alignment research, but their complex structure greatly slows training. This thesis proposes a pattern-generation-based word alignment model. Built on word embedding representations, the model extracts word alignment rules via matrix transformations and generates alignment patterns; it is constructed with CNN-style convolution and pooling operations, and, backed by the strong representational power of word embeddings, mutually aligned word blocks receive the highest scores. The model also removes the hidden layers of a deep neural network, greatly reducing computational complexity. Experiments show that even this simple preliminary model achieves promising results.
English Abstract
Natural language exists in every aspect of people's lives and is the most important carrier of information in human communication. Automatically analyzing language with natural language processing technologies has become an urgent demand of the intelligent era. As a dense vector representation, word embeddings can express more latent features and more complete semantic information of words. They are one of the important foundations of natural language processing research, and their theoretical study is of great significance. Word alignment identifies the correspondence between words across languages. It is an essential resource for many natural language processing tasks such as machine translation and cross-lingual information retrieval, so generating high-quality word alignments quickly is particularly important. In this thesis, we propose a new representation and training method to enhance the semantic representation ability of word embeddings, and we leverage that ability to study word alignment. The main contents of this thesis are summarized as follows:
 
(1) A model to enhance word embeddings
 
At present, most word embedding learning methods train one embedding per word, which cannot express polysemous words. Existing solutions address this problem by training a different embedding for each sense of a polysemous word. This thesis proposes a new method that differs from most related work: it learns one semantic-center embedding and one context bias per word, instead of training multiple embeddings per word type. Experimental results on word similarity and analogy tasks show that the representations learned by the proposed method outperform competitive baselines.
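The center-plus-bias idea described above can be illustrated with a minimal sketch. All names and parameters here are hypothetical (the thesis does not specify them): each word gets one center vector and one per-word filter matrix, and the filter maps a summary of the context to a dynamic semantic offset.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 8, 5

# Hypothetical parameters: one semantic-center vector per word, plus one
# per-word "filter" matrix that quantifies how context shifts the word.
center = {w: rng.normal(size=dim) for w in range(vocab)}
filters = {w: rng.normal(size=(dim, dim)) * 0.1 for w in range(vocab)}

def contextual_embedding(word, context_words):
    """Compose the center vector with a context-dependent bias:
    the same word yields different vectors in different contexts."""
    ctx = np.mean([center[c] for c in context_words], axis=0)
    bias = filters[word] @ ctx  # dynamic semantic offset for this context
    return center[word] + bias

# One polysemous word (id 0) under two different contexts.
v1 = contextual_embedding(0, [1, 2])
v2 = contextual_embedding(0, [3, 4])
```

Only one set of parameters per word is stored, which is the claimed advantage over training a separate embedding per sense.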
 
(2) A pattern-generation model for word alignment
 
Deep neural networks have achieved good results in the study of word alignment, but their complex structure greatly reduces training speed. This thesis proposes a pattern-generation model for word alignment. The model extracts word alignment rules via matrix transformations and borrows convolution and pooling techniques from convolutional neural networks to construct the model. Backed by the strong representational power of word embeddings, aligned word blocks receive the highest scores in a given parallel sentence pair. At the same time, the proposed model removes the hidden layers of the neural network, so its computational complexity is greatly reduced. Experimental results show that the method achieves satisfactory results even as a simple preliminary model.
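A rough sketch of the convolve-and-pool scoring idea, under stated assumptions (the thesis does not give its exact patterns or scoring function; the diagonal pattern and all names below are illustrative): an embedding similarity matrix is computed for a sentence pair, a fixed alignment pattern is convolved over candidate word blocks, and pooling keeps the best-scoring block, with no hidden layer involved.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

def similarity_matrix(src, tgt):
    """Cosine similarity between every source/target word pair."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return src @ tgt.T

def score_block(sim, i, j, pattern):
    """Convolve one alignment pattern over the word block at (i, j)."""
    k = pattern.shape[0]
    return float((sim[i:i + k, j:j + k] * pattern).sum())

# Hypothetical diagonal pattern: rewards monotone word-for-word alignment.
pattern = np.eye(2)

# Toy "parallel sentence": target embeddings are near-copies of the source.
src = rng.normal(size=(4, dim))
tgt = src + 0.01 * rng.normal(size=(4, dim))
sim = similarity_matrix(src, tgt)

# Max-pool over all candidate blocks; the truly aligned block wins.
scores = {(i, j): score_block(sim, i, j, pattern)
          for i in range(3) for j in range(3)}
best = max(scores, key=scores.get)
```

Because scoring is a single matrix operation per block rather than a pass through hidden layers, the cost stays low, which matches the complexity argument made above.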
Keywords: word embedding; bias; word alignment; pattern generation; template
Document type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/14670
Collection: Graduates_Master's Theses
Affiliation: 1. Institute of Automation, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Hu Wenpeng. Optimization of Word Embeddings and Its Application in Word Alignment [D]. Beijing: Graduate School of the Chinese Academy of Sciences, 2017.
Files in this item:
词向量的优化及其在词对齐中的应用.pdf (1627 KB); Document type: Thesis; Access: Restricted; License: CC BY-NC-SA