CASIA OpenIR > Graduates > Doctoral Dissertations
Alternative Title: Research on Theory and Methods of Multi-task Learning
Thesis Advisor: 刘成林; 黄开竹
Degree Grantor: University of Chinese Academy of Sciences
Place of Conferral: Institute of Automation, Chinese Academy of Sciences
Degree Discipline: Pattern Recognition and Intelligent Systems
Keywords: Multi-task Learning; Common Subspace; Graph Laplacian Regularization; Metric Learning; Geometry Preserving Property
Abstract: Multi-task learning considers multiple correlated learning tasks simultaneously. By learning them jointly, it exploits the correlations between tasks while preserving their differences, and thus improves the generalization performance of every task even when the training data for each individual task is limited. Although multi-task learning has attracted much attention and many methods have been proposed under various model assumptions, important theoretical and practical issues remain unsolved. This thesis studies multi-task learning and proposes new methods based on the ideas of a common subspace, graph Laplacian regularization, and a geometry preserving property, respectively, to address limitations of existing methods. The main contributions are as follows:

1. We propose the concept of a common informative subspace and a multi-task learning framework built on it, which we then apply to the metric learning problem. In this framework, each task benefits from the low noise of the common informative subspace, while the subspace itself is learned more accurately from the samples of all tasks.

2. We propose a multi-task learning method that couples related tasks through a graph Laplacian regularizer. The degree of relatedness between tasks is estimated jointly with the tasks themselves and in turn constrains the learning of every task, improving overall performance by encouraging the propagation of useful information among tasks.

3. We propose the concept of the geometry preserving property in metric learning, together with a geometry preserving probability that quantifies it. Extending a class of regularization-based multi-task methods from vector to matrix variables, we propose a multi-task metric learning framework based on Bregman matrix divergences, and from this framework derive the "geometry preserving multi-task metric learning" method using the von Neumann divergence. The resulting objective function is jointly convex, so it has a global optimum and can be solved efficiently by alternating methods. Theoretical analysis and experiments demonstrate that the proposed method increases the geometry preserving probability and improves the performance of multi-task metric learning.
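The von Neumann divergence named in contribution 3 is the Bregman matrix divergence generated by the von Neumann entropy, D(X, Y) = tr(X log X − X log Y − X + Y) for positive-definite matrices. The thesis text is not available here, so the following is only a minimal NumPy sketch of that divergence (function names are illustrative, not taken from the thesis):

```python
import numpy as np

def matrix_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix
    via its eigendecomposition A = V diag(w) V^T."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T  # V diag(log w) V^T

def von_neumann_divergence(X, Y):
    """Bregman divergence generated by the von Neumann entropy tr(X log X - X):
    D(X, Y) = tr(X log X - X log Y - X + Y)."""
    return np.trace(X @ matrix_log(X) - X @ matrix_log(Y) - X + Y)

# Two random symmetric positive-definite matrices (e.g. Mahalanobis metrics).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); X = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); Y = B @ B.T + 4 * np.eye(4)

d_self = von_neumann_divergence(X, X)   # a Bregman divergence vanishes at Y = X
d_cross = von_neumann_divergence(X, Y)  # and is nonnegative in general
```

Used as a coupling regularizer between the metric matrices of different tasks, this divergence is jointly convex in its arguments, which is what gives the multi-task objective described above a global optimum.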
Other Identifier: 201018014628069
Document Type: Thesis
Recommended Citation (GB/T 7714):
杨沛沛. 多任务学习理论与方法研究 (Research on Theory and Methods of Multi-task Learning) [D]. Institute of Automation, Chinese Academy of Sciences. University of Chinese Academy of Sciences, 2013.
Files in This Item:
File Name/Size | DocType | Version | Access | License
杨沛沛_多任务学习理论与方法研究.pdf (1498 KB) | Thesis | Restricted (apply for full text) | CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.