Multi-task learning considers multiple correlated learning tasks simultaneously. By learning them jointly, it exploits the correlations between tasks while preserving their discrepancies, and thus achieves high generalization performance on each task even when the training data for any single task is limited. Although multi-task learning has attracted much attention and many methods have been proposed under various model assumptions, some important theoretical and practical issues remain unsolved. This thesis studies multi-task learning and proposes new methods based on the ideas of a common subspace, graph Laplacian regularization, and the geometry-preserving property, respectively. The main contributions are as follows.

1. We propose the concept of a common informative subspace and construct a multi-task learning framework based on it, which we then apply to the metric learning problem. In this framework, each task benefits from a common subspace with low noise, while the common subspace itself is learned more accurately from the samples of all tasks.

2. We propose a multi-task learning method that uses graph Laplacian regularization to couple related tasks. In this model, the task relationship is estimated from the learning of all tasks and is then used to regularize all tasks, which improves overall performance by encouraging information propagation among tasks.

3. We propose the concept of the geometry-preserving property, together with the geometry-preserving probability to measure it. Extending previous methods from vector regularization to general matrix regularization, we propose a multi-task metric learning framework based on the Bregman matrix divergence. From this framework, we derive "geometry-preserving multi-task metric learning" using the von Neumann divergence. The introduced regularization term is jointly convex, and the globally optimal solution can be obtained efficiently by alternating methods.
Theoretical analysis and experiments demonstrate the effectiveness of the proposed methods.
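To make the graph Laplacian coupling of contribution 2 concrete, the following is a minimal numerical sketch, not the thesis's actual algorithm: the task-affinity matrix `A` is fixed by hand here (in the thesis it is learned jointly with the tasks), and the per-task parameter vectors `W` are arbitrary illustrative values. It verifies the standard identity that the regularizer tr(W L Wᵀ) equals a weighted sum of pairwise parameter disagreements, which is what encourages information propagation among related tasks.

```python
import numpy as np

# Assumed task-affinity matrix for 3 tasks (hand-picked for illustration;
# the thesis's method obtains task relationships from learning all tasks).
A = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

# Column t holds the (hypothetical) parameter vector of task t.
W = np.array([[1.0, 1.2, 3.0],
              [0.5, 0.4, 2.0]])

# Laplacian regularizer: tr(W L W^T) = 1/2 * sum_ij A_ij ||w_i - w_j||^2,
# penalizing parameter disagreement between strongly related tasks.
reg_trace = np.trace(W @ L @ W.T)
reg_pairs = 0.5 * sum(A[i, j] * np.sum((W[:, i] - W[:, j]) ** 2)
                      for i in range(3) for j in range(3))
assert np.isclose(reg_trace, reg_pairs)
```

Adding this term to the sum of per-task losses couples the tasks: minimizing it pulls the parameters of highly affine tasks toward each other while leaving weakly related tasks nearly independent.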
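For contribution 3, the von Neumann divergence between positive-definite matrices is D(X, Y) = tr(X log X − X log Y − X + Y), the Bregman matrix divergence generated by the negative von Neumann entropy. The sketch below, under the assumption that the matrices being compared are Mahalanobis metric matrices of two related tasks (the example matrices are invented for illustration), evaluates it via eigendecomposition:

```python
import numpy as np

def matrix_log(S):
    # Matrix logarithm of a symmetric positive-definite matrix
    # via eigendecomposition: log(S) = V diag(log(lambda)) V^T.
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def von_neumann_divergence(X, Y):
    # Bregman matrix divergence generated by phi(X) = tr(X log X - X):
    # D(X, Y) = tr(X log X - X log Y - X + Y).
    return np.trace(X @ matrix_log(X) - X @ matrix_log(Y) - X + Y)

# Hypothetical Mahalanobis matrices of two related metric-learning tasks.
M1 = np.array([[2.0, 0.3], [0.3, 1.0]])
M2 = np.array([[1.5, 0.1], [0.1, 1.2]])

d12 = von_neumann_divergence(M1, M2)   # positive when M1 != M2
d11 = von_neumann_divergence(M1, M1)   # zero for identical metrics
```

Using this divergence as the coupling regularizer keeps all task metrics positive semidefinite and, as stated above, yields a jointly convex term whose global optimum can be reached by alternating minimization.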