统计学习方法在多示例学习及特征抽取中的应用
Alternative Title: Statistical Learning Methods and Their Applications to Multiple Instance Learning and Feature Extraction
Author: 黄羿衡
2012-05-30
Degree Type: Doctor of Engineering
Abstract (translated from Chinese): Statistical learning theory enjoys strong theoretical properties and good practical performance on low-dimensional data sampled i.i.d. As applications broaden, structured data and data sets with various complex constraints keep emerging, and researchers urgently need to extend traditional statistical learning methods to such data. Multiple instance learning (MIL) is a class of machine learning algorithms proposed specifically for this kind of application-driven structured data; it is mainly applied in image classification, text categorization, computer security, and drug activity prediction. Unlike ordinary supervised learning, in MIL only the labels of bags are known, while the labels of all instances in the positive bags are unknown. Algorithms for MIL usually assume that the instances within a bag are i.i.d., but this assumption does not hold in practice: instances within a bag are strongly coupled and correlated. This thesis exploits that correlation by treating the negative and positive instances within a bag as correlated pairs. On this basis, we propose a new and effective feature mapping that re-describes each bag, prove the necessity of this mapping theoretically, and demonstrate the effectiveness of the new algorithm experimentally.

The standard support vector machine is widely used thanks to its theoretically guaranteed generalization ability, but one of its drawbacks is that its solutions lack sparsity, so it cannot be applied to feature extraction. The l0-norm SVM has ideal sparsity, but as a combinatorial optimization problem it is usually computationally infeasible. In this thesis, the l1 norm and the l-infinity norm are combined to give an upper bound on the l0 norm; the resulting constraint region has many more vertices than the l1-norm ball, and these vertices all correspond to sparse solutions. In general, the more sparse vertices a constraint region has, the sparser the final solution will be. Thus, minimizing the hinge loss over the region jointly constrained by the l1 and l-infinity norms yields sparse solutions. Interestingly, although the l-infinity norm alone cannot produce sparse solutions, introducing it enhances the sparsity of the l1-norm constraint region. The solution varies piecewise linearly with the regularization parameter, and our algorithm computes the entire piecewise linear path as the parameter ranges from 0 to infinity; this path makes cross-validation more efficient, which greatly benefits model selection. A rigorous proof of the piecewise linearity is given in the thesis, and experimental results show that, compared with common feature extraction methods, the new algorithm achieves comparable generalization performance with higher sparsity.
Abstract (English): Statistical learning theory offers theoretically guaranteed performance and is widely used on low-dimensional data sets. As applications broaden, structured data and complex data sets with new features emerge in an endless stream, and it has become pressing to extend statistical learning theory to these settings. Multiple instance learning (MIL) is a newly emerged area of machine learning that has received an increasing amount of research interest in recent years for its wide applications in image classification, text categorization, computer security, etc. Unlike supervised learning, in MIL only the labels of bags are known; the instance labels in positive bags are not available. Many algorithms assume that the instances in a bag are i.i.d. samples, but this may not hold in practical applications. In this thesis, we treat the negative instances in a positive bag as pairwise partners of the positive instances; using this correlation information, an efficient feature mapping is built to re-describe the bag. Experimental results show that this description is effective in real-world applications. The standard support vector machine (SVM) is celebrated for its theoretically guaranteed generalization performance. However, it lacks sparsity and thus cannot be used for feature extraction. The zero-norm SVM is ideal in the sense of sparsity, but its optimization is prohibitive due to the combinatorial nature of the zero norm. In this thesis, 1-norm and infinity-norm constraints are employed simultaneously to relax the zero norm while keeping its sparsity. The resulting constraint region possesses many more sparse vertices than that of the 1-norm ball. Generally, the more sparse vertices the constraint region has, the sparser the solution will be. Therefore, a more parsimonious model can be obtained via the combination of the 1-norm and the infinity-norm.
Interestingly, although the infinity norm alone does not lead to sparse results, it helps enhance the sparsity of 1-norm regularization. The optimal solution has a favorable piecewise-linearity property, based on which the whole solution path can be obtained, and this greatly facilitates model selection. A rigorous proof of the piecewise linearity is given in the thesis. Experimental results demonstrate that our approach offers comparable prediction accuracy with significantly higher sparsity.
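The thesis's exact bag-level mapping is not reproduced on this page. As a purely illustrative sketch of the underlying idea (the function name and the particular pairing rule below are hypothetical), a bag can be re-described by pairing each instance with an intra-bag partner and summarizing the pair contrasts, instead of treating the instances as i.i.d.:

```python
import numpy as np

def redescribe_bag(bag):
    """Illustrative bag-level mapping: pair each instance with its most
    dissimilar intra-bag partner and summarize the pairwise contrasts.
    `bag` is an (n_instances, n_features) array."""
    bag = np.asarray(bag, dtype=float)
    # Pairwise squared Euclidean distances between instances in the bag.
    d = ((bag[:, None, :] - bag[None, :, :]) ** 2).sum(axis=-1)
    partner = d.argmax(axis=1)           # most dissimilar partner per instance
    diffs = np.abs(bag - bag[partner])   # pairwise "contrast" features
    # Fixed-length bag descriptor: mean instance plus the strongest contrast.
    return np.concatenate([bag.mean(axis=0), diffs.max(axis=0)])

bag = [[0.0, 1.0], [2.0, 0.0], [1.0, 1.0]]
phi = redescribe_bag(bag)
print(phi.shape)  # every bag, whatever its size, maps to one vector: (4,)
```

Any standard single-instance classifier (e.g. an SVM) can then be trained on the resulting fixed-length bag descriptors with the bag labels.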
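The thesis's own path-following algorithm is not reproduced here; as a minimal sketch of the core idea (all constants illustrative), the hinge loss can be minimized by projected subgradient descent over the region {w : ||w||_1 <= s, ||w||_inf <= t}. Clipping to the l-infinity box and then projecting onto the l1 ball keeps every iterate feasible, because the l1 projection only shrinks coordinate magnitudes and so cannot violate the box:

```python
import numpy as np

def project_l1(v, s):
    """Euclidean projection onto the l1 ball of radius s (soft thresholding)."""
    if np.abs(v).sum() <= s:
        return v
    u = np.sort(np.abs(v))[::-1]
    cs = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u > (cs - s) / k)[0][-1]
    theta = (cs[rho] - s) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def sparse_svm(X, y, s=1.0, t=0.5, lr=0.05, iters=500):
    """Projected-subgradient hinge-loss minimization under the joint
    ||w||_1 <= s and ||w||_inf <= t constraints (illustrative sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margin = y * (X @ w)
        # Subgradient of the mean hinge loss over the violated examples.
        g = -(X * y[:, None])[margin < 1].sum(axis=0) / len(y)
        w = np.clip(w - lr * g, -t, t)   # l_inf box constraint
        w = project_l1(w, s)             # l1 ball; never grows any |w_i|
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
w = sparse_svm(X, y)
print(np.abs(w).sum() <= 1.0 + 1e-9, np.abs(w).max() <= 0.5 + 1e-9)
```

This only illustrates optimizing under the combined constraint for one fixed (s, t); the thesis additionally traces the entire piecewise linear path of solutions as the parameter varies.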
Keywords: Statistical Learning Theory; Multiple Instance Learning; Pairwise Partners; Sparse Support Vector Machine; Piecewise Linear Path
Language: Chinese
Document Type: Doctoral dissertation
Identifier: http://ir.ia.ac.cn/handle/173211/6458
Collection: Graduates / Doctoral Dissertations
Recommended Citation (GB/T 7714):
黄羿衡. 统计学习方法在多示例学习及特征抽取中的应用[D]. 中国科学院自动化研究所, 中国科学院研究生院, 2012.
Files in This Item:
CASIA_20091801462802 (4422KB), access: restricted, license: CC BY-NC-SA