CASIA OpenIR > Graduates > Master's Theses
Research on Exemplar-Based Image Segmentation Methods
Alternative Title: Researches on Exemplar Based Image Segmentation
Author: 李双双
Date: 2015-05-22
Degree: Master of Engineering
Abstract (Chinese, translated): Image segmentation is a research focus in computer vision, image processing, and pattern recognition. It is an important tool for building numerous vision application systems and is widely used in industry, the military, medicine, public security, entertainment, and many other fields. Image segmentation has long received broad attention from academia and industry, and a large body of valuable research has emerged. However, owing to the complexity of visual patterns in images and the ambiguity of visual-pattern grouping, existing methods still fall short in generality, scalability, and the semantic relevance of their results; to date there is no universal method that solves all classes of segmentation problems. An effective way to address these issues is to introduce supervised information into image segmentation, lowering the difficulty of describing visual patterns and removing the ambiguity of grouping them. From a pattern-classification perspective, how to supply and how to exploit supervised information in segmentation tasks is therefore a question that deserves in-depth study, with important theoretical value and potential practical value.

To this end, this thesis analyzes the main difficulties of image segmentation and first gives a systematic survey of existing methods. To overcome their shortcomings, the thesis proposes an exemplar-based image segmentation scheme: given a segmented exemplar, similar images are segmented automatically. The exemplar supplies the supervised information that segmentation needs, so under the exemplar-based framework the thesis focuses on how to transfer that information effectively. The main contributions are summarized as follows:

1. An image segmentation method based on exemplar contextual information and sparse representation. The core idea is to cast segmentation as a supervised classification problem and to use contextual information and sparse representation to decide the class of each pixel. Concretely, the method consists of two parts: dictionary construction and segmentation transfer. In dictionary construction, the exemplar image is over-segmented into superpixels, and the superpixels together with their contextual information are used to build a dictionary; this is done separately for the foreground object and the background so as to strengthen the dictionary's discriminative power. In segmentation transfer, the over-segmented test image is represented jointly over the foreground and background contextual dictionaries; the reconstruction errors serve as the likelihood cost, and a graph cut yields the final segmentation. Experimental results demonstrate the method's effectiveness and potential practical value.

2. An image segmentation method based on spline regression. The core idea is to view segmentation as a superpixel label regression problem. Concretely, within a label-regression framework, a thin-plate spline function is learned from the exemplar image and used to predict the class labels of the test image's superpixels. Compared with conventional linear regression and kernel ridge regression, the thin-plate spline fits the data more accurately while remaining smooth, and it contains no data-sensitive model parameters. Comparative experiments show that the method achieves more accurate segmentation results.
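To make the spline-regression idea concrete, here is a minimal sketch, not the thesis's implementation: it assumes 2-D superpixel features, the classic thin-plate kernel φ(r) = r²·log r, labels coded as +1 (foreground) / −1 (background), and a small ridge term `lam` for numerical stability; the names `tps_fit` and `tps_predict` are illustrative.

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate spline radial basis: phi(r) = r^2 * log(r), with phi(0) = 0.
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(X, y, lam=1e-3):
    """Fit a thin-plate spline f(x) = sum_i w_i * phi(||x - x_i||) + a^T [1, x]
    to labeled superpixel features X (n, d) with labels y (n,)."""
    n, d = X.shape
    K = tps_kernel(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), X])          # affine part [1, x]
    # Standard TPS saddle-point system with ridge regularization on K.
    A = np.zeros((n + d + 1, n + d + 1))
    A[:n, :n] = K + lam * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([y, np.zeros(d + 1)])
    sol = np.linalg.solve(A, b)
    return X, sol[:n], sol[n:]                   # centers, warp weights, affine coeffs

def tps_predict(model, Xnew):
    """Evaluate the fitted spline at new superpixel features; the sign of the
    output is the predicted foreground/background label."""
    centers, w, a = model
    K = tps_kernel(np.linalg.norm(Xnew[:, None, :] - centers[None, :, :], axis=-1))
    return K @ w + np.hstack([np.ones((Xnew.shape[0], 1)), Xnew]) @ a
```

On two well-separated feature clusters, the spline interpolates the training labels smoothly, so thresholding the prediction at zero recovers the foreground/background decision for new superpixels.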
Abstract (English): Image segmentation is one of the most fundamental tools for constructing vision-based application systems. It is widely used in many fields, such as vision-aided industrial inspection, visual object recognition for military applications, medical image processing, and image/video content analysis for entertainment and public security. Image segmentation has drawn extensive attention from academia and industry, and a large body of valuable research has addressed the task over the past few decades. However, due to the complexity of visual modeling and the ambiguity of pattern grouping, most existing methods lack generality and scalability across segmentation problems, as well as semantic relevance for object segmentation. To address these issues, we employ supervised information to guide the segmentation. From a classification perspective, how to supply and how to exploit the supervised information are the fundamental issues; studying them in depth to develop advanced segmentation methods is the core motivation of this thesis. After reviewing the state-of-the-art methods, we propose an exemplar-based image segmentation framework. Given a segmented exemplar, our goal is to automatically segment new, similar images, which raises two questions: (1) how to explore and exploit the supervised information hidden in the exemplar; (2) how to transfer that information to the new images. The main contributions of this thesis are as follows: 1. We propose a new image segmentation framework built on exemplar-based contextual sparse representation. Segmentation is cast as a supervised classification problem with two stages: dictionary construction and segmentation transfer.
In the first stage, (1) the exemplar image is over-segmented into superpixels, and (2) the foreground and background superpixels are used separately to construct two contextual dictionaries; this strengthens both the representation ability and the discriminative power of the dictionaries. In the second stage, the two contextual dictionaries are concatenated and used to reconstruct the superpixels of the new image via sparse representation. The reconstruction errors are treated as the likelihood energy, which is finally integrated into a graph-cut model to obtain the segmentation. 2. We propose a spline-regression-based method that casts segmentation as superpixel label regression: a thin-plate spline function learned from the exemplar predicts the class labels of the test image's superpixels, fitting the data more accurately than linear or kernel ridge regression while remaining smooth.
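As a rough illustration of the first contribution, the sketch below (in Python with NumPy) codes a superpixel feature jointly over the concatenated foreground/background dictionaries and compares class-wise reconstruction errors. The Orthogonal Matching Pursuit solver, the dictionary shapes, and the `classify_superpixel` helper are all assumptions for illustration, not the thesis's actual implementation, which further combines these errors with a graph cut.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: sparse code of y over dictionary D
    (columns are atoms) with at most k selected atoms."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x

def classify_superpixel(feat, D_fg, D_bg, k=3):
    """Joint sparse coding over the concatenated [foreground | background]
    dictionary; the class with the smaller reconstruction error wins."""
    D = np.hstack([D_fg, D_bg])
    x = omp(D, feat, k)
    n_fg = D_fg.shape[1]
    err_fg = np.linalg.norm(feat - D_fg @ x[:n_fg])   # error using fg atoms only
    err_bg = np.linalg.norm(feat - D_bg @ x[n_fg:])   # error using bg atoms only
    label = "foreground" if err_fg < err_bg else "background"
    return label, err_fg, err_bg
```

In the thesis's framework these per-superpixel reconstruction errors are not thresholded directly but used as likelihood energies inside a graph-cut model, which adds spatial smoothness to the final labeling.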
Keywords (Chinese, translated): image segmentation; exemplar; sparse representation; dictionary learning; spline regression
Keywords (English): Image Segmentation; Exemplar Based Image Segmentation; Sparse Representation; Dictionary Construction; Spline Regression
Language: Chinese
Document Type: Thesis
Identifier: http://ir.ia.ac.cn/handle/173211/7767
Collection: Graduates / Master's Theses
Recommended Citation (GB/T 7714):
李双双. 基于示例的图像分割方法研究[D]. 中国科学院自动化研究所. 中国科学院大学, 2015.
Files in This Item:
CASIA_20122801462804 (6328 KB) — Access: not yet open; License: CC BY-NC-SA