CASIA OpenIR
Research on Deep Learning-Based Fundus Image Segmentation and Synthesis Methods
王搏
2021-05
Pages: 120
Subtype: Doctoral dissertation
Abstract

Deep learning methods have been applied to the diagnosis of ophthalmic diseases at an unprecedented height and scale, laying a solid foundation for intelligent analysis of ophthalmic diseases. Among these applications, morphological segmentation and modality synthesis of fundus images are the most active research directions in intelligent fundus image analysis; common modalities include color fundus photography, Optical Coherence Tomography (OCT), and Fluorescein Fundus Angiography (FFA). Deep-learning-based fundus image segmentation and synthesis methods have already achieved strong performance, but they still face problems such as difficulty in segmenting thin vessels, loss of the topological order of retinal layers, and missing vessels or lesions in synthesized images. These problems greatly limit the development and clinical application of deep learning methods in intelligent fundus image analysis. To overcome them, this thesis studies fully automatic segmentation of retinal vessels and retinal layers from color fundus photographs and OCT images, as well as cross-modal synthesis of FFA images. The main contributions are as follows:

1. Dual Encoding U-Net for retinal vessel segmentation in color fundus images

Previous vessel segmentation methods discard spatial information, which degrades performance on thin vessels. To address this, this thesis proposes a Dual Encoding U-Net for retinal vessel segmentation in color fundus photographs. The model has two encoding paths: a context (semantic) path and a spatial path. The context path uses multi-scale convolutions to capture larger receptive fields, while the spatial path uses large convolution kernels to retain more spatial detail. A feature fusion module and an attention skip module then merge and enhance the features extracted by the two paths. Finally, a boundary-adaptive structural loss adaptively assigns larger cross-entropy spatial weights to thin vessels and vessel boundaries, so that training focuses on these hard regions and overcomes their low segmentation accuracy. Experiments show that the proposed Dual Encoding U-Net segments vessels in retinal color fundus photographs with an accuracy of up to 95.65%.
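The boundary-adaptive weighting idea can be illustrated as a spatially weighted cross-entropy. The sketch below is a minimal NumPy illustration under the assumption that thin vessels and vessel boundaries are supplied as a precomputed binary mask and up-weighted by a single factor `w_boundary`; the thesis's actual adaptive weighting scheme may differ:

```python
import numpy as np

def boundary_adaptive_ce(pred, target, boundary_mask, w_boundary=4.0, eps=1e-7):
    """Spatially weighted binary cross-entropy: pixels flagged as thin
    vessels or vessel boundaries receive a larger weight, so the model
    focuses on these hard regions during training.

    pred          : (H, W) predicted vessel probabilities in (0, 1)
    target        : (H, W) binary ground-truth vessel map
    boundary_mask : (H, W) binary map of thin vessels / vessel boundaries
    """
    weights = np.where(boundary_mask > 0, w_boundary, 1.0)
    ce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    # Weighted average: hard pixels contribute proportionally more.
    return float(np.sum(weights * ce) / np.sum(weights))
```

Up-weighting the hard pixels raises their contribution to the average loss, so optimization trades a little overall error for a better fit on thin vessels and vessel boundaries.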

2. Boundary Aware U-Net for retinal layer segmentation in OCT images

Previous OCT retinal layer segmentation methods treat the task as pixel-level segmentation and ignore the topological order of the layers, which limits their accuracy. This thesis instead formulates the task as boundary detection and proposes a Boundary Aware U-Net for OCT retinal layer segmentation. The framework has a dual-task structure: the low-level output is used for boundary detection and the high-level output for semantic segmentation. An edge-aware module and a U-shaped feature enhancement module are added to extract pure edge features and to enlarge the receptive field, respectively. A Canny edge fusion module further injects edge information obtained from the segmentation task into the boundary detection task to achieve high-precision boundary prediction. Finally, to guarantee the anatomical topological order of the boundaries, each layer boundary is modeled as a distribution over vertical coordinates, and a topology guarantee loss based on this distribution yields an accurate, topologically ordered boundary set. Experiments show that the proposed Boundary Aware U-Net achieves a mean volume overlap (Dice score) of 91.5% on OCT retinal layer segmentation.
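The boundary-as-distribution modeling and the ordering constraint can be sketched as follows. In this illustrative NumPy version (not the thesis's exact formulation), each layer boundary yields, per image column, a softmax distribution over vertical positions; the expected row gives a sub-pixel boundary estimate, and a hinge penalty on out-of-order expectations plays the role of the topology guarantee term:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def expected_boundary_rows(logits):
    """logits: (L, W, H) per-boundary, per-column scores over the H
    vertical positions. Returns (L, W) expected row of each boundary."""
    probs = softmax(logits, axis=-1)          # distribution over rows
    rows = np.arange(logits.shape[-1])
    return (probs * rows).sum(axis=-1)        # expectation per column

def topology_loss(logits):
    """Penalize columns where boundary l is expected *below* boundary
    l+1, i.e. where the anatomical top-to-bottom order is violated."""
    y = expected_boundary_rows(logits)        # (L, W)
    violation = np.maximum(y[:-1] - y[1:], 0.0)   # hinge on the ordering
    return float(violation.mean())
```

When the expected boundary positions already respect the anatomical order, the hinge term is zero and the loss exerts no pressure; any inversion produces a positive, differentiable penalty proportional to its size.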

3. Local self-supervised CycleGAN for FFA cross-modal synthesis

Previous methods for synthesizing FFA from color fundus photographs ignore the large local structural differences between the two modalities and suffer from geometric distortion. To address this, this thesis proposes a local self-supervised CycleGAN for FFA cross-modal synthesis. Built on the unpaired image-translation CycleGAN framework, the model uses a local self-supervised loss to improve the synthesized FFA detail: local representations of the input and generated images at the same location are encouraged to be close, while representations at different locations are pushed apart, so that the generated image remains consistent with the input in local detail in a self-supervised manner. Experiments show that, compared with plain CycleGAN, the local self-supervised CycleGAN markedly improves the generation of thin vessels and lesion regions in synthesized FFA.
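The "same location close, different locations far" objective is contrastive in spirit, similar to an InfoNCE loss over local features. Below is a minimal NumPy sketch under the assumption that `feat_in` and `feat_gen` are local features sampled at the same N spatial locations of the input and generated images; the thesis's feature extractor and exact loss form are not specified here and may differ:

```python
import numpy as np

def local_self_supervised_loss(feat_in, feat_gen, tau=0.07):
    """InfoNCE-style local contrastive loss.

    feat_in, feat_gen : (N, D) local features from the input and the
    generated image at the same N spatial locations. Location i of the
    generated image is attracted to location i of the input (positive)
    and repelled from the other N-1 locations (negatives)."""
    a = feat_in / np.linalg.norm(feat_in, axis=1, keepdims=True)
    b = feat_gen / np.linalg.norm(feat_gen, axis=1, keepdims=True)
    logits = (b @ a.T) / tau                        # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))       # positives on the diagonal
```

Minimizing this loss is what ties each local patch of the synthesized FFA to the corresponding patch of the input photograph without requiring paired training data.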


Keywords: fundus image segmentation; fundus fluorescein angiography synthesis; dual encoding network; boundary-aware network; local self-supervised loss
Language: Chinese
Sub-direction classification: Medical image processing and analysis
Document Type: Dissertation
Identifier: http://ir.ia.ac.cn/handle/173211/44915
Collection: 脑图谱与类脑智能实验室_神经计算与脑机交互
Recommended Citation
GB/T 7714
王搏. 基于深度学习的眼底影像分割与合成方法研究[D]. 中国科学院大学. 中国科学院大学人工智能学院,2021.
Files in This Item:
File Name/Size | DocType | Access | License
Thesis.pdf (75922 KB) | Dissertation | Open Access | CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.