Computed tomography (CT), a commonly used tool in cancer analysis, can visualize the complete appearance of tumors and their surrounding tissues at the macroscopic level. CT imaging therefore provides an important means of observing and analyzing tumors noninvasively. In recent years, the rapid development of deep learning has greatly changed existing image analysis methods.
As a data-driven, self-learning model, deep learning requires a large amount of training data to achieve good performance. However, standardized tumor CT images are scarce, which limits the performance of deep learning in tumor CT image analysis. To address this problem, this dissertation proposed new deep learning methods from three perspectives: 1) converting patient-level tumor analysis tasks into voxel-level classification tasks, to enlarge the amount of labelled data; 2) using transfer learning to optimize the training process of deep learning, to avoid the overfitting caused by small training sets; 3) proposing a semi-supervised deep learning framework, to exploit large amounts of unlabelled data for network training. The main innovations and contributions of this study are as follows:
1. This dissertation proposed voxel-level classification algorithms based on MV-CNN and CF-CNN. Since standardized tumor CT images are limited at the patient level, this dissertation converted the patient-level tumor analysis task into a voxel-level classification task: each voxel in the tumor was treated as a training sample, which greatly enlarged the amount of training data. This dissertation then proposed the MV-CNN and CF-CNN networks, designed specifically for voxel-level classification. The MV-CNN extracts multi-scale information from three orthogonal views of the CT image. For the CF-CNN, this dissertation proposed a central pooling layer that performs adaptive, non-uniform pooling according to the spatial location of each image voxel. In addition, the CF-CNN combines multi-scale two-dimensional information with three-dimensional information through two CNN branches. For training sample selection, this dissertation proposed a weighted strategy that chooses samples according to the classification difficulty of each voxel. In the lung tumor segmentation task, the MV-CNN achieved a Dice coefficient of 77.67% on the LIDC dataset, 8% higher than traditional segmentation methods. On the LIDC and GDGH datasets, the CF-CNN achieved Dice coefficients of over 80%, 6% higher than traditional segmentation methods and 3% higher than other deep learning models such as U-Net.
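The Dice coefficient used above to evaluate segmentation overlap is a standard metric; a minimal sketch of its computation on binary masks (not the dissertation's own code) is:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:  # both masks empty: define Dice as 1
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 "slices": each mask covers 3 voxels, overlapping on 2 of them
pred = np.zeros((4, 4), dtype=bool);   pred[1, 1:4] = True
target = np.zeros((4, 4), dtype=bool); target[1, 0:3] = True
print(round(dice_coefficient(pred, target), 4))  # 2*2/(3+3) -> 0.6667
```

A Dice coefficient of 80% thus means that twice the intersection of predicted and ground-truth tumor voxels is 80% of their combined volume.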
2. This dissertation proposed a transfer learning method for tumor CT image analysis. To mitigate the overfitting caused by limited training data, this dissertation initialized part of the newly designed CNN with layers from a network pre-trained on natural images, and then used a two-stage training approach to fine-tune the new CNN on the target dataset. Based on this method, this dissertation predicted the EGFR mutation status of lung cancer from CT images. On an independent test set of 241 lung cancer patients, the method achieved an AUC of 0.81, 17% higher than the methods commonly used for this task. In addition, through a CNN visualization algorithm, the model could localize suspicious regions within the tumor where EGFR mutation was likely to occur; these regions can assist clinicians in choosing biopsy locations.
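The two-stage fine-tuning idea (first train only the newly added layers with the transferred layers frozen, then unfreeze everything and train with a small learning rate) can be illustrated with a deliberately tiny scalar model; this is an assumption-laden sketch of the training schedule, not the dissertation's actual CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for a two-layer linear model y = w2 * (w1 * x)
x = rng.normal(size=200)
y = 3.0 * x                      # target mapping

w1 = 1.5                         # "pre-trained" layer, transferred as-is
w2 = rng.normal()                # newly designed layer, random init

def step(w1, w2, lr1, lr2):
    """One gradient-descent step on mean squared error."""
    h = w1 * x
    err = w2 * h - y
    g2 = 2 * np.mean(err * h)        # d(MSE)/d(w2)
    g1 = 2 * np.mean(err * w2 * x)   # d(MSE)/d(w1)
    return w1 - lr1 * g1, w2 - lr2 * g2

# Stage 1: freeze the transferred layer (lr1 = 0), train only the new head
for _ in range(200):
    w1, w2 = step(w1, w2, lr1=0.0, lr2=0.05)

# Stage 2: unfreeze everything, fine-tune with a small learning rate
for _ in range(200):
    w1, w2 = step(w1, w2, lr1=0.005, lr2=0.005)

print(round(w1 * w2, 3))  # effective mapping approaches the target 3.0
```

Freezing the transferred weights in stage 1 lets the randomly initialized head converge without corrupting the pre-trained features; the small stage-2 learning rate then adapts those features gently to the target domain.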
3. This dissertation proposed a semi-supervised learning framework. To train deep learning models for tumor prognostic analysis with very limited data, this dissertation used an auto-encoder to learn features from a relatively large unlabelled dataset, and then combined a small labelled dataset to improve the predictive performance of the deep learning model. Based on this framework, this dissertation designed two networks, RCAE and DenseCAE, and applied them to predicting the overall survival of lung cancer and the recurrence of ovarian cancer. For lung cancer overall survival, this method achieved a C-index of 0.710, 8% higher than the published method; for ovarian cancer recurrence, it achieved a C-index of 0.713. Further Kaplan-Meier analysis, log-rank tests, and calibration curve analysis also demonstrated the effectiveness of this method.
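The C-index reported above is Harrell's concordance index, the standard agreement measure between predicted risk and observed survival; a minimal sketch of its computation (standard definition, not the dissertation's code) is:

```python
import itertools

def concordance_index(times, events, scores):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risk ordering matches the observed survival ordering.
    times: follow-up times; events: 1 = event observed, 0 = censored;
    scores: predicted risk (higher = worse prognosis)."""
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        first = i if times[i] < times[j] else j    # patient who failed earlier
        second = j if first == i else i
        if events[first] == 0:                     # earlier time censored:
            continue                               # pair is not comparable
        comparable += 1
        if scores[first] > scores[second]:
            concordant += 1
        elif scores[first] == scores[second]:
            concordant += 0.5                      # ties count as half
    return concordant / comparable

times  = [5, 10, 12, 3]
events = [1, 0, 1, 1]
scores = [0.9, 0.2, 0.4, 0.8]
print(round(concordance_index(times, events, scores), 3))  # -> 0.8
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so 0.710 indicates that the model correctly orders about 71% of comparable patient pairs.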
Focusing on the problems caused by the scarcity of standardized tumor CT images, this dissertation proposed three strategies, expanding the labelled dataset, changing the network training process, and exploiting unlabelled data, together with corresponding deep learning models suited to different scenarios. These methods provide effective solutions for clinical tasks such as tumor segmentation, predicting EGFR mutation status in lung cancer, predicting overall survival of lung cancer, and predicting recurrence of ovarian cancer.
|Keyword||Computed tomography (CT); deep learning; tumor segmentation; semi-supervised learning; prognostic analysis|
|Sub-direction classification||Medical image processing and analysis|
|Wang Shuo. Research on Deep-Learning-Based Analysis Algorithms for Small-Sample Tumor CT Images [D]. Institute of Automation, Chinese Academy of Sciences. University of Chinese Academy of Sciences, 2019.|