
Publication

Deep Learning Models for the Detection and Segmentation of Tumoral Lesions Using PET and MRI

Book - Dissertation

The advancement of high-performance computing, together with the availability of large medical imaging datasets, has enabled the use of deep learning models for medical image analysis, in particular convolutional neural networks (CNNs), which currently dominate the field. The main goal of this thesis was to broaden the clinical applicability of CNN models, including their application to the automatic detection and segmentation of breast tumors using magnetic resonance imaging (MRI) and of brain tumors using positron emission tomography (PET). The automatic and time-efficient tumor detection and segmentation achievable with CNN models will facilitate further downstream quantitative analysis and thereby contribute to more accurate monitoring of disease progression and treatment response. In addition, we proposed a CNN-based approach to deal with missing MRI data, such that brain tumors can still be segmented with high accuracy even when some of the MRI data used for training is missing.

In Chapter 3, we developed a CNN model to improve tumor segmentation with a reduced set of clinical MRI data. A "Teacher-Student" framework was proposed to train an enriched student model by distilling the knowledge of a teacher model trained on more informative, multi-sequence MRI data. Once trained, the student model using only single-sequence MRI data closely approximated the performance of a CNN model using multi-sequence MRI data. Our findings showed that cross-modal distillation significantly improved the performance of a student model using only T1-weighted (T1w) MRI for whole tumor and tumor core segmentation, compared to a standard CNN model trained on the same T1w MRI dataset.

In Chapter 4, we applied CNN models to assist radiologists with the segmentation of breast tumors using T1-weighted dynamic contrast-enhanced MRI. Three independent CNN models were trained for automatic lesion segmentation, each combining different MRI data as input. The quality of the automatic segmentations was visually inspected and scored according to a 4-point scoring system. Based on these scores, a visual ensemble selection approach was proposed in which, from the segmentations provided by the three CNN models, the one with the highest visual score was retained. This visual ensemble selection resulted in tumor segmentations requiring no further adjustments, or adjustments to fewer than 25% of the slices, in 77% of the cases. Thus, this approach outperformed each of the three individual CNN models and achieved results comparable to the inter-observer agreement. It confirmed that the visual ensemble selection approach is a clinically useful tool to significantly reduce the manual segmentation workload of radiologists.

In Chapter 5, we used a CNN model to support the clinical workflow for the analysis of 18F-fluoroethyl-L-tyrosine (18F-FET) PET scans. In addition to the automatic detection and segmentation of brain tumors, the appropriate background region was also segmented using a multi-label CNN model. The model allowed an accurate extraction of the maximal lesion uptake and the average background uptake, resulting in an accurate tumor-to-background ratio compared to a fully manual approach. In addition, the tumor segmentations produced by the CNN model provided an estimated tumor volume that closely approximated the tumor volume estimated from manual segmentations.
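As an illustration of the Teacher-Student idea from Chapter 3, the sketch below shows one common way to combine a supervised segmentation loss with a soft-label distillation term in PyTorch. The network definitions, the loss weight alpha and the temperature T are illustrative assumptions, not the settings used in the thesis.

```python
# Minimal sketch of cross-modal knowledge distillation for segmentation.
# Assumes a PyTorch setup; alpha and T are illustrative placeholders.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=2.0, alpha=0.5):
    """Combine supervised cross-entropy with a soft-label KL term.

    student_logits: (N, C, D, H, W) logits from the single-sequence (T1w) student
    teacher_logits: (N, C, D, H, W) logits from the multi-sequence teacher
    target:         (N, D, H, W) integer ground-truth labels
    """
    # Standard supervised term on the manual segmentation.
    ce = F.cross_entropy(student_logits, target)
    # Soft targets from the teacher, softened by temperature T.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl

# Usage sketch: the teacher is trained on multi-sequence MRI and then frozen,
# so only the student receives gradients:
#   with torch.no_grad():
#       teacher_logits = teacher(multi_sequence_mri)
#   loss = distillation_loss(student(t1w_mri), teacher_logits, labels)
```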
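Conceptually, the visual ensemble selection of Chapter 4 amounts to keeping, for each case, the candidate segmentation with the highest visual score; a minimal sketch with hypothetical variable names follows.

```python
# Illustrative sketch of visual ensemble selection: keep the candidate
# segmentation with the highest radiologist-assigned visual score.
# Variable names and the score convention (1 = poor ... 4 = excellent) are assumptions.
def select_best_segmentation(candidates, visual_scores):
    """candidates: list of segmentation masks from the three CNN models.
    visual_scores: list of visual quality scores on the 4-point scale.
    Returns the mask with the highest visual score."""
    best_index = max(range(len(candidates)), key=lambda i: visual_scores[i])
    return candidates[best_index]
```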
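For the 18F-FET PET analysis of Chapter 5, the quantitative measures can in principle be read out directly from the multi-label segmentation. The sketch below assumes hypothetical label values (1 = tumor, 2 = background region) and an illustrative voxel volume; it is not the actual pipeline of the thesis.

```python
# Hedged sketch of deriving quantitative 18F-FET PET measures from a
# multi-label segmentation; label values and voxel size are assumptions.
import numpy as np

def fet_pet_metrics(pet_image, seg_labels, voxel_volume_ml=0.008):
    """pet_image: 3D array of PET uptake values.
    seg_labels: 3D integer array from the multi-label CNN (1 = tumor, 2 = background).
    voxel_volume_ml: volume of a single voxel in millilitres (here 2 mm isotropic)."""
    tumor_mask = seg_labels == 1
    background_mask = seg_labels == 2
    max_tumor_uptake = float(pet_image[tumor_mask].max())
    mean_background_uptake = float(pet_image[background_mask].mean())
    return {
        # Tumor-to-background ratio from maximal lesion and average background uptake.
        "TBR_max": max_tumor_uptake / mean_background_uptake,
        # Tumor volume estimated from the number of segmented voxels.
        "tumor_volume_ml": float(tumor_mask.sum()) * voxel_volume_ml,
    }
```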
To support the research efforts of Chapters 3, 4, and 5, two commonly used CNN architectures, DeepMedic and U-Net, were evaluated in Chapter 2; more specifically, the impact of different model configurations on model performance for MRI-based lesion segmentation was investigated. In addition to the publicly available BraTS 2018 data, in-house data were also used to assess the generalizability of pre-trained CNN models. Based on these results, we opted for a U-Net architecture for the development of our CNN models and used the optimal model configuration from Chapter 2 as a starting point to train the CNN models for the applications of the other chapters.

With the CNN-based tools developed in the context of this thesis, we aimed to bring deep learning closer to clinicians and routine clinical use, such that the considerable amount of manual segmentation work performed by radiologists and nuclear medicine physicians can be reduced and the quantitative analysis of MRI and PET data can be facilitated on a larger scale.
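The comparison of model configurations in Chapter 2 relies on overlap metrics such as the Dice similarity coefficient; a minimal sketch of this standard metric is given below, purely for illustration.

```python
# Minimal sketch of the Dice overlap commonly used to compare segmentation
# model configurations (e.g. DeepMedic vs. U-Net variants); illustrative only.
import numpy as np

def dice_score(prediction, reference):
    """Dice similarity coefficient between two binary 3D masks."""
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    denominator = prediction.sum() + reference.sum()
    # By convention, two empty masks are treated as a perfect match.
    return 2.0 * intersection / denominator if denominator > 0 else 1.0
```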
Year of publication: 2022
Accessibility: Open