
Project

Anatomy-aware Representation Learning: Application to Segment Metastatic Disease from Whole-body MRI (FWOSB141)

Driven by the rapidly growing availability of data and by hardware
improvements, the processing of medical images is a fast-growing
field of research. Hardware improvements give rise to higher
resolutions, larger fields of view, shorter acquisition times and a
migration from 2D to 3D imaging. In practice, it is becoming
increasingly challenging for radiologists to analyse this large
amount of medical image data. Assessing tumor load in metastatic
bone disease (MBD) is particularly difficult because the disease
often consists of many small, irregular lesions spread across the
whole skeleton, making manual assessment labor-intensive and
error-prone.
Segmentation algorithms using deep learning have the potential to
automatically extract quantitative, prognostic features from the
medical images. Over the last two years, vision transformers (ViTs)
have rapidly gained ground in many medical imaging tasks; their
capability to capture global context and long-range interactions is
of particular interest to the medical imaging community.
State-of-the-art segmentation models, such as UNETR, are often
trained on patches to reduce computation costs. This comes at the
cost of contextual information that could be leveraged when
segmenting a patch. We propose a novel pipeline for anatomy-aware
representation learning: by developing new contrastive losses, the
model will learn a relevant representation of the different patches
within the body.
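One plausible way to instantiate such a contrastive objective is an InfoNCE-style loss in which patches from nearby anatomical locations act as positive pairs and all other patches in the batch as negatives. The sketch below is illustrative only: the function name, the temperature tau, and the positive-pair radius are assumptions, not the project's actual design.

    # Minimal sketch of an anatomy-aware contrastive (InfoNCE-style) loss.
    # Assumes patch embeddings and normalized anatomical coordinates are
    # available; all names and hyperparameters here are hypothetical.
    import torch
    import torch.nn.functional as F

    def anatomical_info_nce(z, coords, tau=0.1, radius=0.05):
        """z:      (N, D) patch embeddings from an encoder (e.g., a ViT).
        coords: (N, 3) normalized positions of the patches within the body.
        Patches closer than `radius` in body coordinates are treated as
        positives; all other patches in the batch act as negatives."""
        z = F.normalize(z, dim=1)             # work in cosine-similarity space
        sim = z @ z.t() / tau                 # (N, N) similarity logits
        dist = torch.cdist(coords, coords)    # pairwise anatomical distances
        eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
        pos = (dist < radius) & ~eye          # positive mask, excluding self
        sim = sim.masked_fill(eye, float('-inf'))  # drop self-similarity
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        # Average log-likelihood over each patch's positive set (patches
        # without any positive simply contribute zero loss in this sketch).
        loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
        return loss.mean()

    # Example: 8 patches with 64-dim embeddings and normalized (x, y, z)
    # body coordinates.
    z = torch.randn(8, 64)
    coords = torch.rand(8, 3)
    print(anatomical_info_nce(z, coords))

Pulling the positive pairs from anatomical proximity rather than from augmented views of the same patch is what makes the objective anatomy-aware: embeddings of corresponding body regions are encouraged to cluster, which can then supply the contextual signal that patch-wise segmentation otherwise lacks.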
Date: 1 Nov 2022 → Today
Keywords: Representation learning, Medical image segmentation, Metastatic bone disease
Disciplines: Data visualisation and imaging, Other computer engineering, information technology and mathematical engineering not elsewhere classified, Biomedical signal processing, Biomedical image processing, Diagnostic radiology