Exploring the combination of deep-learning based direct segmentation and deformable image registration for cone-beam CT based auto-segmentation for adaptive radiotherapy

06/07/2022
by Xiao Liang, et al.

CBCT-based online adaptive radiotherapy (ART) calls for accurate auto-segmentation models to reduce the time physicians spend editing contours, since the patient is immobilized on the treatment table waiting for treatment to start. However, auto-segmentation of CBCT images is a difficult task, mainly due to low image quality and the lack of true labels for training a deep learning (DL) model. At the same time, CBCT auto-segmentation in ART is unique among segmentation problems in that manual contours on the planning CT (pCT) are available. To make use of this prior knowledge, we propose to combine deformable image registration (DIR) and direct segmentation (DS) on CBCT for head and neck patients. First, we use deformed pCT contours, derived from multiple DIR methods between pCT and CBCT, as pseudo labels for training. Second, we use deformed pCT contours as a bounding box to constrain the region of interest for DS; deformed pCT contours are again used as pseudo labels for training, but they are generated by DIR algorithms different from those used for the bounding box. Third, we fine-tune the model with bounding box on true labels. We found that DS on CBCT trained with pseudo labels and without utilizing any prior knowledge has very poor segmentation performance compared to DIR-only segmentation. However, adding deformed pCT contours as a bounding box in the DS network dramatically improves segmentation performance, making it comparable to DIR-only segmentation. The DS model with bounding box can be further improved by fine-tuning it with some real labels. Experiments showed that 7 out of 19 structures gain at least 0.2 in Dice similarity coefficient compared to DIR-only segmentation. Utilizing deformed pCT contours as pseudo labels for training and as a bounding box for shape and location feature extraction in a DS model is an effective way to combine DIR and DS.
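The core idea, passing the DIR-deformed pCT contour to the DS network as an extra input channel and supervising it with a pseudo label from a different DIR algorithm, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy network, channel counts, and the soft Dice loss below are illustrative assumptions in PyTorch.

```python
# Minimal sketch (assumed architecture, not the paper's network): the DS model
# takes the CBCT volume and the deformed pCT contour mask as two input channels,
# so shape/location priors from DIR are visible to the segmentation network.
import torch
import torch.nn as nn

class TwoChannelSegNet(nn.Module):
    """Toy 3D segmentation net: channel 0 = CBCT, channel 1 = deformed pCT mask."""
    def __init__(self, n_classes: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, kernel_size=1),
        )

    def forward(self, cbct: torch.Tensor, deformed_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([cbct, deformed_mask], dim=1)  # stack the DIR prior as an extra channel
        return self.net(x)

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss; target may be a DIR-derived pseudo label or a true label."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

if __name__ == "__main__":
    model = TwoChannelSegNet()
    cbct = torch.randn(1, 1, 32, 64, 64)           # CBCT patch
    prior = torch.zeros(1, 1, 32, 64, 64)          # deformed pCT contour used as bounding-box prior
    pseudo_label = torch.zeros(1, 1, 32, 64, 64)   # pseudo label from a different DIR algorithm
    loss = soft_dice_loss(model(cbct, prior), pseudo_label)
    loss.backward()
    print(float(loss))
```

In this sketch, the fine-tuning step described in the abstract would correspond to continuing training of the same two-channel model with true labels substituted for the pseudo labels, typically at a lower learning rate.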


