TMS-Net: A Segmentation Network Coupled With A Run-time Quality Control Method For Robust Cardiac Image Segmentation
Recently, deep networks have shown impressive performance for the segmentation of cardiac Magnetic Resonance Imaging (MRI) images. However, their transition to widespread use in medical clinics has been slow because of robustness issues that lead to clinicians' low trust in their results. Predicting the run-time quality of segmentation masks can be useful for warning clinicians about poor results. Despite its importance, there are few studies on this problem. To address this gap, we propose a quality control method based on the agreement across decoders of a multi-view network, TMS-Net, measured by cosine similarity. The network takes three view inputs resliced from the same 3D image along different axes. Different from previous multi-view networks, TMS-Net has a single encoder and three decoders, leading to better noise robustness, segmentation performance and run-time quality estimation in our experiments on the segmentation of the left atrium on the STACOM 2013 and STACOM 2018 challenge datasets. We also present a way to generate poor segmentation masks by using noisy images corrupted with engineered noise and Rician noise to simulate problems such as undertraining, high anisotropy and poor imaging settings. Our run-time quality estimation method shows good classification of poor and good quality segmentation masks, with an AUC reaching 0.97 on STACOM 2018. We believe that TMS-Net and our run-time quality estimation method have high potential to increase clinicians' trust in automatic image analysis tools.
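The abstract's quality score relies on agreement between the three decoder outputs measured by cosine similarity. The sketch below is only an illustration of that idea, not the paper's implementation: the function name `decoder_agreement`, the mask shapes, and the averaging over pairwise similarities are all assumptions for demonstration purposes.

```python
import numpy as np

def decoder_agreement(masks, eps=1e-8):
    """Hypothetical sketch: mean pairwise cosine similarity between flattened
    soft segmentation masks produced by different decoders of a multi-view
    network. A low agreement score flags a potentially poor segmentation."""
    flat = [np.asarray(m, dtype=np.float64).ravel() for m in masks]
    sims = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            num = np.dot(flat[i], flat[j])
            den = np.linalg.norm(flat[i]) * np.linalg.norm(flat[j]) + eps
            sims.append(num / den)
    return float(np.mean(sims))

# Illustrative usage with three decoder outputs (e.g., from three resliced views)
# resampled onto a common grid; shapes and values here are placeholders.
mask_view1 = np.random.rand(64, 64, 64)
mask_view2 = np.random.rand(64, 64, 64)
mask_view3 = np.random.rand(64, 64, 64)
print(f"Run-time agreement score: {decoder_agreement([mask_view1, mask_view2, mask_view3]):.3f}")
```

In practice, such a score would be thresholded (or fed to a downstream classifier) to separate likely good segmentations from likely poor ones at run time.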