Multi-Model Medical Image Segmentation Using Multi-Stage Generative Adversarial Networks
Image segmentation is a challenging problem in medical applications. Medical imaging has become an integral part of machine learning research, as it enables inspecting the interior of the human body without surgical intervention. Much research has been conducted on brain segmentation. However, prior studies usually employ one-stage models to segment brain tissues, which can lead to significant information loss. In this paper, we propose a multi-stage Generative Adversarial Network (GAN) model to resolve existing issues of one-stage models. To do this, we apply a coarse-to-fine method to improve brain segmentation using a multi-stage GAN. In the first stage, our model generates a coarse outline for both the background and the brain tissues. Then, in the second stage, the model generates a refined outline for the white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). We fuse the coarse and refined outlines to achieve high segmentation accuracy. Despite using very limited data, we obtain an improvement in Dice Coefficient (DC) accuracy of up to 5% compared to one-stage models. We conclude that our model is more efficient and accurate in practice for brain segmentation of both infants and adults. In addition, we observe that our multi-stage model is 2.69–13.93 minutes faster than prior models. Moreover, our multi-stage model achieves higher performance in few-shot learning settings, in which only limited labelled data are available. Therefore, for medical images, our solution is applicable to a wide range of image segmentation applications for which convolutional neural networks and one-stage methods have failed. This helps advance the analysis of brain images, thus providing many advantages to the healthcare system, especially in critical health situations where urgent intervention is needed.
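The coarse-to-fine pipeline described above can be illustrated with a minimal sketch, assuming a PyTorch implementation. The generator architectures, channel sizes, module names (CoarseGenerator, RefineGenerator), and the multiplicative fusion rule below are illustrative assumptions, not the authors' code; the discriminators and adversarial training are omitted.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CoarseGenerator(nn.Module):
    """Stage 1: coarse outline separating background from brain tissue."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = conv_block(in_ch, 32)
        self.head = nn.Conv2d(32, 2, 1)  # 2 classes: background, brain

    def forward(self, x):
        return self.head(self.body(x))

class RefineGenerator(nn.Module):
    """Stage 2: refined outline for WM, GM, and CSF, conditioned on the
    input image concatenated with the stage-1 coarse prediction."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = conv_block(in_ch + 2, 32)
        self.head = nn.Conv2d(32, 4, 1)  # background, WM, GM, CSF

    def forward(self, x, coarse_logits):
        cond = torch.cat([x, coarse_logits.softmax(1)], dim=1)
        return self.head(self.body(cond))

def fuse(coarse_logits, refine_logits):
    """Fusion of the two stages: gate the refined tissue probabilities by
    the coarse brain-vs-background probability (an assumed fusion rule)."""
    brain_prob = coarse_logits.softmax(1)[:, 1:2]   # P(brain) from stage 1
    tissue_prob = refine_logits.softmax(1)          # per-class from stage 2
    fused = tissue_prob.clone()
    fused[:, 1:] = fused[:, 1:] * brain_prob        # suppress tissue outside brain
    return fused.argmax(1)                          # final label map

# Example inference on a dummy single-channel MRI slice.
x = torch.randn(1, 1, 128, 128)
g1, g2 = CoarseGenerator(), RefineGenerator()
with torch.no_grad():
    coarse = g1(x)
    refined = g2(x, coarse)
    labels = fuse(coarse, refined)
print(labels.shape)  # torch.Size([1, 128, 128])
```

The key design point is that stage 2 sees both the raw image and the coarse map, so the refinement step only has to resolve tissue boundaries within the brain region already localized by stage 1.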