Semi-Supervised Multi-Task Learning With Chest X-Ray Images
Discriminative models that require full supervision are ineffective in the medical imaging domain, where large labeled datasets are often unavailable. By contrast, generative modeling (i.e., jointly learning data generation and classification) facilitates semi-supervised training with limited labeled data. Generative modeling can also be advantageous for accomplishing multiple objectives, yielding better generalization. We propose a novel multi-task learning model for jointly learning a classifier and a segmentor from chest X-ray images through semi-supervised learning. In addition, we propose a new loss function that combines absolute KL divergence with Tversky loss (KLTV) to yield faster convergence and better segmentation performance. Based on our experimental results with a novel segmentation model, the Adversarial Pyramid Progressive Attention U-Net (APPAU-Net), we hypothesize that KLTV can be more effective for generalizing multi-task models while remaining competitive in segmentation-only tasks.
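Since the abstract does not include an implementation, the following is a minimal PyTorch sketch of one plausible reading of the KLTV loss; the function names, the balancing weight `lam`, and the Tversky parameters `alpha`/`beta` are assumptions for illustration, not details taken from the paper.

```python
import torch


def tversky_loss(probs, targets, alpha=0.3, beta=0.7, eps=1e-7):
    """Soft Tversky loss; alpha weights false positives, beta weights false negatives."""
    tp = (probs * targets).sum()
    fp = (probs * (1.0 - targets)).sum()
    fn = ((1.0 - probs) * targets).sum()
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index


def kltv_loss(probs, targets, lam=1.0, eps=1e-7):
    """Hypothetical KLTV loss: absolute KL divergence between the target and
    predicted per-pixel class distributions plus a weighted Tversky term.
    `lam` is an assumed balancing weight, not specified in the abstract."""
    # Per-pixel KL(targets || probs), summed over the class dimension (dim=1).
    kl = (targets * (torch.log(targets + eps) - torch.log(probs + eps))).sum(dim=1)
    # One plausible reading of "absolute" KL: take |KL| before averaging.
    kl = kl.abs().mean()
    return kl + lam * tversky_loss(probs, targets)


# Example usage with assumed tensor shapes (N, C, H, W):
logits = torch.randn(2, 2, 64, 64)                      # segmentor outputs
masks = torch.randint(0, 2, (2, 64, 64))                # ground-truth masks
one_hot = torch.nn.functional.one_hot(masks, 2).permute(0, 3, 1, 2).float()
probs = torch.softmax(logits, dim=1)
loss = kltv_loss(probs, one_hot)
```

In the multi-task setting described above, such a segmentation loss would be combined with the classification and adversarial objectives; how those terms are weighted is not specified here.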