Learning Cross-domain Generalizable Features by Representation Disentanglement
Deep learning models exhibit limited generalizability across different domains. Specifically, transferring knowledge from entangled domain features (source/target domain) and categorical features to new, unseen categorical features in a target domain is an interesting and difficult problem that is rarely discussed in the current literature. This problem is essential for many real-world applications such as improving diagnostic classification or prediction in medical imaging. To address this problem, we propose Mutual-Information-based Disentangled Neural Networks (MIDNet), which extract generalizable features that enable transferring knowledge to unseen categorical features in target domains. The proposed MIDNet adopts a semi-supervised learning paradigm to alleviate the dependency on labeled data. This is important for practical applications where data annotation requires rare expertise as well as intense time and labor. We demonstrate our method on handwritten digit datasets and a fetal ultrasound dataset for image classification tasks. Experiments show that our method outperforms the state-of-the-art and achieves the expected performance with sparsely labeled data.
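The abstract does not detail MIDNet's objective, but the intuition behind mutual-information-based disentanglement can be illustrated with a toy calculation: a representation is disentangled from the domain when the mutual information between domain labels and the representation's features is near zero. The sketch below (plain Python, with hypothetical toy label sequences) estimates I(X; Y) for discrete samples and contrasts an entangled feature, which copies the domain label, with a disentangled one that is independent of it.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X; Y) in nats from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts divided by n
        mi += (c / n) * math.log((c * n) / (px[x] * py[y]))
    return mi

# Hypothetical domain labels for 200 samples from two domains
domain = [0, 0, 1, 1] * 50

# Entangled feature: fully determined by the domain -> I = log 2
entangled = list(domain)
print(mutual_information(domain, entangled))  # ≈ 0.693

# Disentangled feature: independent of the domain -> I = 0
disentangled = [0, 1, 0, 1] * 50
print(mutual_information(domain, disentangled))  # 0.0
```

In MIDNet's setting, a penalty of this kind (estimated differentiably for continuous representations) would drive the extracted categorical features toward independence from the domain, which is what makes them transferable across domains.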