Medical imaging is one of the most compelling domains for the immediate application of artificial intelligence tools. Recent years have seen not only tremendous academic advancements (esteva2017dermatologist; gulshan2016development; rajpurkar2017chexnet) but also a breadth of applied tools (marr2017first; Walter2018DensitasDensity; Lagasse2018FDANews; EnvoyAI2017EnvoyAIPartners).
There has been emerging attention on the joint processing of medical images and radiological free-text reports. wang2018tienet used the public NIH Chest X-ray 14 dataset (wang2017chestx) linked with the non-public associated reports both to improve disease classification performance and to generate reports automatically. gale2018producing attempted to generate radiology reports, while shin2016learning generated disease/location/severity annotations. liu2018learning generated notes, including radiology reports, for the Medical Information Mart for Intensive Care (MIMIC) dataset using non-image modalities such as demographics, previous notes, labs, and medications. These works used annotations from either machines (wang2017chestx) or humans. However, as the influx of imaging data outpaces human annotation capacity, parallel records of imaging and text are not always readily available. We therefore ask whether we can take advantage of massive but unannotated imaging datasets and learn from the underlying distribution of these images.
One natural area that remains unexplored is representation learning across images and reports. The idea of representation learning in a joint embedding space can be realized in multiple ways. Some works (pan2011domain; chen2016transfer) explored statistical and metric relevance across domains, while others (ganin2016domain) realized it as an adversarially determined domain-agnostic latent space. shen2017style and mor2018universal both used the latent space for style transfer, in language sentiment and music style, respectively. reed2016learning learned joint spaces of images and their captions, which reed2016generative later used for caption-driven image generation. conneau2017word and grave2018unsupervised used similar ideas to perform both supervised and unsupervised word-to-word translation. chung2018unsupervised further aligned cross-modal embeddings through semantics in speech and text for spoken word classification and translation tasks.
A recent dataset, MIMIC Chest X-ray (MIMIC-CXR), soon to be publicly released, carries paired records of X-ray images and radiology reports; its imaging modality has been explored in rubin2018large. In this work, we explore both the text and image modalities with joint embedding spaces under a spectrum of supervised and unsupervised methods. In particular, we make the following contributions:
We establish baseline results and evaluation methods for jointly embedding radiological images and reports via retrieval and distance metrics.
We profile the impact of supervision level on the quality of representation learning in joint embedding spaces.
We characterize the influence of using different sections from the report on representation learning.
All experiments in this work used the MIMIC-CXR dataset, which consists of 473,057 chest X-ray images and 206,563 reports from 63,478 patients. Of these images, 240,780 are anteroposterior (AP) views, which we focus on in this work. Further, we eliminate all duplicated radiographs with adjusted brightness or contrast (commonly produced for clinical needs), leaving a total of 95,242/87,353 images/reports. These are subdivided into a train set of 75,147/69,171 and a test set of 19,825/18,182 images/reports, with no overlap of patients between the two. Radiology reports are parsed into sections, and we use either the impression or the findings section.
For evaluation, we aggregate a list of unique International Classification of Diseases (ICD-9) codes from all patient admissions and ask a clinician to pick out a subset of codes related to thoracic diseases. Records with ICD-9 codes in this subset are then extracted, comprising 3,549 images from 380 patients. This population serves as a disease-related evaluation set for retrieval algorithms. Note that this disease information is never provided during training in any setting.
Our overall experimental flow follows Figure 1. Notes are featurized via (1) term frequency-inverse document frequency (TF-IDF) over bi-grams, (2) pre-trained GloVe word embeddings (pennington2014glove) averaged across the selected section of the report, (3) sentence embeddings, or (4) paragraph embeddings. In (3) and (4), we first perform sentence/paragraph splitting, and then fine-tune a deep averaging network (DAN) encoder (bird2004nltk; cer2018universal; iyyer2015deep) with the corpus. Embeddings are finally averaged across sentences/paragraphs. The DAN encoder is pretrained on a variety of data sources and tasks and fine-tuned on the text of the report sections.
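Featurizations (1) and (2) above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the report snippets are hypothetical stand-ins, and random vectors play the role of the pre-trained GloVe embeddings.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for report sections (hypothetical, not real MIMIC-CXR text).
reports = [
    "no acute cardiopulmonary process",
    "stable cardiomegaly with small pleural effusion",
    "no pleural effusion or pneumothorax",
]

# Featurization (1): TF-IDF over bi-grams.
vectorizer = TfidfVectorizer(ngram_range=(2, 2))
tfidf = vectorizer.fit_transform(reports)  # sparse (n_reports, n_bigrams)

# Featurization (2): average word vectors across the section.
# Random 50-d vectors stand in for pre-trained GloVe embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for r in reports for w in r.split()}
doc_vecs = np.stack(
    [np.mean([vocab[w] for w in r.split()], axis=0) for r in reports]
)

print(tfidf.shape, doc_vecs.shape)
```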
Images are resized to 256×256, then featurized by taking the last bottleneck layer of a pretrained DenseNet-121 model (rajpurkar2017chexnet).
PCA is applied to the 1024-dimensional raw image features to obtain 64-dimensional features (96.9% variance explained). Text features are projected into the 64-dimensional image feature space. We use several methods with different objectives.
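The dimensionality-reduction step can be sketched as below. Random vectors stand in for the DenseNet-121 bottleneck features; on the real features the paper reports 96.9% explained variance, which a random stand-in will not reproduce.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for 1024-d DenseNet-121 bottleneck features of 200 images.
img_feats = rng.normal(size=(200, 1024))

# Reduce to the 64-d joint space, as in the text.
pca = PCA(n_components=64)
img_64 = pca.fit_transform(img_feats)

print(img_64.shape)  # (200, 64)
```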
Embedding Alignment (EA)
Here, we find a linear transformation $W$ between two sets of matched points $X$ (text) and $Y$ (images) by minimizing $\|XW - Y\|_F$.
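This alignment step reduces to an ordinary least-squares solve. A minimal sketch on synthetic data, using a row-vector convention (rows of $X$ and $Y$ are matched text/image points; the data here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 64
X = rng.normal(size=(n, d))        # text features (one matched point per row)
W_true = rng.normal(size=(d, d))   # ground-truth map, for this synthetic check
Y = X @ W_true                     # image features, exactly aligned here

# Least-squares solve of min_W ||X W - Y||_F.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.linalg.norm(X @ W - Y))   # residual near zero on this exact case
```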
Adversarial Domain Adaptation (Adv)
Adversarial training pits a discriminator $D$, implemented as a 2-layer (hidden size 256) neural network using scaled exponential linear units (SELUs) (klambauer2017self), against a projection matrix $W$ as the generator. $D$ is trained to classify points in the joint space according to their source modality, and $W$ is trained adversarially to fool $D$: in alternation, $D$ minimizes its classification loss while $W$ is updated to maximize it.
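The alternating scheme can be sketched in plain NumPy. This is an illustrative simplification: a logistic-regression discriminator stands in for the paper's 2-layer SELU network, the data are random stand-ins, and the learning rate and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 64
X = rng.normal(size=(n, d))             # text features in the joint space
Y = rng.normal(loc=0.5, size=(n, d))    # image features, shifted so D has a signal
W = np.eye(d)                           # generator: projection matrix
v = np.zeros(d)                         # discriminator weights (logistic regression
                                        # here; the paper uses a 2-layer SELU net)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(300):
    Z = X @ W                           # projected text
    # Discriminator step: label images 1, projected text 0 (binary cross-entropy).
    p_y, p_z = sigmoid(Y @ v), sigmoid(Z @ v)
    grad_v = (Y.T @ (p_y - 1) + Z.T @ p_z) / (2 * n)
    v -= lr * grad_v
    # Generator step: update W so projected text fools the discriminator.
    p_z = sigmoid(X @ W @ v)
    grad_W = X.T @ (-(1 - p_z)[:, None] * v[None, :]) / n
    W -= lr * grad_W

# After training, the discriminator should struggle to separate the modalities.
acc = 0.5 * ((sigmoid(Y @ v) > 0.5).mean() + (sigmoid(X @ W @ v) < 0.5).mean())
print(acc)
```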
Procrustes Refinement (Adv + Proc)
On top of adversarial training, we also apply an unsupervised Procrustes-induced refinement as in conneau2017word.
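The Procrustes step has a closed-form solution via SVD: the orthogonal map minimizing $\|XW - Y\|_F$ is $W = UV^\top$ where $U S V^\top$ is the SVD of $X^\top Y$. A sketch on synthetic data (random stand-ins, with a known rotation planted so the recovery can be checked):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 64
X = rng.normal(size=(n, d))                    # text features in the joint space
R, _ = np.linalg.qr(rng.normal(size=(d, d)))   # a ground-truth rotation
Y = X @ R                                      # image features = rotated text

# Orthogonal Procrustes: argmin_{W : W^T W = I} ||X W - Y||_F = U V^T,
# where U S V^T is the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

print(np.allclose(W.T @ W, np.eye(d), atol=1e-8))  # True: W is orthogonal
```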
We also assess how much supervision is necessary to ensure strong performance on these modalities by randomly subsampling our data into supervised and unsupervised portions. We then combine the embedding alignment and adversarial training objectives as $\mathcal{L} = \mathcal{L}_{\text{EA}} + \alpha \mathcal{L}_{\text{Adv}}$ and train on both simultaneously as we vary the supervised fraction. Preliminary experiments informed our choice of $\alpha$.
smith2017offline, conneau2017word, and xing2015normalized all showed that imposing orthonormality on linear projections leads to better performance and stability in training. However, brock2018large suggested that orthogonality (i.e., not constraining the norms) can perform better as a regularization. Thus, on top of the objectives, we add $\beta \, \| (W^\top W) \odot (\mathbf{1}\mathbf{1}^\top - I) \|_F^2$, where $\odot$ denotes the element-wise product and $\mathbf{1}$ denotes a column vector of all ones. Scanning $\beta$ through a range identified a value that yields good performance.
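The regularizer, as we read it from brock2018large, penalizes only the off-diagonal entries of $W^\top W$, leaving column norms free. A small sketch (the value of $\beta$ is illustrative):

```python
import numpy as np

def ortho_penalty(W, beta=1e-4):
    """Brock-style regularizer: beta * ||(W^T W) ⊙ (11^T - I)||_F^2.

    Only off-diagonal entries of W^T W are penalized, so column norms are
    unconstrained (orthogonality rather than orthonormality).
    """
    d = W.shape[1]
    off_diag = np.ones((d, d)) - np.eye(d)   # 11^T - I masks the diagonal
    G = (W.T @ W) * off_diag                 # element-wise product
    return beta * np.sum(G ** 2)

rng = np.random.default_rng(0)
W_rand = rng.normal(size=(64, 64))
Q, _ = np.linalg.qr(W_rand)                    # orthonormal columns
W_scaled = Q * rng.uniform(0.5, 2.0, size=64)  # orthogonal columns, free norms

print(ortho_penalty(W_scaled))    # ~0: orthogonal columns incur no penalty
print(ortho_penalty(W_rand) > 0)  # True for a generic matrix
```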
We evaluate via cross-domain retrieval in the test set: querying the joint embedding space for the closest neighboring images given a report (text-to-image), or vice versa (image-to-text). For direct pairings, we compute the cosine similarity between paired embeddings and the mean reciprocal rank $\mathrm{MRR} = \frac{1}{|Q|} \sum_{q \in Q} 1/r_q$, where $r_q$ is the rank of the first true pair for query $q$ (e.g., the first paired image or text corresponding to the query) in the retrieval list. For thoracic-disease-induced pairings, we first define the relevance between two entries as the intersection-over-union of their respective sets of ICD-9 codes. Then we calculate the normalized discounted cumulative gain (jarvelin2002cumulated) $\mathrm{nDCG} = \mathrm{DCG} / \mathrm{IDCG}$, where $\mathrm{IDCG}$ denotes the ideal value of $\mathrm{DCG}$ attained by a perfect retrieval algorithm. All experiments are repeated with at least 5 random initial seeds. Means and 95% confidence intervals are reported in the following section.
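The retrieval metrics can be sketched as follows. This is a toy illustration on random embeddings; the graded relevances passed to `ndcg` would, in the paper's setting, be ICD-9 intersection-over-union values, and the exact metric variants used may differ.

```python
import numpy as np

def cosine_sim(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def mean_reciprocal_rank(txt, img):
    """Query each text row against all images; the true pair shares its index."""
    sims = cosine_sim(txt, img)
    order = np.argsort(-sims, axis=1)  # best match first
    ranks = np.argmax(order == np.arange(len(txt))[:, None], axis=1) + 1
    return np.mean(1.0 / ranks)

def ndcg(relevances):
    """nDCG for one retrieval list of graded relevances (e.g., ICD-9 IoU)."""
    rel = np.asarray(relevances, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))
    dcg = np.sum(rel * discounts)
    idcg = np.sum(np.sort(rel)[::-1] * discounts)
    return dcg / idcg if idcg > 0 else 0.0

rng = np.random.default_rng(0)
img = rng.normal(size=(10, 64))
txt = img + 0.01 * rng.normal(size=(10, 64))  # near-perfectly aligned pairs
print(mean_reciprocal_rank(txt, img))         # close to 1.0
print(ndcg([1.0, 0.5, 0.0]))                  # 1.0: list already ideally ordered
```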
[Table 1: cross-domain retrieval results for bi-gram, word, sentence, and paragraph text features under the EA, Adv, and Adv + Proc methods.]
Retrieval with/without Supervision
Table 1 compares four types of text features under supervised and unsupervised methods. We find that unsupervised methods can achieve comparable results on large-scale disease-related retrieval tasks without the need to label the chest X-ray images. Experiments show that uni-, bi-, and tri-grams yield very similar results, so we only include bi-grams in the table. Additionally, we find that the high-level sentence and paragraph embeddings underperformed the bi-gram text representation. Although generalizable (cer2018universal), sentence and paragraph embeddings learned from the supervised multi-task pre-trained model may not represent domain-specific radiology reports well, owing to the lack of medical-domain tasks in the pre-training process. Unsupervised Procrustes refinement is occasionally, but not universally, helpful. Note that MRR is comparatively small since reports are in general highly similar across radiographs with the same disease types.
The Impact of Supervision Fraction
We define the supervision fraction as the fraction of pairing information provided in the training set. Note the ICD-9 codes are not provided for training even in the fully supervised setting. Figure 2
shows our evaluation metrics for models trained using bi-gram text features and the semi-supervised learning objective under various supervision fractions. Minimal supervision, as low as 0.1% of pairs, can drastically improve the alignment quality, especially in terms of cosine similarity and MRR. More annotations further improve the performance measures, but a linear gain eventually demands nearly exponentially more annotated data. This implies the possibility of combining a small well-annotated dataset with a large but unannotated one for a substantial performance boost.
Using Different Sections of the Report
We investigate the effectiveness of using different sections of the report for the embedding alignment task. All models in Figure 3 were run with a supervision fraction of 1%. The models trained on the findings section outperformed those trained on the impression section in cosine similarity and MRR. This makes sense from a clinical perspective: radiologists usually describe only image patterns in the findings section, so it aligns well with the image. In the impression section, on the other hand, they make integrated radiological-clinical interpretations, so both image-uncorrelated clinical history and the findings are reflected there. Since nDCG is calculated from ICD-9 codes, which carry disease-related information, it naturally aligns with the purpose of writing an impression section. This may explain why models trained on the impression section worked better for nDCG.
MIMIC-CXR will soon be the largest publicly available imaging dataset consisting of both medical images and paired radiology reports, promising myriad applications that can make use of both modalities together. We establish baseline results using supervised and unsupervised joint embedding methods along with local (direct pairs) and global (ICD-9 code groupings) retrieval evaluation metrics. Our results show that incorporating more unannotated data into training can yield a performance increase with minimal labeling effort. Further study of joint embeddings between these modalities may enable significant applications, such as text/image generation or the incorporation of other EMR modalities.
-  Steven Bird and Edward Loper. NLTK: the natural language toolkit. In Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, page 31. Association for Computational Linguistics, 2004.
-  Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
-  Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. arXiv preprint arXiv:1803.11175, 2018.
-  Wei-Yu Chen, Tzu-Ming Harry Hsu, Yao-Hung Hubert Tsai, Yu-Chiang Frank Wang, and Ming-Syan Chen. Transfer neural trees for heterogeneous domain adaptation. In European Conference on Computer Vision, pages 399–414. Springer, 2016.
-  Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. Unsupervised cross-modal alignment of speech and text embedding spaces. arXiv preprint arXiv:1805.07467, 2018.
-  Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. arXiv preprint arXiv:1710.04087, 2017.
-  EnvoyAI. EnvoyAI launches with 35 algorithms contributed by 14 newly-contracted artificial intelligence development partners. EnvoyAI Blog, 11 2017.
-  Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.
-  William Gale, Luke Oakden-Rayner, Gustavo Carneiro, Andrew P Bradley, and Lyle J Palmer. Producing radiologist-quality reports for interpretable artificial intelligence. arXiv preprint arXiv:1806.00340, 2018.
-  Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.
-  Edouard Grave, Armand Joulin, and Quentin Berthet. Unsupervised alignment of embeddings with wasserstein procrustes. arXiv preprint arXiv:1805.11222, 2018.
-  Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22):2402–2410, 2016.
-  Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681–1691, 2015.
-  Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446, 2002.
-  Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems, pages 971–980, 2017.
-  Jeff Lagasse. FDA approves first AI tool for detecting retinopathy, NIH shows machine learning success in imaging | Healthcare Finance News. Healthcare Finance, 4 2018.
-  Peter J Liu. Learning to write notes in electronic health records. arXiv preprint arXiv:1808.02622, 2018.
-  Bernard Marr. First FDA approval for clinical cloud-based deep learning in healthcare. Forbes, 2017.
-  Noam Mor, Lior Wolf, Adam Polyak, and Yaniv Taigman. A universal music translation network. arXiv preprint arXiv:1805.07848, 2018.
-  Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2011.
-  Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
-  Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. CheXnet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
-  Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 49–58, 2016.
-  Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
-  Jonathan Rubin, Deepan Sanghavi, Claire Zhao, Kathy Lee, Ashequl Qadir, and Minnan Xu-Wilson. Large scale automated reading of frontal and lateral chest x-rays using dual convolutional neural networks. arXiv preprint arXiv:1804.07839, 2018.
-  Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830–6841, 2017.
-  Hoo-Chang Shin, Kirk Roberts, Le Lu, Dina Demner-Fushman, Jianhua Yao, and Ronald M Summers. Learning to read chest X-rays: Recurrent neural cascade model for automated image annotation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2497–2506, 2016.
-  Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859, 2017.
-  Michael Walter. Densitas gains FDA clearance for machine learning software that assesses breast density. Radiology Business, 4 2018.
-  Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3462–3471. IEEE, 2017.
-  Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, and Ronald M Summers. TieNet: Text-image embedding network for common thorax disease classification and reporting in chest X-rays. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9049–9058, 2018.
-  Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, 2015.