Unsupervised Data Imputation via Variational Inference of Deep Subspaces

03/08/2019 · Adrian V. Dalca, et al. · MIT · Cornell University

A wide range of systems exhibit high dimensional incomplete data. Accurate estimation of the missing data is often desired, and is crucial for many downstream analyses. Many state-of-the-art recovery methods involve supervised learning using datasets containing full observations. In contrast, we focus on unsupervised estimation of missing image data, where no full observations are available, a common situation in practice. Unsupervised imputation methods for images often employ a simple linear subspace to capture correlations between data dimensions, omitting more complex relationships. In this work, we introduce a general probabilistic model that describes sparse high dimensional imaging data as being generated by a deep non-linear embedding. We derive a learning algorithm using a variational approximation based on convolutional neural networks, and discuss its relationship to linear imputation models, the variational autoencoder, and deep image priors. We introduce sparsity-aware network building blocks that explicitly model observed and missing data. We analyze these building blocks, evaluate our method on public domain imaging datasets, and conclude by showing that our method enables imputation in an important real-world problem involving medical images. The code is freely available as part of the neuron library at http://github.com/adalca/neuron.


1 Introduction

Highly incomplete data are found in a wide variety of domains. Sensor failure, occlusions, or sparsity by design all lead to missing data. For example, LIDAR scan data, providing depth measurements in a variety of problems, yield sparse point clouds [5, 28, 47]. Our work is motivated by a challenging real-world medical imaging problem. In many clinical settings, medical image scanning time is limited by cost and physical or patient care constraints, leading to severely under-sampled images [8, 9, 43]. For example, often only every sixth 2D slice of a 3D MRI scan is acquired, leaving 83% of the anatomical data missing. Estimating the missing data can yield meaningful clinical insight and help with downstream tasks such as registration and segmentation.

State of the art methods for imputation, or estimation of missing values, often use statistics learned across the entire dataset. Supervised methods that fill in missing values rely on datasets of full observations to learn relationships between present and missing data [2, 12, 14, 22, 52, 53]. For example, super-resolution methods, which can be viewed as imputing higher-resolution pixel data, require high resolution data to learn image statistics [12, 14, 22]. In addition, super-resolution methods operate on a regular grid, which is not applicable in many missing data settings. Image inpainting methods exploit statistical structure learned from images to impute a missing region, but often require densely observed regions of the existing images to learn image statistics [2, 52, 53]. Other subspace methods require fully observed data to learn a low-dimensional data representation, and then use these representations to impute missing information in sparsely observed data.

Figure 1: Preview of unsupervised imputation for MNIST and a (cropped) brain MRI slice. We leverage a collection of images with missing pixels but common structure to successfully impute missing data in the presence of extreme sparsity.

However, in many problems, fully observed data is unavailable or difficult to acquire. In this work, we focus on the recovery of missing image data in an unsupervised setting where no full observations are available, but some common structure exists across the dataset. Unsupervised statistical imputation methods, spanning the literature from dictionary learning, factor analysis, and manifold learning to Principal Component Analysis (PCA) and its variants, often use linear subspace models to capture covariation across a dataset [3, 32]. These models assume each observed data point is a noisy, sparse observation of a lower dimensional linear subspace. They have been successfully applied in areas such as network traffic flows [25] and collaborative filtering [45], where the high dimensional data can often be well represented by a linear embedding. However, in many settings, such as imaging, linear models are insufficient for capturing data representations [27, 40, 41].

In this work, we propose a probabilistic model that frames high dimensional imaging data with very sparse pixel observations as being generated from a non-linear subspace. Starting from this model, we derive a variational learning strategy that employs developments in deep neural networks and variational inference. We introduce building blocks of sparsity-aware deep architectures, and present a data inference method, based on this learning model, that enables both conditional mean imputation and multiple imputation of missing data. We discuss connections with, and important differences from, linear imputation methods, which can be viewed as linear instantiations of our model, the variational autoencoder, and deep image priors. We evaluate this approach on unsupervised imputation in several public datasets, comparing several model variants and linear alternatives. We analyze individual building blocks, and show that our method enables imputation of medical images in a real-world problem. Finally, we provide a comparative analysis with deep image priors.

2 Related Work

Collaborative filtering systems observe only a sparse sample of user preferences [30, 45]. Here, methods aim to learn from observed preferences to produce future recommendations. Often, these models build user representations using matrix completion methods, which share a goal with data imputation using linear embeddings. Recent methods have exploited convolutional neural networks to build joint user representations with external information regarding content [29]. Other methods use shallow auto-encoders with sparse data and propose specific regularized loss functions [44, 46]. Similar to linear subspace models, these methods can be characterized as instantiations of our model.

Variational auto-encoders (VAEs) and similar models have been used to learn probabilistic generative models, often in the context of images [23, 24, 42]. Similarly, deep denoising auto-encoders use neural networks to obtain embeddings that are robust to noise [49]. Our method builds on these recent developments to approximate subspaces using neural networks. Importantly, we show that a principled treatment of a sparsity model leads to intuitive but significant differences from the VAE.

Deep Image Priors (DIP) use a generative neural network as a structural prior, and can be used to synthesize missing data [48]. For each image independently, the method finds network parameters that best explain the observed pixels. However, as parameters are image specific, this method is not amenable to extreme sparsity, where image structure is hard to learn from the few observations within a single image. Below, we discuss how our method is similar to DIP, how it differs, and we compare the two in our experiments.

Several methods define sparse neural networks in other contexts that are not directly related to our task, but still share nomenclature. For example, spatially-sparse CNNs assume a fully observed input whose content itself is sparse, such as thin writing on a black background [16, 17]. Faster sparse convolutions have been proposed that explicitly operate on the pixels that represent content, with a focus on efficient computation. Other methods propose sparsity of the parameter space in neural networks to improve various metrics of network efficiency [18, 33, 50].

During the development of this work, several contemporaneous works have tackled related problems. Partial convolutions [34] have been developed for image inpainting, where parts of the desired images are fully observed. A recent method uses adversarial training to guide a generator network to impute missing data, and introduces a discriminator hint mechanism to enable training [54]. Within the medical image analysis domain, a recent method takes advantage of the similarity of local structure between different acquisition directions to enable subject-specific supervised training and imputation [55]. Outside of imaging-specific methods, recent papers have also shown similar developments based on deep generative models for imputing tabular and time series data [4, 39].

3 Methods

In this section we first present a general generative probabilistic model for sparse data observations using a non-linear subspace, and describe the imputation procedure using this model. We then show a learning algorithm that employs a variational approximation using neural networks, and introduce sparsity-aware neural network building blocks that explicitly model observed and missing data. Figure 2 illustrates an overview of the method.

3.1 Model

We let $x \in \mathbb{R}^n$ denote an image written as a vector of size $n$, which we model as a high dimensional manifestation of a low-dimensional representation $z$ of length $d$:

$p(x \mid z) = \mathcal{N}\big(x;\ f_\theta(z),\ \sigma^2 \mathbb{I}_n\big),$   (1)

where $\mathcal{N}(\cdot;\ \mu, \Sigma)$ denotes the multivariate normal distribution with mean $\mu$ and covariance $\Sigma$, $\mathbb{I}_n$ denotes the identity matrix, and $f_\theta(\cdot)$ is a (potentially non-linear) function parametrized by $\theta$ that maps data from the low dimensional subspace to a full observation. The variance parameter $\sigma^2$ captures independent pixel noise. We adopt a Gaussian prior for the low dimensional representation:

$p(z) = \mathcal{N}\big(z;\ 0,\ \mathbb{I}_d\big).$   (2)

We let $\mathcal{O}_i$ indicate the set of observed entries in data point $x_i$, and $y_i = x_i[\mathcal{O}_i]$ the corresponding vector of observed values. The set $\mathcal{O}_i$ varies for each datapoint and is assumed to be small (representing high sparsity). The likelihood of an entire observed dataset $\mathcal{Y} = \{y_i\}_{i=1}^N$, where $y_i$ are observed data points, is therefore

$p(\mathcal{Y}) = \prod_{i=1}^N p(y_i) = \prod_{i=1}^N \int_z p(y_i \mid z)\, p(z)\, dz.$
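As a concrete illustration of the generative process (1)-(2), the following minimal numpy sketch samples a sparse observation from the model; the toy two-layer non-linearity stands in for a trained deep decoder, and all sizes are illustrative assumptions rather than values from our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 784, 10, 0.1   # image size n, code length d, pixel noise sigma

def f_theta(z, W1, W2):
    # toy two-layer non-linearity standing in for a trained deep decoder
    return W2 @ np.tanh(W1 @ z)

W1 = rng.normal(size=(64, d))
W2 = rng.normal(size=(n, 64)) / np.sqrt(64)

z = rng.normal(size=d)                                 # z ~ N(0, I_d), eq. (2)
x = f_theta(z, W1, W2) + sigma * rng.normal(size=n)    # x | z ~ N(f_theta(z), sigma^2 I_n), eq. (1)

obs = rng.choice(n, size=int(0.1 * n), replace=False)  # observed indices O (90% sparsity)
y = x[obs]                                             # sparse observation y = x[O]
```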

Figure 2: Overview Left: graphical representation for our probabilistic model. Right: resulting neural network schematic, highlighting the encoder using sparsity-aware layers, imputation via the decoder, and the sparsity-aware loss function.

3.2 Imputation

We aim to infer the full data point $x$ given a sparse observation $y$ using the posterior $p(x \mid y)$:

$\log p(x \mid y) = \log \int_z p(x \mid z)\, p(z \mid y)\, dz \ \geq\ \mathbb{E}_{z \sim p(z \mid y)}\big[\log p(x \mid z)\big],$   (3)

where we used Jensen's inequality in the last step, and model (1), under which the entries of $x$ are conditionally independent given $z$, in the first equality. To impute data, we use maximum-a-posteriori (MAP) estimation, and approximate it via the lower bound

$\hat{x} = \arg\max_x\ \mathbb{E}_{z \sim p(z \mid y)}\big[\log p(x \mid z)\big] = \mathbb{E}_{z \sim p(z \mid y)}\big[f_\theta(z)\big].$   (4)

Statistical imputation of the form (4) is referred to as conditional mean imputation, which can underestimate the variance of downstream tasks [32]. It is sometimes desirable to sample the posterior itself, thus producing multiple imputations that give an indication of the variance captured by the model. Our method enables this process by sampling from the posterior approximation of $p(z \mid y)$, followed by

$\hat{x} = f_\theta(\hat{z}), \qquad \hat{z} \sim p(z \mid y).$   (5)

This process projects a sparse data point $y$ to a plausible representation $\hat{z}$, and then estimates a possible data point $\hat{x}$.
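The two imputation modes can be sketched as follows, assuming trained encoder and decoder networks; encode, decode, and all argument names are placeholders rather than the neuron API.

```python
import numpy as np

def impute(y, obs_idx, encode, decode, num_samples=0, rng=None):
    """Impute a full image from a sparse observation (y, obs_idx).

    encode(y, obs_idx) -> (mu, log_var) of the approximate posterior q(z|y);
    decode(z) -> full image f_theta(z). Both are placeholders for trained
    networks. num_samples == 0 decodes the posterior mean, approximating
    conditional mean imputation (eq. 4); num_samples > 0 returns multiple
    imputations via posterior samples (eq. 5).
    """
    rng = rng or np.random.default_rng()
    mu, log_var = encode(y, obs_idx)
    if num_samples == 0:
        x_hat = decode(mu)                         # single "best guess" image
    else:
        std = np.exp(0.5 * log_var)
        zs = mu + std * rng.normal(size=(num_samples, mu.size))
        x_hat = np.stack([decode(z) for z in zs])  # one plausible image per sample
    x_hat[..., obs_idx] = y                        # (optional) keep observed pixels as given
    return x_hat
```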

3.3 Learning

Unfortunately, computing the expectation (4) or sampling from $p(z \mid y)$ is intractable. Building on recent methods in variational inference such as the VAE, we employ an approximate variational posterior $q_\psi(z \mid y)$ parametrized by $\psi$, and minimize the KL divergence with the true posterior [20, 21, 23, 24]:

$\min_\psi \mathrm{KL}\big[q_\psi(z \mid y)\ \|\ p(z \mid y)\big] = \min_\psi\ \mathbb{E}_{q_\psi}\big[\log q_\psi(z \mid y) - \log p(z) - \log p(y \mid z)\big] + \log p(y),$   (6)

which is the negative of the variational lower bound of the model evidence [24]. We model the approximate posterior as a multivariate normal:

$q_\psi(z \mid y) = \mathcal{N}\big(z;\ \mu_{z \mid y},\ \Sigma_{z \mid y}\big),$   (7)

where $\Sigma_{z \mid y}$ is diagonal. This approximation enables efficient sampling, facilitating imputation using

$\hat{x} \approx \frac{1}{K} \sum_{k=1}^{K} f_\theta(z_k),$   (8)

where $z_k$ are samples from $q_\psi(z \mid y)$.

Figure 3: Schematic examples of sparsity-aware convolution and linear layers, using the sparsity masks to correct conventional layer equivalents.

3.4 Learning via sparsity-aware neural networks

The functions $\mu_{z \mid y}(\cdot)$ and $\Sigma_{z \mid y}(\cdot)$ take only the observed entries of $x$ and compute the subspace statistics. We estimate these using an encoder neural network $\mathrm{enc}_\psi(y)$, parameterized by $\psi$. We similarly approximate the generating function $f_\theta(\cdot)$ by a decoder neural network $\mathrm{dec}_\theta(z)$, parameterized by $\theta$. We jointly learn the parameters $\{\theta, \psi\}$ by optimizing the variational lower bound (6) using stochastic gradient methods. Specifically, for each data point $x$ with observed data $y$, the resulting loss is:

$\mathcal{L}(\theta, \psi; y) = \frac{1}{2\sigma^2 K} \sum_{k=1}^{K} \big\| y - \mathrm{dec}_\theta(z_k)[\mathcal{O}] \big\|^2 + \mathrm{KL}\big[q_\psi(z \mid y)\ \|\ p(z)\big],$   (9)

where $z_k$ are samples from $q_\psi(z \mid y)$, and $K$ is the number of samples we draw for each input image. The first term encourages the observed entries of $x$ to be well recovered by the decoder. The second term encourages the subspace posterior $q_\psi(z \mid y)$ to be close to the prior $p(z)$. Since this posterior depends on only the observed entries, below we introduce several sparsity-aware building blocks for neural networks that handle sparse inputs (Figure 3).
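A minimal TensorFlow sketch of the loss (9) with $K = 1$ posterior sample may clarify the two terms; the function and argument names are ours and hypothetical, not the neuron library API.

```python
import tensorflow as tf

def sparse_vae_loss(y_true, mask, x_rec, z_mu, z_log_var, sigma=0.1):
    """Loss (9), evaluated only on observed pixels (K = 1 sample).

    y_true: image vector with arbitrary values at missing pixels,
    mask:   1.0 at observed pixels, 0.0 elsewhere,
    x_rec:  decoder output for one posterior sample.
    """
    # reconstruction term on observed entries only
    sq_err = tf.reduce_sum(mask * tf.square(y_true - x_rec), axis=-1)
    rec = sq_err / (2.0 * sigma ** 2)
    # KL[q(z|y) || N(0, I)] in closed form for a diagonal Gaussian
    kl = 0.5 * tf.reduce_sum(
        tf.square(z_mu) + tf.exp(z_log_var) - 1.0 - z_log_var, axis=-1)
    return tf.reduce_mean(rec + kl)
```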

Fully Connected

Fully Connected (or dense) layers enable learning of broad correlations across pixels of a datapoint, and are extensively used for both imaging and non-imaging data. In general, a fully connected layer can be written as $r = Fx + b$, where $r$ is the output (response), $x$ is the input, $F$ is the weight matrix, and $b$ is a bias term. When the input is partially observed as $y = x[\mathcal{O}]$, a naive computation of the output might involve the columns of $F$ that correspond to the observed entries, $F_{\mathcal{O}}$, via $r = F_{\mathcal{O}} y + b$. This is equivalent to filling in the missing values with zeros. This approach, however, does not account for or exploit possible dependencies between entries of $x$. We propose an alternative strategy based on linear models for imputation [31], where we adopt a linear model $x = Wz + \mu + \epsilon$. Given observed $y$, the response can be computed as:

$r = W_{\mathcal{O}}^{\dagger}\,\big(y - \mu_{\mathcal{O}}\big),$   (10)

where $W_{\mathcal{O}}$ are the rows of $W$ that correspond to the observed indices, and $W_{\mathcal{O}}^{\dagger}$ denotes the pseudo-inverse. We propose to use this formulation as a sparsity-aware fully connected layer, where $W$ and $\mu$ are now the layer parameters. We discuss the linear formulation of imputation, which motivated this layer, in Section 3.5.1. In our experiments, we demonstrate that this layer more accurately computes linear projections of sparse data and leads to improved training compared to a traditional fully connected layer.
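A numpy sketch of this layer under the notation above follows; in a network, $W$ and $\mu$ would be trained jointly with the other parameters and the pseudo-inverse implemented with differentiable operations, so this is a conceptual illustration rather than a production layer.

```python
import numpy as np

class SparseFC:
    """Sparsity-aware fully connected layer (eq. 10), numpy sketch.

    Parameters W (n x d) and mu (n,) play the roles of the linear model
    x = W z + mu + noise; the response is the least-squares code
    r = pinv(W_O) (y - mu_O), computed from observed rows only.
    """
    def __init__(self, n, d, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(size=(n, d)) / np.sqrt(n)
        self.mu = np.zeros(n)

    def __call__(self, y, obs_idx):
        W_O = self.W[obs_idx]   # rows at observed indices
        # least-squares solve, equivalent to np.linalg.pinv(W_O) @ (...)
        r, *_ = np.linalg.lstsq(W_O, y - self.mu[obs_idx], rcond=None)
        return r
```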

Convolution

For imaging data, hierarchical convolutional operations extract meaningful representations. Existing methods often fill in missing values with a constant, such as zero or the mean value at that pixel across the dataset, but these do not usually accurately estimate the existing image signal [47].

Given a sparse input image, we experiment with two convolutional approaches. In the first, we apply a weighted convolution, where the response of the convolution kernel $w$ is modified at each image location by the binary observation mask $m$ [47]. The new filter response $r(u)$ at location $u$ is

$r(u) = \dfrac{\sum_{v \in \mathcal{N}(u)} m(v)\, w(v - u)\, x(v)}{\sum_{v \in \mathcal{N}(u)} m(v) + \epsilon},$   (11)

where $\mathcal{N}(u)$ indicates all the pixels neighboring $u$ within the filter kernel size, and $\epsilon$ avoids division by zero. This weighted filter uses only the existing information in computing a response. We then compute the mask to be used at a subsequent layer:

$m'(u) = \max_{v \in \mathcal{N}(u)} m(v),$   (12)

so a location is treated as observed downstream if any pixel in its receptive field was observed. Since convolutional layers can be applied hierarchically, even very sparse data will often lead to a dense deep feature response. This sort of convolution was recently used to impute LIDAR depth data in a supervised context [47].
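The weighted convolution (11) and mask update (12) can be sketched in TensorFlow as follows; this is our reading of the layer in the spirit of [47], with illustrative names.

```python
import tensorflow as tf

def sparse_conv2d(x, mask, kernel, eps=1e-8):
    """Weighted convolution of eqs. (11)-(12), in the spirit of [47].

    x:      (batch, H, W, C) image with zeros at missing pixels,
    mask:   (batch, H, W, 1) binary float observation mask,
    kernel: (k, k, C, C_out) convolution weights.
    Returns the renormalized response and the propagated mask.
    """
    k = kernel.shape[0]
    num = tf.nn.conv2d(x * mask, kernel, strides=1, padding='SAME')
    # count of observed pixels under the kernel window
    ones = tf.ones((k, k, 1, 1))
    den = tf.nn.conv2d(mask, ones, strides=1, padding='SAME')
    out = num / (den + eps)
    # a location is "observed" downstream if any input in its window was
    new_mask = tf.cast(den > 0, mask.dtype)
    return out, new_mask
```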

Since we focus on image data, linear interpolation provides a rough approximation of the missing pixels. Therefore, in a second strategy, we first approximate the missing data with linear interpolation, and provide the first convolution layer with the interpolated image and the observation mask as two input channels.
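A sketch of constructing this two-channel input with scipy, assuming a 2D image and binary mask; the nearest-neighbor fallback outside the convex hull of observed pixels is our assumption, not a detail specified above.

```python
import numpy as np
from scipy.interpolate import griddata

def interp_input(image, mask):
    """Build a two-channel network input: interpolated image + mask.

    image: (H, W) with arbitrary values at missing pixels,
    mask:  (H, W) binary observation mask.
    Missing pixels are filled by linear interpolation of observed ones
    (nearest-neighbor fallback outside the convex hull).
    """
    obs = np.argwhere(mask > 0)                       # observed coordinates
    grid = np.argwhere(np.ones_like(mask, dtype=bool))  # all coordinates
    vals = image[mask > 0]
    filled = griddata(obs, vals, grid, method='linear')
    nearest = griddata(obs, vals, grid, method='nearest')
    filled = np.where(np.isnan(filled), nearest, filled).reshape(image.shape)
    return np.stack([filled, mask.astype(image.dtype)], axis=-1)  # (H, W, 2)
```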

3.4.1 Networks

Different architecture families are appropriate for different problems and data types. We focus on architectural design decisions that pertain to utilizing missing data and account for the image content type, rather than describing specific details of particular architectures; a schematic sketch follows the list below.


  • Hierarchical convolutional layers are used to extract image-space features. We experiment with (sparsity-aware) fully convolutional encoders and decoders.

  • In many domains, capturing covariation across a large image can provide useful structural information. We therefore explore encoders that use a (sparsity aware) fully connected layer following several convolutions, and a decoder that uses a fully connected layer followed by convolutions.

  • For large data, such as volumetric medical scans, both convolutional and fully connected layers capture important relationships, but the volumes are too large to employ the designs above. In these cases, we propose an architecture that replaces the fully connected layer with locally connected sparse layers, which operate on separate subregions of the volume.
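As a schematic example of the second design above, one plausible conv-FC configuration in Keras is sketched below; layer sizes are illustrative, and standard layers stand in for the sparsity-aware variants described earlier.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_conv_fc_vae(hw=28, enc_dim=10):
    """A plausible conv-FC encoder/decoder sketch; sizes are illustrative,
    and standard layers stand in for the sparsity-aware variants."""
    inp = layers.Input((hw, hw, 2))           # interpolated image + mask
    h = layers.Conv2D(16, 3, activation='relu', padding='same')(inp)
    h = layers.MaxPool2D()(h)
    h = layers.Conv2D(32, 3, activation='relu', padding='same')(h)
    h = layers.MaxPool2D()(h)
    h = layers.Flatten()(h)
    z_mu = layers.Dense(enc_dim)(h)           # posterior mean
    z_log_var = layers.Dense(enc_dim)(h)      # posterior log-variance

    def sample(args):
        mu, log_var = args
        eps = tf.random.normal(tf.shape(mu))  # reparameterized sample
        return mu + tf.exp(0.5 * log_var) * eps

    z = layers.Lambda(sample)([z_mu, z_log_var])
    g = layers.Dense((hw // 4) ** 2 * 32, activation='relu')(z)
    g = layers.Reshape((hw // 4, hw // 4, 32))(g)
    g = layers.Conv2DTranspose(16, 3, strides=2, activation='relu', padding='same')(g)
    g = layers.Conv2DTranspose(1, 3, strides=2, padding='same')(g)  # reconstruction
    return tf.keras.Model(inp, [g, z_mu, z_log_var])
```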

3.5 Connection to Other Models

3.5.1 Linear Subspace Methods

We show that linear subspace methods are a specific case of our model. Let $f_\theta(z) = Wz + \mu$, where the weight matrix $W$ controls the model covariance and $\mu$ is the data mean. This yields the sparse model

$p(y \mid z) = \mathcal{N}\big(y;\ W_{\mathcal{O}}\, z + \mu_{\mathcal{O}},\ \sigma^2 \mathbb{I}\big),$   (13)

where the subscript $\mathcal{O}$ selects the rows of $W$ and entries of $\mu$ that correspond to observed entries in $x$. Using the proposed learning strategy (9) and the pseudo-inverse of $W_{\mathcal{O}}$, we can choose the approximating posterior

$q(z \mid y) = \mathcal{N}\big(z;\ \hat{z},\ \Sigma_{z \mid y}\big),$   (14)

where $\hat{z} = W_{\mathcal{O}}^{\dagger}\big(y - \mu_{\mathcal{O}}\big)$. Given learned parameters, we impute missing pixels using the expectation (4):

$\hat{x} = W\hat{z} + \mu = W\, W_{\mathcal{O}}^{\dagger}\big(y - \mu_{\mathcal{O}}\big) + \mu.$   (15)

Intuitively, computing $\hat{z}$ linearly projects the data point onto the low-dimensional subspace using only the observed pixels. The recovered data point is then computed by linearly projecting the estimated low-dimensional representation back into the high-dimensional space. The parameters $\{W, \mu, \sigma^2\}$ can be learned using the expectation maximization algorithm [32]. Extended linear subspace models, such as ones that regularize the weights $W$ or include mixtures of Gaussians, can similarly be seen as instantiations of our model. We used these concepts in proposing the novel sparsity-aware fully connected (or dense) neural network layers above.
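The closed-form linear case (13)-(15) fits in a few lines of numpy; the function name is illustrative.

```python
import numpy as np

def linear_impute(y, obs_idx, W, mu):
    """Linear subspace imputation (eq. 15): project the observed pixels onto
    the subspace, then decode. W is (n, d), mu is (n,)."""
    z_hat, *_ = np.linalg.lstsq(W[obs_idx], y - mu[obs_idx], rcond=None)
    return W @ z_hat + mu   # x_hat = W pinv(W_O) (y - mu_O) + mu
```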

Figure 4: Analysis of the proposed sparse fully connected layer. (a) and (b): subspace reconstruction error for three methods that use a standard fully connected layer and for our method, on the original images and on images with permuted pixels. (c): linear auto-encoder convergence of the sparse reconstruction loss on the validation set. (d): linear auto-encoder full-image reconstruction error. The proposed fully connected layer improves estimation of both the subspace projection and the missing pixel values.

3.5.2 Variational Auto Encoder

Our model and resulting learning strategy relate to the variational auto-encoder and other recent methods using approximations based on neural networks [23, 24, 42]. A key difference is that our generative probabilistic model explicitly captures missing data. Our focus is on describing how this aspect changes generative models, providing a principled derivation of the learning strategy in the presence of missing data, and introducing missing-data-aware network layers. Specifically, in (9) the posterior $q_\psi(z \mid y)$ depends only on the observed entries of $x$, leading to our network building blocks, and the reconstruction term is evaluated at only the observed voxels, enabling parameter learning in unsupervised settings. In our experiments, we investigate how the different parts of these methodological differences affect imputation.

3.5.3 Deep Image Priors and Amortized Inference

Deep Image Priors [48] use a generative neural network $g_\theta(\cdot)$ as a prior for an image $x$, such that $x = g_\theta(z_0)$ for a fixed random input $z_0$. Parameters $\theta$ are optimized separately for each image. In the context of sparse data, this method can be used to synthesize missing pixels by first obtaining $\hat{\theta} = \arg\min_\theta \big\| \big(g_\theta(z_0) - x\big)[\mathcal{O}] \big\|^2$ for some fixed $z_0$, and then computing the full image $\hat{x} = g_{\hat{\theta}}(z_0)$. This strategy requires enough observed pixels in each image $x$ to be able to infer image structure and thus the missing data, and has been demonstrated on large natural images with half of the pixels missing.
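For reference, a TensorFlow sketch of applying a Deep Image Prior to a single sparse image under this formulation; make_net and all other names are placeholders, and hyperparameters are illustrative.

```python
import tensorflow as tf

def dip_impute(x, mask, make_net, steps=1000, lr=1e-3):
    """Deep-image-prior imputation of one image, in the spirit of [48]:
    fit network parameters to the observed pixels only, then read out the
    full output. make_net() builds a fresh generator; z0 is a fixed code.
    """
    net = make_net()                                    # image-specific parameters
    z0 = tf.random.normal((1,) + net.input_shape[1:])   # fixed random input
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            x_hat = net(z0)
            loss = tf.reduce_sum(mask * tf.square(x_hat - x))  # observed pixels only
        grads = tape.gradient(loss, net.trainable_variables)
        opt.apply_gradients(zip(grads, net.trainable_variables))
    return net(z0)                                      # full image g_theta_hat(z0)
```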

In contrast, we focus on severe sparsity in each image, and leverage commonality across a dataset rather than the observed pixels in a single image. Our method learns a single decoder network $\mathrm{dec}_\theta$ able to decode all images in a dataset, rather than learning an image-specific generative network. In this sense, our decoding model can be seen as a collection-wide, global version of the Deep Image Prior, where the embedding $\hat{z}$, estimated by the encoder $\mathrm{enc}_\psi$, specifies the instance to be recovered.

In addition, our model can be seen as amortized inference over a collection of images with missing pixels. The general decoder learned by our model, together with an image-specific embedding $\hat{z}$, acts as a Deep Image Prior for the entire collection of images with common structure.

3.6 Implementation

We implement our method as part of the neuron package [10], available at http://github.com/adalca/neuron, using Keras [6] with a TensorFlow [1] backend.

4 Experiments

We provide a series of experiments evaluating the presented method and its utility in the context of unsupervised sparse datasets.

We first analyze the proposed sparse fully connected layer to illustrate the improvement over traditional strategies in the sparse setting. We then evaluate several models on different image datasets to shed light on how various approaches perform in different settings. We focus on comparing image imputation using linear subspaces with several variants of our non-linear subspace model. Our goal is to demonstrate the potential of straightforward use of our method, rather than proposing an optimized architecture for a specific task. We then show the utility of our model on a clinical image imputation task. Finally, we analyze our algorithm compared to Deep Image Priors.

Figure 5: Imputation results. Left: errors in reconstructing the test ground truth for various datasets, using each imputation method. Dataset names are shortened: MNIST to M, FASHION-MNIST to FM, and MRI Patches to P. The final number in each dataset name indicates the imposed sparsity (e.g., 75% means 75% of the pixels are missing). Right: errors in reconstructing the test ground truth for coronal MRI slices. With the occasional exception of the simplistic conv-zero strategy, our model variants consistently yield improved results compared to the baselines.

4.1 Data

We use three datasets in our experiments. First, the MNIST dataset consists of small (28x28 pixel) 2D images of hand-written digits [26]. We create three variants: MNIST-2, which consists of only the digit 2; MNIST-all, which refers to the original dataset; and MNIST-rot, which contains all of the digits rotated by a random angle between 0 and 360 degrees. Second, we use the FASHION-MNIST dataset, which consists of 28x28 pixel images of 10 types of clothing items [51]. These have more structure in each image compared to MNIST digits. Finally, we use a large-scale, multi-site dataset of 7829 T1-weighted brain MRI scans, compiled from eight publicly available datasets: ADNI [38], OASIS [35], ABIDE [11], ADHD200 [37], MCIC [15], PPMI [36], HABS [7], and Harvard GSP [19]. Acquisition details, subject age ranges, and health conditions differ across datasets. We performed standard pre-processing steps on all scans, including resampling to isotropic voxels and affine spatial normalization of each scan using FreeSurfer [13], and cropped the final images to a common size.

In our experiments, we simulate the patterns of missing pixels, which we then remove from the images. We split each dataset into 50%, 30%, and 20% for train, validation, and test sets respectively, all of which are sparse. The test set in each dataset is only evaluated once. We highlight, however, that our focus is on evaluating the models on the unsupervised task of imputing missing pixels. Below, 90% sparsity means 90% of the data is missing.

Figure 6: Example imputation results. Each row is a separate dataset. The first three columns show the original ground truth image, the linear interpolation, and the random sampling mask. The last seven columns illustrate a linear subspace model and six variants of our method involving sparse fully connected layers. Our model variants show perceptually closer results to the true (unobserved) data.

4.2 Fully Connected Layer

We first analyze the importance of the proposed sparse fully connected layer using the FASHION-MNIST dataset (the other datasets yield comparable results). We compare several strategies for using fully connected layers with missing data; throughout, we evaluate responses before activation and omit the bias term.

We compute the linear projection of fully-observed images onto a low-dimensional subspace of size $d$, with weights initialized using PCA, and treat this as the ground truth projection. We then randomly sub-sample the data at several sparsity levels. We compute several baselines that use a conventional fully connected layer, filling the missing image pixels with 1) zeros, 2) the average value at each pixel location, or 3) linearly interpolated values. Finally, we test our sparsity-aware fully connected layer.

We compare the sparse image projections of each method with the ground truth response using mean squared error. We also consider a setting where spatial consistency is missing, often found in other domains, by randomly permuting image pixels and repeating the experiment.

Figure 7: Example MRI slice imputation results. For each subject's scan, the top row shows the overall slice, and the bottom row zooms in on a portion of the image. The first three columns show the original ground truth image, the linear interpolation, and the random sampling mask. The last seven columns illustrate a linear subspace model and six variants of our method involving sparse fully connected layers. All proposed models show perceptually similar results to the true (unobserved) data.

Figure 4(a,b) shows that the sparsity-aware fully connected layer yields significantly improved projections, with spatial interpolation the next best strategy. The random permutation dramatically degrades the spatial interpolation baseline, but does not affect the other methods, since they treat dimensions independently.

In a second experiment, we learn a linear auto-encoder that uses the linear encoding methods described above, and a fully connected decoding layer. Figure 4(c,d) reports the validation loss (on observed pixels only), and final validation reconstruction error (on all pixels). The results show that using sparsity-aware fully connected layers leads to improved reconstructions, promising to improve network performance in the context of sparse input data.

4.3 Random Sparsity in Image Datasets

In this section, we simulate random missing pixels for sparsity factors of 75% and 90% in images from MNIST-2, MNIST-all, MNIST-rot, and FASHION-MNIST. In addition, to evaluate more complex covariance structure in images, we obtain random 2D 64x64 patches from the brain MRI dataset.

4.3.1 Baselines

In this section, our first baseline is the linear subspace model of (13), which is often used in sparse settings [32]. We implement both stochastic gradient descent (SGD) based optimization and expectation maximization [32]. For small datasets and models we find the two implementations behave comparably, but expectation maximization becomes intractable for large datasets and models. We therefore report results from the SGD implementation. In addition, we implement a (linear) sparse dictionary learning method using an L0-norm to encourage a sparse latent representation.

4.3.2 Model Variants

We evaluate several variants of our method using sparsity-aware implementations, from simpler filling strategies to our full model:


  • conv-zero fills missing image pixels with zeros and uses a normal convolutional encoder and the sparse loss (9)

  • conv-mean fills missing image pixels with the pixel-wise dataset mean and uses a normal convolutional encoder and the sparse loss

  • conv-interp linearly interpolates the missing pixels of an input image and uses a normal convolutional encoder and the sparse loss

  • conv-interp-wmask adds the incomplete data sampling mask as a model input channel to conv-interp

  • conv-sparse uses a sparsity-aware, fully convolutional encoder and decoder architecture, each consisting of five residual blocks of two sparse convolutions with 3x3 kernels, ReLU activations, and a 2x2 max-pooling or upsampling layer, and uses the sparse loss

  • conv-FC-sparse adds a sparsity-aware fully connected layer to the end of the encoder and the start of the decoder, enabling covariances across the entire image. We use an encoding of size 10 for MNIST and FASHION-MNIST, and 100 for the MRI patches.

Figure 8: Comparison with Deep Image Priors. Top: example instances for the MNIST-2, FASHION-MNIST-1, and sparse MRI datasets. The sparsity of the MRI images is determined by the slice spacing. Bottom: MSE plots for each dataset. Our method (orange) consistently achieves better reconstruction by leveraging the entire dataset of images.

4.3.3 Results

Figures 5 and 6 illustrate the results. For the single-digit dataset, all methods, including the linear subspace model, are able to exploit covariation across the dataset despite significant sparsity. However, as more variability is present in the full and rotated datasets, the linear subspace methods are unable to capture the image structure and properly impute the images. In contrast, non-linear subspace models are able to capture complex covariances, even in extreme sparsity situations. Using a sparsity-driven VAE after filling the missing pixels with linear interpolation or mean-filling performs better than the linear models, and the explicitly sparsity-aware convolutional encoder performs best among all variants.

In general, several design decisions are possible for a given method, including varying encoding size, noise parameters and architecture designs. Here, our goal is to illustrate that in various datasets and sparsity levels, our method promises to dramatically improve imputation by employing a non-linear subspace.

4.4 Brain MRI Slices

We demonstrate the utility of our method on sparse brain MRI acquisition. In many clinical settings, scanning time is limited, leading to severely under-sampled medical scans; for example, often only every sixth 2D slice is acquired in a 3D MRI scan. Moreover, because of the variability in subject head positioning in the scanner, the sparsity patterns appear at different angles. In this experiment we evaluate our method using high resolution MRI data, which we downsample to simulate the sparse acquisition protocol. Specifically, we simulate a sparse scan for each subject by removing five out of every six slices at an arbitrarily rotated angle, and then rotating the subject back to the common reference frame. We extract the middle coronal slice of each subject, perpendicular to the acquisition direction, and carry out 2D imputation on it. We evaluate the methods described in the previous experiment. Because of the size of the images and the dataset, we use a subspace encoding of size 200 for the conv-FC model.

Figure 5 (right) summarizes reconstruction results across the entire test set, and Figure 7 illustrates example imputations of MRI slices. Spatial linear interpolation introduces several artifacts, and the linear subspace reconstruction is unable to recover detailed anatomy. In contrast, the proposed variants achieve significantly better reconstructions, demonstrating the promise of the proposed methods on clinical MR images.

4.5 Evaluation with Deep Image Priors

We evaluate our model, focusing on the conv-sparse variant, against Deep Image Priors (DIP) [48]. We applied both methods to sparse data, as described in Section 3.5. For a direct comparison, we use the same decoder architecture, described in our model variants, for both the proposed model and DIP. We use the MNIST-2 and FASHION-MNIST-1 datasets in various high sparsity scenarios, as well as the sparse brain MRI scans used in the previous section. Figure 8 summarizes the results. As the sparsity level increases (fewer pixels observed), the Deep Image Prior strategy lacks sufficient data in a single image to synthesize a reasonable image. In contrast, our method is able to leverage the entire dataset of sparse images with common structure to learn covariation among pixels, enabling better imputation of missing pixels. Our method requires 18.6 ± 13.7 ms to reconstruct an MRI brain slice, since we only need to evaluate the network. Deep Image Priors require learning network parameters, which took 36.5 ± 0.2 seconds per image in our dataset. (We report the runtime of Deep Image Prior for 1000 iterations, which we found to be necessary for good results.)

5 Conclusion

In this paper, we introduce a general probabilistic model to characterize sparse high dimensional imaging data by a deep non-linear subspace. We provide a principled derivation of the learning strategy in the presence of missing data, and introduce missing data aware network building blocks. We describe how missing data changes existing subspace models like the VAE, and how our model can be seen as a global version of deep image priors. Given a learned non-linear model, we describe how imputation can be achieved.

In our experiments, we demonstrate the importance of a novel sparsity-aware fully connected layer. We then show that our model exploits intra-image structure, as well as structure across a dataset, to yield superior imputation to fully convolutional architectures. Finally, we show the utility of our method using a real-world problem involving medical images.

References

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
  • [2] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In Proc. of 27th annual conference on Computer graphics and interactive techniques, pages 417–424. ACM Press, 2000.
  • [3] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.
  • [4] Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu. Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8(1):6085, 2018.
  • [5] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia. Multi-view 3d object detection network for autonomous driving. In IEEE CVPR, volume 1, 2017.
  • [6] F. Chollet. Keras. https://github.com/fchollet/keras, 2015.
  • [7] A. Dagley, M. LaPoint, W. Huijbers, T. Hedden, D. G. McLaren, J. P. Chatwal, K. V. Papp, R. E. Amariglio, D. Blacker, D. M. Rentz, et al. Harvard aging brain study: dataset and accessibility. NeuroImage, 2015.
  • [8] A. V. Dalca, K. Bouman, W. Freeman, M. Sabuncu, N. Rost, and P. Golland. Medical image imputation from image collections. IEEE TMI: Transactions on Medical Imaging, 38(2):504–514, 2019.
  • [9] A. V. Dalca, K. L. Bouman, W. T. Freeman, N. S. Rost, M. R. Sabuncu, and P. Golland. Population based image imputation. In International Conference on Information Processing in Medical Imaging, pages 659–671. Springer, 2017.
  • [10] A. V. Dalca, J. Guttag, and M. R. Sabuncu. Anatomical priors in convolutional networks for unsupervised biomedical segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9290–9299, 2018.
  • [11] A. Di Martino et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Molecular psychiatry, 19(6):659–667, 2014.
  • [12] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence, 38(2):295–307, 2016.
  • [13] B. Fischl. Freesurfer. Neuroimage, 62(2):774–781, 2012.
  • [14] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. IEEE Computer graphics and Applications, 22(2):56–65, 2002.
  • [15] R. L. Gollub, J. M. Shoemaker, M. D. King, T. White, S. Ehrlich, S. R. Sponheim, V. P. Clark, J. A. Turner, B. A. Mueller, V. Magnotta, et al. The MCIC collection: a shared repository of multi-modal, multi-site brain image data from a clinical investigation of schizophrenia. Neuroinformatics, 11(3):367–388, 2013.
  • [16] B. Graham. Spatially-sparse convolutional neural networks. arXiv preprint arXiv:1409.6070, 2014.
  • [17] B. Graham and L. van der Maaten. Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307, 2017.
  • [18] S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143, 2015.
  • [19] A. J. Holmes, M. O. Hollinshead, T. M. O’Keefe, V. I. Petrov, G. R. Fariello, L. L. Wald, B. Fischl, B. R. Rosen, R. W. Mair, J. L. Roffman, et al. Brain genomics superstruct project initial data release with structural, functional, and behavioral measures. Scientific data, 2, 2015.
  • [20] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10(1):25–37, 2000.
  • [21] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999.
  • [22] J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1646–1654, 2016.
  • [23] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
  • [24] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
  • [25] A. Lakhina, M. Crovella, and C. Diot. Diagnosing network-wide traffic anomalies. In ACM SIGCOMM Computer Communication Review, volume 34, pages 219–230. ACM, 2004.
  • [26] Y. LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
  • [27] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.
  • [28] B. Li, T. Zhang, and T. Xia. Vehicle detection from 3d lidar using fully convolutional network. arXiv preprint arXiv:1608.07916, 2016.
  • [29] X. Li and J. She. Collaborative variational autoencoder for recommender systems. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 305–314. ACM, 2017.
  • [30] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76–80, 2003.
  • [31] R. J. Little and D. B. Rubin. The analysis of social science data with missing values. Sociological Methods & Research, 18(2-3):292–326, 1989.
  • [32] R. J. Little and D. B. Rubin. Statistical analysis with missing data, volume 333. John Wiley & Sons, 2014.
  • [33] B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806–814, 2015.
  • [34] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.
  • [35] D. S. Marcus, T. H. Wang, J. Parker, J. G. Csernansky, J. C. Morris, and R. L. Buckner. Open access series of imaging studies (oasis): cross-sectional mri data in young, middle aged, nondemented, and demented older adults. Journal of cognitive neuroscience, 19(9):1498–1507, 2007.
  • [36] K. Marek, D. Jennings, S. Lasch, A. Siderowf, C. Tanner, T. Simuni, C. Coffey, K. Kieburtz, E. Flagg, S. Chowdhury, et al. The parkinson progression marker initiative (ppmi). Progress in neurobiology, 95(4):629–635, 2011.
  • [37] M. P. Milham, D. Fair, M. Mennes, S. H. Mostofsky, et al. The ADHD-200 consortium: a model to advance the translational potential of neuroimaging in clinical neuroscience. Frontiers in systems neuroscience, 6:62, 2012.
  • [38] S. G. Mueller, M. W. Weiner, L. J. Thal, R. C. Petersen, C. R. Jack, W. Jagust, J. Q. Trojanowski, A. W. Toga, and L. Beckett. Ways toward an early diagnosis in Alzheimer’s disease: the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Alzheimer’s & Dementia, 1(1):55–66, 2005.
  • [39] A. Nazabal, P. M. Olmos, Z. Ghahramani, and I. Valera. Handling incomplete heterogeneous data using vaes. arXiv preprint arXiv:1807.03653, 2018.
  • [40] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Computer Vision and Pattern Recognition, pages 1717–1724. IEEE, 2014.
  • [41] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • [42] R. Ranganath, S. Gerrish, and D. Blei. Black box variational inference. In Artificial Intelligence and Statistics, pages 814–822, 2014.
  • [43] F. Rousseau. A non-local approach for image super-resolution using intermodality priors. Medical image analysis, 14(4):594–605, 2010.
  • [44] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pages 791–798. ACM, 2007.
  • [45] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web, pages 285–295. ACM, 2001.
  • [46] S. Sedhain, A. K. Menon, S. Sanner, and L. Xie. Autorec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, pages 111–112. ACM, 2015.
  • [47] J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger. Sparsity invariant cnns. arXiv preprint arXiv:1708.06500, 2017.
  • [48] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. CVPR, 2018.
  • [49] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
  • [50] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
  • [51] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
  • [52] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In Advances in neural information processing systems, pages 341–349, 2012.
  • [53] R. Yeh, C. Chen, T. Y. Lim, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
  • [54] J. Yoon, J. Jordon, and M. van der Schaar. Gain: Missing data imputation using generative adversarial nets. arXiv preprint arXiv:1806.02920, 2018.
  • [55] C. Zhao, A. Carass, B. E. Dewey, J. Woo, J. Oh, P. A. Calabresi, D. S. Reich, P. Sati, D. L. Pham, and J. L. Prince. A deep learning based anti-aliasing self super-resolution algorithm for MRI. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 100–108. Springer, 2018.