Code for A Bayesian Perspective on the Deep Image Prior (CVPR 2019)
The deep image prior was recently introduced as a prior for natural images. It represents images as the output of a convolutional network with random inputs. For "inference", gradient descent is performed to adjust network parameters to make the output match observations. This approach yields good performance on a range of image reconstruction tasks. We show that the deep image prior is asymptotically equivalent to a stationary Gaussian process prior in the limit as the number of channels in each layer of the network goes to infinity, and derive the corresponding kernel. This informs a Bayesian approach to inference. We show that by conducting posterior inference using stochastic gradient Langevin dynamics we avoid the need for early stopping, which is a drawback of the current approach, and improve results for denoising and inpainting tasks. We illustrate these intuitions on a number of 1D and 2D signal reconstruction tasks.
It is well known that deep convolutional networks trained on large datasets provide a rich hierarchical representation of images. Surprisingly, several works have shown that convolutional networks with random parameters can also encode non-trivial image properties. For example, second-order statistics of filter responses of random convolutional networks are effective for style transfer and synthesis tasks. On small datasets, features extracted from random convolutional networks can work just as well as those from trained networks. Along these lines, the "deep image prior" proposed by Ulyanov et al. showed that the output of a suitably designed convolutional network on random inputs tends to be smooth and induces a natural image prior, so that the search over natural images can be replaced by gradient descent to find network parameters and inputs that minimize a reconstruction error of the network output. Remarkably, no prior training is needed and the method operates by initializing the parameters randomly.
Our work provides a novel Bayesian view of the deep image prior. We prove that a convolutional network with random parameters operating on a stationary input, e.g., white noise, approaches a two-dimensional Gaussian process (GP) with a stationary kernel in the limit as the number of channels in each layer goes to infinity (Theorem 1). While prior work [19, 31, 18, 3, 20] has investigated the GP behavior of infinitely wide networks and convolutional networks, our work is the first to analyze the spatial covariance structure induced by a convolutional network on stationary inputs. We analytically derive the kernel as a function of the network architecture and input distribution by characterizing the effects of convolutions, non-linearities, up-sampling, down-sampling, and skip connections on the spatial covariance. These insights could inform choices of network architecture for designing 1D or 2D priors.
We then use a Bayesian perspective to address drawbacks of current estimation techniques for the deep image prior. Estimating the parameters of a deep network from a single image poses a huge risk of overfitting. In prior work the authors relied on early stopping to avoid this. Bayesian inference provides a principled way to avoid overfitting by adding suitable priors over the parameters and then using posterior distributions to quantify uncertainty. However, posterior inference with deep networks is challenging. One option is to compute the posterior of the limiting GP. For small networks with enough channels, we show this closely matches the deep image prior, but is computationally expensive. Instead, we conduct posterior sampling based on stochastic gradient Langevin dynamics (SGLD), which is both theoretically well founded and computationally efficient, since it is based on standard gradient descent. We show that posterior sampling using SGLD avoids the need for early stopping and performs better than vanilla gradient descent on image denoising and inpainting tasks (see Figure 1). It also allows us to systematically compute variances of estimates as a measure of uncertainty. We illustrate these ideas on a number of 1D and 2D reconstruction tasks.
[Figure 1: (a) MSE vs. iteration, (b) final MSE vs. …, (c) inferred mean, (d) inferred variance]
Image priors. Our work analyzes the deep image prior, which represents an image as the output of a convolutional network f(z; θ) with parameters θ on a random input z. Given a noisy target y, the denoised image is obtained by minimizing the reconstruction error between f(z; θ) and y over θ and z. The approach starts from initial values of θ and z drawn i.i.d. from a zero-mean Gaussian distribution and optimizes the objective through gradient descent, relying on early stopping to avoid overfitting (see Figure 1). Ulyanov et al. showed that the prior is competitive with state-of-the-art learning-free approaches, such as BM3D, for image denoising, super-resolution, and inpainting tasks. The prior encodes hierarchical self-similarities of the kind that dictionary-based approaches and non-local techniques such as BM3D and non-local means exploit. The architecture of the network plays a crucial role: networks with several layers were used for inpainting tasks, while those with skip connections were used for denoising. Our work shows that these networks induce priors that correspond to different smoothing "scales".
Gaussian processes (GPs).
A Gaussian process is an infinite collection of random variables, any finite subset of which is jointly Gaussian distributed. A GP is commonly viewed as a prior over functions. Let T be an index set (e.g., the real line or the plane), let μ be a real-valued mean function, and let K be a non-negative definite kernel (covariance function) on T. If f ∼ GP(μ, K), then, for any finite set of indices t1, …, tn, the vector (f(t1), …, f(tn)) is Gaussian distributed with mean vector (μ(t1), …, μ(tn)) and covariance matrix with entries K(ti, tj). GPs have a long history in spatial statistics and geostatistics. In machine learning, interest in GPs was motivated by their connections to neural networks (see below). GPs can be used for general-purpose Bayesian regression [31, 22], classification, and many other applications.
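As a concrete illustration of the definition above, the following sketch draws functions from a zero-mean GP prior with a stationary RBF kernel. The grid size, length scale, and jitter value are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def rbf_kernel(xs, length_scale=0.2, variance=1.0):
    """Stationary RBF (squared-exponential) covariance matrix on a 1D grid."""
    d = xs[:, None] - xs[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 100)
K = rbf_kernel(xs)

# Draw three functions from the zero-mean GP prior; a small jitter on the
# diagonal keeps the covariance matrix numerically positive definite.
samples = rng.multivariate_normal(np.zeros(len(xs)), K + 1e-8 * np.eye(len(xs)), size=3)
print(samples.shape)  # (3, 100)
```

Any finite grid of index points yields an ordinary multivariate Gaussian, which is exactly the defining property quoted above.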
Deep networks and GPs. Neal showed that a two-layer network converges to a Gaussian process as its width goes to infinity. Williams provided expressions for the covariance function of networks with sigmoid and Gaussian transfer functions. Cho and Saul presented kernels for the ReLU and Heaviside step non-linearities and investigated their effectiveness with kernel machines. Recently, several works [13, 18] have extended these results to deep networks and derived covariance functions for the resulting GPs. Similar analyses have also been applied to convolutional networks. Garriga-Alonso et al. investigated the GP behavior of convolutional networks with residual layers, while Borovykh analyzed the covariance functions in the limit as the filter width goes to infinity. Novak et al. evaluated the effect of pooling layers on the resulting GP. Much of this work has been applied to prediction tasks, where, given a dataset, a covariance function induced by a deep network is used to estimate the posterior using standard GP machinery. In contrast, we view a convolutional network as a spatial random process over the image coordinate space and study the induced covariance structure.
Bayesian inference with deep networks. It has long been recognized that Bayesian learning of neural network weights would be desirable [15, 19], e.g., to prevent overfitting and quantify uncertainty. Indeed, this was a motivation in the original work connecting neural networks and GPs. Performing MAP estimation with respect to a prior on the weights is computationally straightforward and corresponds to regularization. However, the computational challenges of full posterior inference are significant. Early works used MCMC or the Laplace approximation [7, 15] but were much slower than basic learning by backpropagation. Several variational inference (VI) approaches have been proposed over the years [12, 1, 10, 2]. Recently, dropout was shown to be a form of approximate Bayesian inference. The approach we will use is based on stochastic gradient Langevin dynamics (SGLD), a general-purpose method to convert SGD into an MCMC sampler by adding noise to the iterates. Li et al. describe a preconditioned SGLD method for deep networks.
Previous work focused on the covariance of (scalar-valued) network outputs for two different inputs (i.e., images). For the deep image prior, we are interested in the spatial covariance structure within each layer of a convolutional network. As a basic building block, we consider a multi-channel input image transformed through a convolutional layer, an elementwise non-linearity, and then a second convolution to yield a new multi-channel "image", and derive the limiting distribution of a representative output channel as the number of input channels and filters go to infinity. First, we derive the limiting distribution when the input is fixed, which mimics derivations from previous work. We then let the input be a stationary random process and show how the spatial covariance structure propagates to the output, which is our main result. We then apply this argument inductively to analyze multi-layer networks, and also analyze other network operations such as upsampling, downsampling, etc.
For simplicity, consider an input image x with C channels and only one spatial dimension. The derivations are essentially identical for two or more spatial dimensions. The first layer of the network has H filters and the second layer has one filter (corresponding to a single channel of the output of this layer). The output z of this network at each position is the weighted sum, over the H filters, of the non-linearity applied to the first-layer convolution responses.
By linearity of expectation and the independence of the weights in the two layers, z has a mean of zero. The central limit theorem (CLT) can be applied when the non-linearity is bounded to show that z converges in distribution to a Gaussian as the number of filters H goes to infinity and the second-layer weights are scaled as 1/√H. Note that the weights don't need to be Gaussian for the CLT to apply, but we will use this property to derive the covariance. The covariance derivation uses the independence of the weights in the two layers and the fact that the second-layer filter is drawn from a zero-mean Gaussian. Let x̄(i) be the flattened vector of elements of x within the filter window at position i, and similarly x̄(j) at position j. Then the covariance reduces to an expectation over the non-linearity applied to x̄(i) and x̄(j).
Williams showed that this expectation can be computed analytically for various transfer functions. For example, for the error-function (sigmoidal) transfer function, the expectation has a closed form in terms of the covariance of x̄(i) and x̄(j). Williams also derived kernels for the Gaussian transfer function. For the ReLU non-linearity, i.e., max(0, ·), Cho and Saul derived the expectation as the arc-cosine kernel, proportional to ‖x̄(i)‖ ‖x̄(j)‖ (sin θ + (π − θ) cos θ), where θ is the angle between x̄(i) and x̄(j).
Thus, with the weights of each layer scaled by the inverse square root of its fan-in, for any input the output of our basic convolution-nonlinearity-convolution building block converges to a Gaussian distribution with zero mean and the covariance derived above.
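This convergence can be checked numerically. The sketch below repeatedly samples the conv-ReLU-conv block at a single spatial position with random Gaussian weights; the 1/√(Ck) and 1/√H scalings and the specific sizes are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, k = 16, 256, 3              # input channels, filters, filter width

x = rng.standard_normal((C, k))   # a fixed input patch under one filter window

def block_output(x):
    """One random draw of the conv -> ReLU -> conv block at a single position.
    First-layer filters are scaled by 1/sqrt(C*k), the second layer by 1/sqrt(H)."""
    w = rng.standard_normal((H, C, k)) / np.sqrt(C * k)
    v = rng.standard_normal(H) / np.sqrt(H)
    h = np.tensordot(w, x, axes=([1, 2], [0, 1]))   # (H,) pre-activations
    return v @ np.maximum(h, 0.0)                   # ReLU, then second conv

draws = np.array([block_output(x) for _ in range(2000)])
# As H grows, the output approaches a zero-mean Gaussian (CLT over filters).
print(abs(draws.mean()) < 0.1)  # True
```

A histogram of `draws` would look Gaussian; the empirical mean is near zero, as the argument above predicts.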
We now consider the case where the channels of the input are drawn i.i.d. from a stationary distribution. A signal is stationary (in the weak or wide sense) if its mean is position-invariant and its covariance is shift-invariant, i.e., the covariance K(i, j) depends only on the offset i − j. An example of a stationary distribution is white noise, where each element is drawn i.i.d. from a zero-mean Gaussian, resulting in a zero mean and a covariance that vanishes at all nonzero offsets. Note that the input for the deep image prior is drawn from this distribution.
Theorem 1. Let each channel of the input be drawn independently from a zero-mean stationary distribution with covariance function K. Then the output of a two-layer convolutional network with the sigmoid non-linearity converges to a zero-mean stationary Gaussian process as the number of input channels and filters go to infinity sequentially, with a stationary covariance determined by K.
The full proof is included in the Appendix and is obtained by applying the continuous mapping theorem to the formula for the sigmoid non-linearity. The theorem implies that the limiting distribution of the output is a stationary GP if the input is stationary.
Assume the same conditions as Theorem 1, except that the non-linearity is replaced by ReLU. Then the output converges to a zero-mean stationary Gaussian process whose covariance, written in terms of the angle between the inputs at the two offsets, follows the arc-cosine kernel formula of Cho and Saul.
This can be proved by applying the recursive formula for the ReLU non-linearity. One interesting observation is that, for both non-linearities, the output covariance at a given offset depends only on the input covariance at the same offset and on the input variance (the covariance at offset zero).
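A minimal sketch of this covariance map for the ReLU case, using the arc-cosine formula of Cho and Saul. The overall scaling here is an assumption of the sketch; in a network it also depends on the filter weights and widths.

```python
import numpy as np

def relu_cov_map(k_tau, k_0):
    """Map the input stationary covariance at offset tau (with variance k_0)
    to the output covariance of the conv -> ReLU -> conv block, via the
    arc-cosine kernel of Cho & Saul (up to a filter-dependent scaling)."""
    cos_t = np.clip(np.asarray(k_tau, dtype=float) / k_0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (k_0 / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * cos_t)

# At offset zero the output variance is half the input variance,
# matching E[relu(z)^2] = k_0 / 2 for zero-mean Gaussian z.
print(round(float(relu_cov_map(1.0, 1.0)), 6))  # 0.5
```

Note the property stated in the text: the output value at offset τ depends only on the input covariance at τ and on the variance k_0, so the map can be applied pointwise across offsets and iterated layer by layer.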
Two or more dimensions. The results of this section hold without modification, and with essentially the same proofs, for inputs with multiple channels and two or more spatial dimensions, by letting the positions and offsets be vectors of indices.
So far we have shown that the output of our basic two-layer building block converges to a zero-mean stationary Gaussian process as the number of input channels and then the number of filters go to infinity. Below we discuss the effect of adding more layers to the network.
Convolutional layers. A proof of GP convergence for deep networks was presented in prior work, including the case of transfer functions that can be bounded by a linear envelope, such as ReLU. In the convolutional setting, this implies that the output converges to a GP as the number of filters in each layer simultaneously goes to infinity. The covariance function can be obtained by recursively applying Theorem 1 and Lemma 1; stationarity is preserved at each layer.
Bias term. Our analysis holds when a bias term sampled from a zero-mean Gaussian is added to each convolution output. In this case the GP is still zero-mean, but the covariance function gains an additive constant equal to the bias variance, and thus remains stationary.
Upsampling and downsampling layers. Convolutional networks use upsampling and downsampling layers to induce hierarchical representations. It is easy to see that downsampling (decimating) the signal preserves stationarity, since the covariance of the decimated signal at offset τ equals the input covariance at offset sτ, where s is the downsampling factor. Downsampling by average pooling also preserves stationarity: the resulting kernel can be obtained by applying a uniform filter corresponding to the size of the pooling window, which yields a stationary signal, followed by decimation. However, upsampling in general does not preserve stationarity. Therrien describes the conditions under which upsampling a signal with a linear filter maintains stationarity; in particular, the upsampling filter must be band-limited, such as an ideal low-pass (sinc) filter. If stationarity is preserved, the covariance in the next layer can again be computed with the recursion above.
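The decimation property, that the downsampled covariance at offset τ equals the input covariance at offset sτ, can be verified empirically. The sketch below builds a stationary 1D signal by box-filtering white noise (an illustrative choice of stationary process) and compares autocovariances before and after downsampling by a factor of 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_cov(x, max_lag):
    """Empirical autocovariance of a 1D signal at lags 0..max_lag-1."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - t] * x[t:]) for t in range(max_lag)])

# Stationary signal: white noise smoothed with a width-8 box filter
# (normalized so the marginal variance stays near 1).
z = np.convolve(rng.standard_normal(200_000), np.ones(8) / np.sqrt(8), mode="valid")

k_full = empirical_cov(z, 16)
k_down = empirical_cov(z[::2], 8)        # decimate by s = 2

# K_down(tau) should match K(2 * tau): decimation preserves stationarity.
print(np.allclose(k_down, k_full[::2], atol=0.05))  # True
```

The same check fails for naive (e.g., nearest-neighbor) upsampling, whose output covariance depends on the phase of the position, which is the non-stationarity discussed above.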
Skip connections. Modern convolutional networks have skip connections, where the outputs from two layers are added or concatenated. In both cases, if the two combined signals are stationary GPs, then so is the result.
Let's revisit the deep image prior for a denoising task. Given a noisy image y, the deep image prior minimizes the reconstruction error between the network output f(z; θ) and y, where z is the input and θ are the parameters of an appropriately chosen convolutional network. Both z and θ are initialized randomly from a prior distribution. Optimization is performed using stochastic gradient descent (SGD) over θ and z (optionally z is kept fixed), relying on early stopping to avoid overfitting (see Figures 1 and 2). The denoised image is obtained as f(z; θ).
The inference procedure can be interpreted as a maximum likelihood estimate (MLE) under a Gaussian noise model: y = f(z; θ) + ε, where ε is zero-mean Gaussian noise. Bayesian inference suggests we add a suitable prior over the parameters and reconstruct the image as the posterior mean of f(z; θ). The obvious computational challenge is computing this posterior average. An intermediate option is maximum a posteriori (MAP) inference, where the argmax of the posterior is used. However, both MLE and MAP do not capture parameter uncertainty and can overfit to the data.
In standard MCMC the integral is replaced by a sample average over a Markov chain that converges to the true posterior. However, convergence with MCMC techniques is generally slower than backpropagation for deep networks. Stochastic gradient Langevin dynamics (SGLD) provides a general framework to derive an MCMC sampler from SGD by injecting Gaussian noise into the gradient updates. The SGLD update takes a gradient step on the log-posterior, scaled by half the step size, and adds zero-mean Gaussian noise with variance equal to the step size. Under suitable conditions on the step sizes, e.g., that their sum diverges while the sum of their squares converges, it can be shown that the iterates converge to the posterior distribution. The log-prior term is implemented as weight decay.
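A minimal SGLD sketch on a toy one-dimensional posterior (a unit-variance Gaussian centred at 2); the step size and iteration counts are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_post(theta):
    # Toy log-posterior: gradient of log N(theta; 2, 1).
    return -(theta - 2.0)

step, n_iter, burn_in = 0.05, 20_000, 2_000
theta, samples = 0.0, []
for t in range(n_iter):
    # Half-step on the log-posterior gradient plus noise with variance = step.
    theta += 0.5 * step * grad_log_post(theta) + np.sqrt(step) * rng.standard_normal()
    if t >= burn_in:
        samples.append(theta)

print(np.mean(samples))  # close to 2.0: posterior mean recovered, no early stopping
```

The chain never "overfits": it fluctuates around the posterior rather than collapsing onto a point estimate, which is exactly the behavior exploited for the deep image prior below.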
Our strategy for posterior inference with the deep image prior thus adds Gaussian noise to the gradients at each step to estimate the posterior sample averages after a “burn in” phase. As seen in Figure 1(a), due to the Gaussian noise in the gradients, the MSE with respect to the noisy image does not go to zero, and converges to a value that is close to the noise level as seen in Figure 1(b). It is also important to note that MAP inference alone doesn’t avoid overfitting. Figure 2 shows a version where weight decay is used to regularize parameters, which also overfits to the noise. Further experiments with inference procedures for denoising are described in Section 5.2.
We first study the effect of the architecture and input distribution on the covariance function of the stationary GP using 1D convolutional networks. We consider two architectures: (1) AutoEncoder, where conv + downsampling blocks are followed by conv + upsampling blocks, and (2) Conv, consisting of convolutional blocks without any upsampling or downsampling. We use a ReLU non-linearity after each conv layer in both cases. We also vary the input covariance. Each channel of the input is first sampled i.i.d. from a zero-mean Gaussian. A simple way to obtain inputs whose spatial covariance is a Gaussian of a desired standard deviation is to then spatially filter the channels with a Gaussian filter of a corresponding standard deviation.
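A sketch of this input construction: white-noise channels convolved with a Gaussian filter. The normalization keeps the marginal variance near one, and the resulting spatial covariance is a Gaussian whose width is √2 times the filter's (the autocorrelation of a Gaussian filter of std σ is a Gaussian of std σ√2); the specific sizes are illustrative.

```python
import numpy as np

def correlated_input(n_channels, length, sigma, rng):
    """White-noise channels filtered with a Gaussian of std sigma, giving a
    spatial covariance that is approximately Gaussian with std sigma*sqrt(2)."""
    t = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    g = np.exp(-0.5 * (t / sigma) ** 2)
    g /= np.sqrt(np.sum(g ** 2))          # keep the output variance near 1
    z = rng.standard_normal((n_channels, length + len(t) - 1))
    return np.array([np.convolve(zc, g, mode="valid") for zc in z])

rng = np.random.default_rng(0)
x = correlated_input(64, 256, sigma=4.0, rng=rng)
print(x.shape)  # (64, 256)
```

Sweeping `sigma` reproduces the experiment's knob: larger filters give longer-range input covariances, which the network then transforms according to the kernel recursion.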
Figure 3 shows the covariance function induced by varying the input length scale and the depth of the two architectures (Figure 3a-b). We empirically estimated the covariance function by sampling many networks and inputs from the prior distribution. The covariance function for the convolutional-only architecture is also calculated using the recursion in Equation 7. For both architectures, increasing the input length scale and the depth introduces longer-range spatial covariances. For the auto-encoder, upsampling induces longer-range interactions even when the input length scale is zero, shedding some light on the role of upsampling in the deep image prior. Our network architectures have 128 filters; even so, the match between the empirical covariance and the analytic one is quite good, as seen in Figure 3(b).
Figure 3(c) shows samples drawn from the prior of the convolutional-only architecture. Figure 3(d) shows the posterior mean and variance with SGLD inference where we randomly dropped 90% of the data from a 1D signal. Changing the covariance influences the mean and variance in a way that is qualitatively similar to choosing the scale of a stationary kernel in a GP: larger scales (a bigger input length scale or greater depth) lead to smoother interpolations.
[Figure 3: (a) AE, (b) Conv, (c) prior, (d) posterior]
Throughout our experiments we adopt the network architectures reported in the deep image prior work for image denoising and inpainting tasks, for a direct comparison with their results. These architectures are 5-layer auto-encoders with skip connections, and each layer contains 128 channels. We consider images from standard image reconstruction datasets [6, 11]. For inference we use one fixed learning rate for image denoising and another for image inpainting. We compare the following inference schemes:
SGD+Early: Vanilla SGD with early stopping.
SGD+Early+Avg: Averaging the predictions with exponential sliding window of the vanilla SGD.
SGD+Input+Early: Perturbing the input with additive zero-mean Gaussian noise at each learning step of SGD.
SGD+Input+Early+Avg: Averaging the predictions of the earlier approach with an exponential window.
SGLD: Averaging posterior samples with SGLD inference after burn-in iterations.
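The "+Avg" schemes above can be sketched as a simple exponential moving average over the iterates' outputs; the smoothing factor below is a hypothetical value, not taken from the paper.

```python
import random

def running_average(avg, new, beta=0.99):
    """Exponential sliding-window average of reconstructions (the +Avg schemes).
    beta is an illustrative smoothing factor."""
    return new if avg is None else beta * avg + (1.0 - beta) * new

# Noisy scalar "reconstructions" around a true value of 1.0: the running
# average suppresses the per-iterate noise.
random.seed(0)
avg = None
for _ in range(5000):
    avg = running_average(avg, 1.0 + random.gauss(0.0, 0.5))
print(abs(avg - 1.0) < 0.2)  # True
```

In the experiments, the averaged quantity is the network output image rather than a scalar, but the update rule is the same elementwise.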
We manually set the stopping iteration in the first four schemes to the one with essentially the best reconstruction error; note that this is an oracle scheme and cannot be implemented in real reconstruction settings. For the image denoising task, the stopping iteration is set to 500 for the first two schemes and 1800 for the third and fourth. For the image inpainting task, this parameter is set to 5000 and 11000 respectively.
The third and fourth variants were described in the supplementary material and released codebase of the deep image prior. We found that injecting noise into the input during inference consistently improves results. However, as observed by the original authors, regardless of the noise variance, the network is able to drive the objective to zero, i.e., it overfits to the noise. This is also illustrated in Figure 1 (a-b).
Since the input can be considered part of the parameters, adding noise to the input during inference can be thought of as approximate SGLD. It is also not beneficial to optimize the input in the objective, so it is kept constant (though adding noise still helps). SGLD inference adds noise to all parameters, θ and z, sampled from a Gaussian distribution with variance scaled by the learning rate, as described in Equation 4. We used 7K burn-in iterations and 20K training iterations for the image denoising task, and 20K and 30K for the image inpainting task. Running SGLD longer doesn't improve results further. The weight-decay hyper-parameter for SGLD is set inversely proportional to the number of pixels in the image, equal to 5e-8 for a 1024×1024 image. For the baseline methods, we did not use weight decay, which, as seen in Figure 2, doesn't influence results for SGD.
We first consider the image denoising task using various inference schemes. Each method is evaluated on a standard dataset for image denoising, which consists of 9 colored images corrupted with additive Gaussian noise.
Figure 2 presents the peak signal-to-noise ratio (PSNR) values with respect to the clean image over the optimization iterations. This experiment is on the "peppers" image from the dataset, as seen in Figure 1. The performance of the SGD variants (red, black and yellow curves) reaches a peak but gradually degrades. By contrast, samples using SGLD (blue curves) are stable with respect to PSNR, alleviating the need for early stopping. SGD variants benefit from exponential window averaging (dashed red and yellow lines), which also eventually overfits. Taking the posterior mean after burn-in with SGLD (dashed blue line) consistently achieves better performance. The posterior mean at 20K iterations (dashed blue line with markers) achieves the best performance among the various inference methods.
Figure 7 shows a qualitative comparison of SGD with early stopping to the posterior mean of SGLD, which contains fewer artifacts. More examples are available in the Appendix. Table 1 shows quantitative comparisons between SGLD and the baselines. We run each method 10 times and report the means and standard deviations. SGD consistently benefits from perturbing the input signal with noise-based regularization and from moving averaging. However, as noted, these methods still have to rely on early stopping, which is hard to set in practice. By contrast, SGLD outperforms the baseline methods across all images. Our reproduced numbers (SGD + Input + Early + Avg) are similar to the single-run results reported in prior work (30.44 PSNR compared to ours of 30.33±0.03 PSNR). SGLD improves the average PSNR to 30.81. As a reference, BM3D obtains an average PSNR of 31.68.
[Figure 7: input, SGD (28.38), SGLD (30.82)]
For image inpainting we experiment on the same task as prior work, where 50% of the pixels are randomly dropped. We evaluate the various inference schemes on the standard image inpainting dataset consisting of 11 grayscale images.
Table 2 presents a comparison between SGLD and the baseline methods. As in the image denoising task, the performance of SGD is improved by perturbing the input signal and additionally by averaging the intermediate samples during optimization. SGLD inference provides additional improvements; it outperforms the baselines and improves over the previously reported result from 33.48 to 34.51 PSNR. Figure 8 shows qualitative comparisons between SGLD and SGD. The posterior mean of SGLD has fewer artifacts than the best result generated by the SGD variants.
Besides gains in performance, SGLD provides estimates of uncertainty, visualized in Figure 1(d). Observe that uncertainty is low in missing regions that are surrounded by areas of relatively uniform appearance, such as the window and floor, and higher in non-uniform areas, such as those near the boundaries of different objects in the image.
|SGD + Early||26.74||28.42||29.17||23.50||29.76||26.61||28.68||30.07||29.78||28.08|
|SGD + Early + Avg||28.78||29.20||30.26||23.82||31.17||27.14||29.88||31.00||30.64||29.10|
|SGD + Input + Early||28.18||29.21||30.17||22.65||30.57||26.22||30.29||31.31||30.66||28.81|
|SGD + Input + Early + Avg||30.61||30.46||31.81||23.69||32.66||27.32||31.70||32.86||31.87||30.33|
|SGD + Early||28.48||31.54||35.34||35.00||30.40||27.05||30.55||32.24||31.37||31.32||30.21||31.23|
|SGD + Early + Avg||28.71||31.64||35.45||35.15||30.48||27.12||30.63||32.39||31.44||31.50||30.25||31.34|
|SGD + Input + Early||32.48||32.71||36.16||36.91||33.22||29.66||32.40||32.79||33.27||32.59||33.15||33.21|
|SGD + Input + Early + Avg||33.18||33.61||37.00||37.39||33.53||29.96||33.30||33.17||33.58||32.95||33.80||33.77|
|Ulyanov et al. ||32.22||33.06||39.16||36.16||33.05||29.80||32.52||32.84||32.77||32.2||34.54||33.48|
|Papyan et al. ||28.44||31.44||34.58||35.04||31.11||27.90||31.18||31.34||32.35||31.92||28.05||31.19|
We compare the deep image prior (DIP) and its Gaussian process (GP) counterpart, both as a prior and for posterior inference, as a function of the number of filters in the network. For efficiency we used a U-Net architecture with two downsampling and upsampling layers for the DIP.
[Figure: (a) DIP prior samples, (b) GP prior samples]
[Figure: (a) input, (b) SGD (19.23 dB), (c) SGD + Input (19.59 dB), (d) SGLD mean (21.86 dB)]
[Figure 6: (a), (b) GT, (c) corrupted, (d) GP RBF (25.78), (e) GP DIP (26.34), (f) DIP (26.43)]
Comparison of the radial basis function (RBF) kernel, with the length scale learned on the observed pixels in (c), and the stationary DIP kernel. Bottom: (a) PSNR of the GP posterior with the DIP kernel and of the DIP as a function of the number of channels; the DIP approaches the GP performance as the number of channels increases from 16 to 512. (d-f) Inpainting results (with PSNR values) from the GP with the RBF kernel (GP RBF) and the DIP kernel (GP DIP), as well as from the deep image prior. The DIP kernel is more effective than the RBF.
The above figure shows two samples each drawn from the DIP (with 256 channels per layer) and from a GP with the equivalent kernel. The samples are nearly identical, suggesting that the characterization of the DIP as a stationary GP also holds for 2D signals. Next, we compare the DIP and GP on an inpainting task, shown in Figure 6. The image size here is 64×64. Figure 6 top (a) shows the RBF and DIP kernels as a function of the offset. The DIP kernels are heavy-tailed in comparison to the Gaussian, with support at larger length scales. Figure 6 bottom (a) shows the performance (PSNR) of the DIP as a function of the number of channels, from 16 to 512 in each layer of the U-Net, as well as that of a GP with the limiting DIP kernel. The PSNR of the DIP approaches that of the GP as the number of channels increases, suggesting that for networks of this size 256 filters are enough for the asymptotic GP behavior. Figure 6 (d-e) shows that a GP with the DIP kernel is more effective than one with the RBF kernel, suggesting that the long-tailed DIP kernel is better suited for modeling natural images.
While DIPs are asymptotically GPs, SGD optimization may be preferable because GP inference is expensive for high-resolution images. For exact inference, the memory usage is quadratic and the running time cubic in the number of pixels (e.g., a 500×500 image requires 233 GB of memory for the covariance matrix). The DIP's memory footprint, on the other hand, scales linearly with the number of pixels, and inference with SGD is practical and efficient. This emphasizes the importance of SGLD, which addresses the drawbacks of vanilla SGD and makes the DIP more robust and effective. Finally, while we showed that the prior distribution induced by the DIP is asymptotically a GP and that the posterior estimated by SGD or SGLD matches the GP posterior for small networks, it remains an open question whether the posterior matches the GP posterior for deeper networks.
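The quoted memory figure follows directly from storing a dense n×n covariance matrix in single precision:

```python
# Exact GP inference at image resolution: the covariance matrix alone for a
# 500x500 image needs hundreds of gigabytes in float32.
n = 500 * 500                              # one GP index point per pixel
bytes_per_entry = 4                        # float32
gigabytes = n * n * bytes_per_entry / 1024**3
print(round(gigabytes))                    # 233
```

Cubic-time factorization of such a matrix is even further out of reach, which is why the linear-memory SGD/SGLD route is the practical one.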
We presented a novel Bayesian view of the deep image prior, which parameterizes a natural image as the output of a convolutional network with random parameters and a random input. First, we showed that the output of a random convolutional network converges to a stationary zero-mean GP as the number of channels in each layer goes to infinity, and we showed how to calculate the realized covariance. This characterizes the deep image prior as approximately a stationary GP. Our work differs from prior work relating GPs and neural networks by analyzing the spatial covariance of network activations on a single input image. We then used SGLD to conduct fully Bayesian posterior inference in the deep image prior, which improved performance and removed the need for early stopping. Future work can further investigate the types of kernels implied by convolutional networks to better understand the deep image prior and the inductive bias of deep convolutional networks in learning applications.
This research was supported in part by NSF grants #1749833, #1749854, and #1661259, and the MassTech Collaborative for funding the UMass GPU cluster.
We provide the proof of Theorem 1 and show additional visualizations of the denoising and inpainting results.
First, observe that each first-layer pre-activation is a sum over the input channels and the filter taps, where C is the number of channels in the input and k is the filter width. By stationarity, the terms are identically distributed with a common expected value, even though they are not necessarily independent. Since the channels are drawn independently, the parenthesized expressions in the sum over channels are i.i.d. with a common mean. According to Equation 2 in the main text, we have
Consider the sequence
By the strong law of large numbers, the empirical averages appearing in this sequence each converge almost surely to their expected values. Since the input is stationary, these limits are shift-invariant and depend only on the offsets and the filter width k. Thus the sequence converges almost surely to its limit.
Consider the continuous function defined by the sigmoid kernel formula. Applying the continuous mapping theorem, the covariance converges to the stationary limit, which completes the proof.
[Appendix figure: (a) GT, (b) noisy GT, (c) SGD, (d) +Avg, (e) +Input, (f) +Input+Avg, (g) SGLD]
[Appendix figure: (a) corrupted GT, (b) SGD, (c) +Avg, (d) +Input, (e) +Input+Avg, (f) SGLD, (g) uncertainty]
Generalization in Neural Networks and Machine Learning, pages 215–237. Springer Verlag, January 1998.
Kernel Methods for Deep Learning. In Advances in Neural Information Processing Systems, pages 342–350, 2009.
Transforming Neural-net Output Levels to Probability Distributions. In Advances in Neural Information Processing Systems, pages 853–859, 1991.
Conference on Computational Learning Theory, pages 5–13. ACM, 1993.
Evaluation of Gaussian Processes and Other Methods for Non-linear Regression. University of Toronto, 1999.