1 Introduction
Generative modelling is the process of modelling a distribution in a high-dimensional space in a way that allows sampling from it. Generative Adversarial Networks (GANs) Goodfellow2014 have been the state of the art in unsupervised image generation for the past few years, being able to produce realistic high-resolution images Brock2018 without explicitly modelling the distribution of the samples. GANs learn a mapping from vectors drawn from a low-dimensional latent distribution (usually normal or uniform) to high-dimensional ground-truth images issued from an unknown and complex distribution. By using a discrimination function that distinguishes real images from generated ones, GANs set up a min-max game able to approximate the Jensen-Shannon divergence between the distributions of the real samples and the generated ones.
Among extensions of GANs, the Conditional GAN (CGAN) mirza2014 attempts to condition the generation procedure on some supplementary information (such as the label of the image) by providing it to both the generation and discrimination functions. CGAN enables a variety of conditioned generation tasks, such as class-conditioned image generation mirza2014, image-to-image translation Isola2017; wang2018high, or image inpainting pathak2016context. On the other hand, AmbientGAN bora2018ambientgan aims at training an unconditional generative model using only noisy or incomplete samples. A relevant application domain is high-resolution imaging (CT scan, fMRI) where image sensing may be costly. AmbientGAN attempts to produce unaltered images whose distribution matches the true one, without access to the original images. To this end, AmbientGAN considers lossy measurements such as blurred images, images with a removed patch, or images with pixels removed at random (up to 95%). Following this setup, Pajot et al. pajot2018unsupervised extend the learning strategy to enable the reconstruction, instead of the generation, of realistic images from similarly altered samples. In the spirit of AmbientGAN, we consider in this paper an extreme setting of image generation in which only a few pixels, less than a percent of the image size, are known and are randomly scattered across the image (see Fig. 0(c)). We refer to these conditioning pixels as a constraint map. To reconstruct the missing information, we design a generative adversarial model able to generate high-quality images coherent with the given pixel values by leveraging a training set of similar, but not paired, images. The model we propose aims to match the distribution of the real images conditioned on a highly scarce constraint map, drawing connections with AmbientGAN while, in the same manner as CGAN, still allowing the generation of diverse samples following the underlying conditional distribution.
To make the generated images honor the prescribed pixel values, we use a reconstruction loss measuring how close the real constrained pixels are to their generated counterparts. We show that minimizing this loss is equivalent to maximizing the log-likelihood of the constraints given the generated image. From this we derive an objective function trading off the adversarial loss of the GAN against the reconstruction loss, which acts as a regularization term. We analyze the influence of the related hyperparameter in terms of the quality of the generated images and the respect of the constraints. Specifically, empirical evaluation on FashionMNIST Xiao2017 shows that the regularization parameter allows for controlling the trade-off between sample quality and constraint fulfillment.
To further demonstrate the effectiveness of our approach, we conduct experiments on the CIFAR10 Krizhevsky2009CIFAR10, CelebA liu2015celeba and texture jetchev2016texture datasets using various deep architectures, including fully convolutional networks. We also evaluate our method on a classical geological problem which consists of generating 2D geological images whose spatial patterns are consistent with those found in a conceptual image of a binary fluvial aquifer Strebelle2002; laloy2018training. Empirical findings reveal that the used architectures may lack stochasticity in the generated samples, that is, the GAN input is often mapped to the same output image irrespective of the variations in the latent code yang2018diversitysensitive. We address this issue by resorting to the recent PacGAN lin2018pacgan strategy. Overall, our approach performs well both in terms of visual quality and respect of the pixel constraints, while keeping diversity among the generated samples. Evaluations on CIFAR10 and CelebA show that the proposed generative model consistently outperforms the CGAN approach on the respect of the constraints and either comes close to or outperforms it on the visual quality of the generated samples.
The remainder of the paper is organized as follows. In Section 2, we review the relevant related work, focusing first on generative adversarial networks and their conditioned version, and then on methods dealing with image generation and reconstruction from highly altered training samples. Section 3 details the overall generative model we propose. In Section 4, we present the experimental protocol and evaluation measures, while Section 5 gathers quantitative and qualitative results demonstrating the effectiveness of our approach. The last section concludes the paper.
The contributions of the paper are summarized as follows:

We propose a method for learning to generate images with a few pixelwise constraints.

We provide a theoretical justification of the modelling framework.

We highlight a controllable trade-off between image quality and the fulfillment of the constraints.

We showcase a lack of diversity in generating high-dimensional images, which we solve by using the PacGAN lin2018pacgan technique. Several experiments allow us to conclude that the proposed formulation can effectively generate diverse images of high visual quality while satisfying the pixelwise constraints.
2 Image reconstruction with GANs in related work
The objective pursued in this paper is image generation using a deep generative network conditioned on randomly scattered and scarce (less than a percent of the image size) pixel values. This kind of pixel constraint occurs in application domains where an image or signal needs to be generated from very sparse measurements.
Before delving into the details, let us introduce the notations and the previous work related to the problem. We denote by $X$ a random variable and $x$ its realization. Let $p_X$ be the distribution of $X$ over the space $\mathcal{X}$ and $p_X(x)$ be its evaluation at $x$. Similarly, $p_{X|Y}$ represents the distribution of $X$ conditioned on the random variable $Y$. Given a set of images (see Figure 0(a)) drawn from an unknown distribution $p_X$ and a sparse matrix $y$ (Figure 0(c)) holding the given constrained pixels, the problem consists in finding a generative model $G$ with inputs $z$ (a random vector sampled from a known distribution $p_Z$ over the space $\mathcal{Z}$) and the constrained pixel values $y$, able to generate an image satisfying the constraints while likely following the distribution $p_X$ (see Figure 3).
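To make the setup concrete, a constraint map $y$ and its mask can be sampled as below; this is a minimal numpy sketch, with the 0.5% pixel fraction chosen to match the "less than a percent" regime described above (the function name is ours, for illustration):

```python
import numpy as np

def sample_constraint_map(image, fraction=0.005, rng=None):
    """Build a sparse constraint map y and its binary mask by keeping
    a small random fraction of the pixels of a ground-truth image."""
    rng = np.random.default_rng(rng)
    # Binary mask: 1 at constrained pixel locations, 0 elsewhere
    mask = (rng.random(image.shape) < fraction).astype(image.dtype)
    # Constraint map y: known pixel values, zeros elsewhere
    y = mask * image
    return y, mask

x = np.random.rand(64, 64)                  # stand-in ground-truth image
y, M = sample_constraint_map(x, fraction=0.005, rng=0)
```

Note that, as detailed in the experimental protocol, the images used to build such maps are removed from the training set so that the discriminator never sees an image paired with its own constraints.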
One of the state-of-the-art modelling frameworks for image generation is the Generative Adversarial Network. The seminal version of GAN Goodfellow2014 learns the generative model in an unsupervised way. It relies on a game between a generation function $G$ and a discrimination network $D$, in which $G$ learns to produce realistic samples while $D$ learns to distinguish real examples from generated ones (Figure 1(a)). Training GANs amounts to finding a Nash equilibrium of the following min-max problem,
$$\min_G \max_D \; \mathbb{E}_{x \sim p_X}[\log D(x)] + \mathbb{E}_{z \sim p_Z}[\log(1 - D(G(z)))] \qquad (1)$$
where $p_Z$ is a known distribution, usually normal or uniform, from which the latent input $z$ of $G$ is drawn, and $p_X$ is the distribution of the real images.
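For intuition, the empirical value of Eq. (1) can be estimated from discriminator outputs on mini-batches; a minimal numpy sketch (the batches here are placeholders):

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Empirical GAN value function of Eq. (1):
    E[log D(x)] + E[log(1 - D(G(z)))], with D outputs in (0, 1)."""
    eps = 1e-12  # numerical safety for the logarithms
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# D maximizes this value, G minimizes it. A maximally confused
# discriminator (D = 0.5 everywhere) yields a value of -log 4.
v = gan_value(np.full(8, 0.5), np.full(8, 0.5))
```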
Among several applications, GANs were adapted to the image inpainting task (Figure 0(b)). For instance, Yeh et al. Yeh2017 propose an inpainting approach which considers a pretrained generator and explores its latent space through an optimization procedure to find a latent vector $z$ which induces an image whose missing regions are filled in by conditioning on the surrounding available information. However, the method requires solving a full optimization problem at inference stage, which is computationally expensive.
Other approaches (Figure 2) rely on the conditional variant of GAN (CGAN) mirza2014, in which additional information $y$ is provided to the generator and the discriminator (see Figure 1(b)). This leads to the following optimization problem adapted to CGAN,
$$\min_G \max_D \; \mathbb{E}_{x \sim p_X}[\log D(x \mid y)] + \mathbb{E}_{z \sim p_Z}[\log(1 - D(G(z \mid y) \mid y))] \qquad (2)$$
Although CGAN was initially designed for class-conditioned image generation by setting $y$ as the class label of the image, several types of conditioning information can apply, such as a full image for image-to-image translation Isola2017 or a partial image as in inpainting Yu2018. CGAN-based inpainting methods rely on generating a patch that fills up a structured missing part of the image, and achieve impressive results. However, they are not well suited to reconstructing a very sparse and unstructured signal demir2018. Additionally, these approaches learn to reconstruct a single sample instead of a full distribution, implying that there is no sampling process for a given constraint map or highly degraded image.
AmbientGAN bora2018ambientgan (Figure 1(c)) trains a generative model capable of yielding full images from only lossy measurements. One of the image degradations considered in this approach is the random removal of pixels, leading to a sparse pixel map. It is simulated with a differentiable measurement function $f_\theta$ whose parameter $\theta$ indicates the pixels to be removed. The underlying optimization problem solved by AmbientGAN is therefore stated as
$$\min_G \max_D \; \mathbb{E}_{x \sim p_X}[\log D(f_\theta(x))] + \mathbb{E}_{z \sim p_Z}[\log(1 - D(f_\theta(G(z))))] \qquad (3)$$
Pajot et al. pajot2018unsupervised combined the AmbientGAN approach with an additional reconstruction task that consists in reconstructing the estimated image $\hat{x} = G(y)$ from the twice-altered image $f_\theta(\hat{x})$,
$$\min_G \max_D \; \mathbb{E}[\log D(y)] + \mathbb{E}\big[\log\big(1 - D(f_\theta(\hat{x}))\big)\big] + \mathbb{E}\,\big\|G(f_\theta(\hat{x})) - \hat{x}\big\|_2^2, \quad \hat{x} = G(y) \qquad (4)$$
The norm term ensures that the generator learns to revert $f_\theta$, i.e. to revert the alteration process on a given sample. This allows the reconstruction of a realistic image only from a given constraint map $y$. However, the reconstruction process is deterministic and does not provide a sampling mechanism.
Compressed Sensing with Meta-Learning wu2019deep is an approach that combines the exploration of the latent space to recover images from lossy measurements with the enforcement of the Restricted Isometry Property (RIP) candes2005decoding, which states that for two samples $x_1, x_2$,
$$(1 - \delta)\,\|x_1 - x_2\|_2^2 \;\le\; \|f(x_1) - f(x_2)\|_2^2 \;\le\; (1 + \delta)\,\|x_1 - x_2\|_2^2$$
where $\delta$ is a small constant. It replaces the adversarial training of the generative model (Eq. 1) by searching, for a given degraded image $y$, a vector $\hat{z}$ such that $G(\hat{z})$ minimizes the distance between $f(G(\hat{z}))$ and $y$ while still enforcing the RIP. The overall problem induced by this approach can be formulated as:
$$\min_{G} \; \mathbb{E}\Big[\, \|f(G(\hat{z})) - y\|_2^2 + \sum_{x_1, x_2 \in \mathcal{S}} \big( \|f(x_1) - f(x_2)\|_2 - \|x_1 - x_2\|_2 \big)^2 \Big] \qquad (5)$$
where $\mathcal{S}$ contains the three samples $x$, $G(z_1)$ and $G(z_2)$. In practice, $\hat{z}$ is computed by gradient descent on $z$, minimizing $\|f(G(z)) - y\|_2^2$ and starting from a random $z_0$. As a benefit, this approach may generate an image from noisy information, but at a high computational burden since it requires solving an optimization problem (computing $\hat{z}$) at inference stage for each generated image.
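The inference-time search for $\hat{z}$ can be illustrated on a toy linear problem; the matrices below are stand-ins for the trained generator and the measurement operator (assumptions for illustration, not the networks of wu2019deep), chosen so the gradient of the objective has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((16, 4))       # toy linear "generator": x = G @ z
f = np.eye(16)[:3]                     # toy measurement: keep 3 of 16 pixels
y = f @ (G @ rng.standard_normal(4))   # observed lossy measurement

# Gradient descent on z minimizing ||f(G(z)) - y||^2 from a random start,
# as done at inference stage (here with an analytic gradient).
A = f @ G
z = rng.standard_normal(4)
lr = 0.5 / np.linalg.norm(A, 2) ** 2   # safe step size for least squares
for _ in range(20000):
    z -= lr * 2.0 * A.T @ (A @ z - y)

residual = float(np.linalg.norm(A @ z - y))
```

This per-sample optimization loop is exactly what makes the approach expensive at inference, as noted above.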
3 Proposed approach
Let us introduce the formal formulation of the addressed problem. Assume $y$ is the given set of constrained pixel values. To ease the presentation, we consider $y$ as an image of the same size as $x$ with only a few available pixels (less than 1% of the image size). We also encode the spatial locations of these pixels using a corresponding binary mask $M$. We intend to learn a GAN whose generation network takes as input the constraint map $y$ and the sampled latent code $z$, and outputs a realistic image that fulfills the prescribed pixel values. Within this setup, the generative model can sample from the unknown distribution of the training images while satisfying pixelwise constraints unseen at training stage. Formally, our proposed GAN can be formulated as
$$\min_G \max_D \; \mathbb{E}_{x \sim p_X}[\log D(x)] + \mathbb{E}_{z \sim p_Z,\, y}[\log(1 - D(G(z, y)))] \quad \text{s.t.} \quad G(z, y) \odot M = y \qquad (6)$$
where $\odot$ stands for the Hadamard (or pointwise) product and $M$ for the mask, a sparse matrix with entries equal to one at the constrained pixel locations.
As the equality constraint in Problem (6) is difficult to enforce during training, we rather investigate a relaxed version of the problem. Following Pajot et al. pajot2018unsupervised, we assume that the constraint map is obtained through a noisy measurement process
$$y = M \odot (x + \eta) \qquad (7)$$
Here $M \odot$ is the masking operator yielding $y$. The constrained pixels are randomly and independently selected, and $\eta$ represents an additive i.i.d. noise corrupting the pixels. Therefore we can formulate the Maximum A Posteriori (MAP) estimation problem which, given the constraint map $y$, consists in finding the most probable image $\hat{x}$ following the posterior distribution $p_{X|Y}$,
$$\hat{x} = \arg\max_x \; p_{X|Y}(x \mid y) \qquad (8)$$
Using Bayes' rule, this is equivalent to
$$\hat{x} = \arg\max_x \; p_{Y|X}(y \mid x)\, p_X(x) \qquad (9)$$
Here $p_{Y|X}(y \mid x)$ is the likelihood that the constrained pixels are issued from image $x$, while $p_X(x)$ represents the prior probability at $x$. Assuming that the generation network $G$ may sample the most probable image complying with the given pixel values $y$, we get the following problem
$$\max_G \; \log p_{Y|X}(y \mid G(z, y)) + \log p_X(G(z, y)) \qquad (10)$$
The first term in Problem (10) measures the likelihood of the constraints given a generated image. Let us rewrite Equation (7) as $v(y) = v(x) + v(\eta)$, where $v(\cdot)$ is the vectorisation operator that consists in stacking the constrained pixels. Therefore, assuming $\eta$ is an i.i.d. Gaussian noise with distribution $\mathcal{N}(0, \sigma^2 I)$, we obtain the expression of the conditional likelihood
$$p_{Y|X}(y \mid x) \;\propto\; \exp\Big(-\tfrac{1}{2\sigma^2}\,\|v(y) - v(x)\|_2^2\Big) \qquad (11)$$
which evaluates the quadratic distance between the conditioning pixels and their predictions by $G$. In other words, using the matrix notation of (7), the log-likelihood of the constraints given a generated image equivalently writes, up to an additive constant,
$$\log p_{Y|X}(y \mid G(z, y)) \;=\; -\tfrac{1}{2\sigma^2}\, \big\| M \odot (y - G(z, y)) \big\|_F^2 \qquad (12)$$
where $\|\cdot\|_F^2$ represents the squared Frobenius norm of a matrix, that is the sum of its squared entries.
The second term in Problem (10) is the likelihood of the generated image under the true but unknown data distribution $p_X$. Maximizing this term can be equivalently achieved by minimizing the distance between $p_X$ and the marginal distribution of the generated samples. This amounts to minimizing, with respect to $G$, the GAN-like objective function Goodfellow2014. Putting these elements together, we can propose a relaxation of the hard-constrained optimization problem (6) (Figure 1(d)) as follows
$$\min_G \max_D \; \mathbb{E}_{x \sim p_X}[\log D(x)] + \mathbb{E}_{z \sim p_Z,\, y}[\log(1 - D(G(z, y)))] + \lambda\, \mathbb{E}_{z \sim p_Z,\, y}\, \big\| M \odot (y - G(z, y)) \big\|_F^2 \qquad (13)$$
Remarks:

The assumption of Gaussian measurement noise leads us to explicitly turn the pixel-value constraints into the minimization of the squared $\ell_2$ norm between the real enforced pixel values and their generated counterparts (see Figure 1(d)).

This additional term acts as a regularization over the pixels prescribed by the mask $M$. The trade-off between the distribution-matching loss and the constraint enforcement is controlled by the regularization parameter $\lambda$.

It is worth noting that the noise can follow any other distribution, according to the prior information one may associate with the measurement process. We only require this distribution to admit a closed-form maximum likelihood estimate for optimization purposes. Typical choices are distributions from the exponential family brown1986fundamentals.
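Putting the remarks together, the generator-side objective combines the adversarial term with the $\lambda$-weighted masked reconstruction penalty; a minimal numpy sketch (the tensors, shapes and $\lambda$ value are illustrative):

```python
import numpy as np

def generator_loss(d_fake, generated, y, mask, lam=1.0):
    """Generator-side relaxed objective: adversarial term plus
    lam * squared Frobenius norm of the masked reconstruction error."""
    eps = 1e-12
    adv = np.mean(np.log(1.0 - d_fake + eps))      # adversarial term
    rec = np.sum((mask * (y - generated)) ** 2)    # masked reconstruction
    return adv + lam * rec

rng = np.random.default_rng(0)
mask = (rng.random((8, 8)) < 0.05).astype(float)   # sparse binary mask
y = mask * rng.random((8, 8))                      # constraint map
x_fake = rng.random((8, 8))                        # generated image G(z, y)
loss = generator_loss(np.full(4, 0.5), x_fake, y, mask, lam=10.0)
```

With lam = 0 the loss reduces to the usual conditional GAN generator loss, and the constrained pixels are then only enforced implicitly.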
To solve the relaxed problem above, we use the stochastic gradient descent method. The overall training procedure is detailed in Algorithm 1 and ends when a maximal number of training epochs is reached.
When implementing this training procedure, we experienced, at inference stage, a lack of diversity in the generated samples (see Figure 5) with deeper architectures, most notably the encoder-decoder architectures. This issue manifests itself through the fact that the learned generation network, given a constraint map $y$, outputs an almost deterministic image regardless of the variations in the latent input $z$. The issue was also pointed out by Yang et al. yang2018diversitysensitive as characteristic of CGANs.
To avoid the problem, we exploit the recent PacGAN lin2018pacgan technique: it consists in passing a set of samples to the discrimination function instead of a single one. PacGAN is intended to tackle the mode-collapse problem in GAN training. The underlying principle is that if a set of images is sampled from the same training set, they are very likely to be completely different, whereas if the generator experiences mode collapse, generated images are likely to be similar. In practice, we only give two samples to the discriminator, which is sufficient to overcome the loss of diversity, as suggested in lin2018pacgan. The resulting training procedure is summarized in Algorithm 2.
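The packing step itself is just a channel-wise concatenation before the discriminator; a minimal sketch for the two-sample case used here, assuming an NCHW tensor layout:

```python
import numpy as np

def pack(batch_a, batch_b):
    """PacGAN packing with m = 2: stack two samples along the channel
    axis so the discriminator classifies pairs instead of single images."""
    return np.concatenate([batch_a, batch_b], axis=1)

# Two batches of four single-channel 32x32 images -> 2-channel pairs.
# Real pairs mix distinct training images; fake pairs mix two generator
# outputs, so a mode-collapsed generator produces telltale similar pairs.
real_pairs = pack(np.random.rand(4, 1, 32, 32), np.random.rand(4, 1, 32, 32))
```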
4 Experiments
We have conducted a series of empirical evaluations to assess the performance of the proposed GAN. The datasets used, the evaluation protocol and the tested deep architectures are detailed in this section, while Section 5 is devoted to the presentation of the results.
4.1 Datasets
We tested our approach on the several datasets listed hereafter. Detailed information on these datasets is provided in Appendix A.

FashionMNIST Xiao2017 consists of 60,000 small grayscale images of fashion items, split into 10 classes, and is a harder version of the classical MNIST dataset lecun1998. The very small size of the images makes them particularly appropriate for large-scale experiments, such as hyperparameter tuning.

CIFAR10 Krizhevsky2009CIFAR10 consists of 60,000 colour images of 10 different and varied classes. It is deemed more challenging than MNIST and FashionMNIST.

CelebA liu2015celeba is a large dataset of celebrity portraits labeled by identity and a variety of binary features such as eyeglasses, smiling, etc. We use 100,000 images cropped to a size of 128x128 pixels, making this dataset appropriate for a high-dimension evaluation of our approach in comparison with related work.

Texture is a custom dataset composed of patches sampled from a large brick-wall texture, as recommended in jetchev2016texture. It is worth noting that this procedure can be reproduced on any texture image of sufficient size. Texture is a testbed for our approach on fully-convolutional networks for the constrained texture generation task.

Subsurface is a classical dataset in geological simulation Strebelle2002 which consists, similarly to the Texture dataset, of 20,000 patches sampled from a model of a binary subsurface domain. These models are assumed to have the same properties as a texture, mainly the property of global ergodicity of the data.
To avoid learning an explicit pairing between the real images seen by the discrimination function and the constraint maps provided to the generative network, we split each dataset into training, validation and test sets, to which we add a set composed of constraint maps that remains unrelated to the three others. To do so, a fifth of each set is used to generate the constrained pixel maps by randomly selecting a small percentage of the pixels from a uniform distribution, composing a set of constraints for each of the training, validation and test sets. The images from which these maps are sampled are then removed from the training, validation and test sets. For each experiment carried out, the best model is selected based on performance measures (see Section 4.3) computed on the validation set, following standard machine learning methodology oneto2019. Finally, the reported results are computed on the test set.

4.2 Network architectures
We use a variety of GAN architectures in order to adapt to the different scales and image sizes of our datasets. The detailed configurations of these architectures are given in Appendix B.
For the experiments on FashionMNIST Xiao2017, we use a lightweight network for both the discriminator and the generator, similarly to DCGAN Radford2015, due to the small resolution of FashionMNIST images.
To experiment on the Texture dataset, we consider a set of fully-convolutional generator architectures based on either dilated convolutions yu2015multi, which behave well on texture datasets ruffino2018dilated, or encoder-decoder architectures that are commonly used in domain-transfer applications such as CycleGAN Zhu2017unpaired. We selected these architectures because they have very large receptive fields without using pooling, which allows the generator to use a large context for each pixel.
We keep the same discriminator across all the experiments with these architectures, the PatchGAN discriminator Isola2017, which is a fivelayer fullyconvolutional network with a sigmoid activation.
The UpDil architecture consists in a set of transposed convolutions (the upscaling part) followed by a set of dilated convolutional layers yu2015multi, while UpEncDec has an upscaling part followed by an encoder-decoder section with skip-connections, where the constraints are downscaled, concatenated to the noise, and re-upscaled to the output size.
The UNet ronneberger2015u architecture is an encoder-decoder where skip-connections are added between the encoder and the decoder. The Res architecture is an encoder-decoder where residual blocks he2016deep are added after the noise is concatenated to the features. UNetRes combines the UNet and Res architectures by including both residual blocks and skip-connections.
Finally, we evaluate our approach on the Subsurface dataset using the architecture that yields the best performance on the Texture dataset.
4.3 Evaluation
We evaluate our approach based on both the satisfaction of the pixel constraints and the visual quality of the sampled images. From the assumption of Gaussian measurement noise (as discussed in Section 3), we assess the constraint fulfillment using the following mean squared error (MSE)
$$\mathrm{MSE} = \frac{1}{\|M\|_0}\, \big\| M \odot (y - G(z, y)) \big\|_F^2 \qquad (14)$$
where $\|M\|_0$ is the number of constrained pixels. This metric should be understood as the mean squared error of reconstructing the constrained pixel values.
Visual quality evaluation of an image is not a trivial task Theis2015. However, the Fréchet Inception Distance (FID) Heusel2017 and the Inception Score Salimans2016 have been used to evaluate the performance of generative models. We employ the FID since the Inception Score has been shown to be less reliable Barratt. The FID consists in computing a distance between the distributions of relevant features extracted from generated and real samples. To extract these features, a pretrained Inception v3 classifier Szegedy2016 is used to compute the embeddings of the images at a chosen layer. Assuming these embeddings follow a normal distribution, the quality of the generated images is assessed in terms of the Wasserstein-2 distance between the distribution of the real samples and that of the generated ones. Hence the FID writes
$$\mathrm{FID} = \|\mu_r - \mu_g\|_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big) \qquad (15)$$
where $\mathrm{Tr}$ is the trace operator, and $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the pairs of mean vector and covariance matrix of the embeddings obtained on the real and the generated data, respectively. Being a distance between distributions, a small FID corresponds to a good matching of the distributions.
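Given the two pairs of moments, Eq. (15) is straightforward to compute; a minimal sketch using scipy's matrix square root (the moments below are placeholders):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """Frechet Inception Distance of Eq. (15) between two Gaussians."""
    covmean = sqrtm(sigma_r @ sigma_g)
    covmean = covmean.real  # drop tiny imaginary parts from sqrtm
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# Identical embedding distributions give FID = 0 (up to numerical error)
mu, sigma = np.zeros(4), np.eye(4)
d = fid(mu, sigma, mu, sigma)
```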
Since the FID requires a pretrained classifier adapted to the dataset under study, we trained simple convolutional neural networks as classifiers for the FashionMNIST and CIFAR10 datasets. For the Texture dataset, which is not labeled, we resort to a CNN classifier trained on the Describable Textures Dataset (DTD) cimpoi14describing, a related application domain. However, since we do not have labels for the Subsurface dataset either, we could not train a classifier for it and thus cannot compute the FID. To evaluate the quality of the generated samples, we instead use metrics based on a distance between feature descriptors extracted from real and generated samples. Similarly to ruffino2018dilated, we rely on a distance between the Histograms of Oriented Gradients (HOG) or Local Binary Patterns (LBP) features computed on generated and real images. Histograms of Oriented Gradients (HOG) Dalal and Local Binary Patterns (LBP) Pietikainen2011 are computed by splitting an image into cells of a given radius and computing, on each cell, the histogram of the oriented gradients for HOG and the histogram of the intensity differences between each pixel and the center of the cell for LBP. Additionally, we consider a domain-specific metric, the connectivity function lemmens2017, which is presented in Appendix C.
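To fix ideas, the descriptor-distance evaluation can be sketched with a deliberately simplified, numpy-only orientation histogram (one global cell, unlike the cell-based HOG features actually used in the evaluation):

```python
import numpy as np

def orientation_histogram(image, bins=8):
    """Simplified global histogram of oriented gradients: orientations
    binned over [0, pi) and weighted by gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(angle, bins=bins, range=(0.0, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

def descriptor_distance(real, fake, bins=8):
    """L2 distance between the mean descriptors of two image sets."""
    h_real = np.mean([orientation_histogram(im, bins) for im in real], axis=0)
    h_fake = np.mean([orientation_histogram(im, bins) for im in fake], axis=0)
    return float(np.linalg.norm(h_real - h_fake))

imgs = [np.random.rand(16, 16) for _ in range(4)]
d_same = descriptor_distance(imgs, imgs)   # identical sets -> distance 0
```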
Finally, we check by visual inspection whether the trained model is able to generate diverse samples, meaning that for a given constraint map $y$ and a set of latent codes $z$, the generated samples are visually different.
5 Experimental results
5.1 Qualityfidelity tradeoff
We first study the influence of the regularization hyperparameter $\lambda$ on both the quality of the generated samples and the respect of the constraints. We experiment on the FashionMNIST Xiao2017 dataset, since such a study requires intensive simulations, made possible by the low resolution of FashionMNIST images and the architectures used (see Section 4.2).
To overcome the classical instability of GANs, the networks are trained 10 times and the median values of the best scores on the test set at the best epoch are recorded. The best epoch is the one that minimizes
$$\frac{\mathrm{FID} - \mathrm{FID}_{\min}}{\mathrm{FID}_{\max} - \mathrm{FID}_{\min}} + \frac{\mathrm{MSE} - \mathrm{MSE}_{\min}}{\mathrm{MSE}_{\max} - \mathrm{MSE}_{\min}}$$
on the validation set, where $\mathrm{FID}_{\min}$, $\mathrm{FID}_{\max}$, $\mathrm{MSE}_{\min}$ and $\mathrm{MSE}_{\max}$ are respectively the lowest and highest FIDs and MSEs obtained on the validation set.
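This model-selection rule can be sketched as follows, assuming per-epoch FID and MSE curves recorded on the validation set:

```python
import numpy as np

def best_epoch(fids, mses):
    """Pick the epoch minimizing the sum of FID and MSE after rescaling
    each validation curve to [0, 1] by its own min and max."""
    fids, mses = np.asarray(fids, float), np.asarray(mses, float)
    f = (fids - fids.min()) / (fids.max() - fids.min())
    m = (mses - mses.min()) / (mses.max() - mses.min())
    return int(np.argmin(f + m))

# Epoch 2 has both the lowest FID and the lowest MSE here
e = best_epoch([3.0, 2.0, 1.0, 1.5], [0.9, 0.4, 0.1, 0.3])
```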
Empirical evidence (highlighted in Figure 4) shows that with a good choice of $\lambda$, the regularization term helps the generator to enforce the constraints, leading to smaller MSEs than when using the CGAN, without compromising the quality of the generated images. We can also note that using the regularization term even leads to a better image quality compared to GAN and CGAN. The bottom panel in Figure 4 illustrates that the trade-off between image quality and the satisfaction of the constraints can be controlled by appropriately setting the value of $\lambda$. Nevertheless, for small values of $\lambda$, our GAN model fails to learn a meaningful distribution of the training images and only generates uniformly black images. This leads to the plateaus on the MSE and FID plots (top panels in Figure 4).
5.2 Texture generation with fullyconvolutional architectures
Fully-convolutional architectures for GANs are widely used, either for domain-transfer applications Zhu2017unpaired; Isola2017 or for texture generation jetchev2016texture. In order to evaluate the efficiency of our method on relatively high-resolution images, we test the fully-convolutional networks described in Section 4.2 on a texture generation task using the Texture dataset. We investigate the upscaling-dilation network, the encoder-decoder one and the ResNet-like architectures.
Our training algorithm was run for 40 epochs for all reported results. We provide a comparison with the CGAN mirza2014 approach using the selected best architectures. The models are evaluated in terms of the best FID (visual quality of sampled images) and the best MSE (conditioning on fixed pixel values) over the epochs. We also report the FID score of the models at the epoch where the MSE is the lowest and, conversely, the MSE at the epoch where the FID is the lowest. The quantitative results obtained are detailed in Table 1.
For the encoder-decoder models, we can notice that the models using ResNet blocks perform better than a plain UNet generator. A trade-off can also be seen between the FID and the MSE for the Res and UNetRes models, which could mean that skip-connections help the generator to fulfill the constraints, but at the price of a lowered visual quality.
Although the encoder-decoder models perform best, they tend to lose diversity in the generated samples (see Figure 5), whereas the upscaling-based models have high FID and MSE but naturally preserve diversity in the generated samples.
Changing the discriminator for a PacGAN discriminator with 2 samples in the encoder-decoder-based architectures restores diversity while keeping the same performance as previously, or even improving it for UNetRes (see Table 1).
Table 2 compares our proposed approach to CGAN using fully-convolutional networks. It shows that our approach better complies with the pixel constraints while producing realistic images. Indeed, our approach outperforms CGAN (see Table 2) by a large margin on the respect of the conditioning pixels (see the MSE achieved by our UNetPAC or UNetResPAC) and obtains close FID performance on the generated samples. This finding is in accordance with the results obtained in the FashionMNIST experiments.
Model  Best FID  Best MSE  FID at best MSE  MSE at best FID  Diversity
UpDil  0.0949  0.4137  1.0360  0.7057  ✓ 
UpEncDec  0.1509  0.7570  0.2498  0.9809  ✓ 
UNet  0.0442  0.1789  0.0964  0.4559  ✗ 
Res  0.0458  0.0474  0.0590  0.0476  ✗ 
UNetRes  0.0382  0.0307  0.0499  0.0338  ✗ 
ResPAC  0.0350  0.0698  0.0466  0.4896  ✓ 
UNetPAC  0.0672  0.0001  0.3120  0.2171  ✓ 
UNetResPAC  0.0431  0.0277  0.0447  0.0302  ✓ 
Model  Best FID  Best MSE  FID at best MSE  MSE at best FID
CGANResPAC  0.0234  0.1337  0.0340  0.2951 
CGANUNetPAC  0.0518  0.2010  0.0705  0.4828 
CGANUNetResPAC  0.0428  0.1060  0.0586  0.2250 
OursResPAC  0.0350  0.0698  0.0466  0.4896 
OursUNetPAC  0.0672  0.0001  0.3120  0.2171 
OursUNetResPAC  0.0431  0.0277  0.0447  0.0302 
5.3 Extended architectures
We extend the comparison of our approach with CGAN to the CIFAR10 and CelebA datasets (Table 3). We investigated the architectures described in Section 4.2. All reported results are obtained with a fixed value of the regularization parameter $\lambda$. We train the networks for 150 epochs using the same dataset split as stated previously, in order to keep the constraint maps independent from the images. The evaluation procedure also remains unchanged, and we use the PacGAN approach to avoid the loss-of-diversity issue. The experiments on both datasets show that although CGAN provides better results in terms of visual quality, our approach outperforms it on the respect of the pixel constraints.
Dataset  Model  Best FID  Best MSE  FID at best MSE  MSE at best FID
CIFAR10  CGAN  2.68  0.081  2.68  0.081 
Ours  3.120  0.010  3.530  0.011  
CelebA  CGAN  1.34e4  0.0209  1.81e4  0.0450 
Ours  2.09e4  0.0053  5.392e4  0.0249 
5.4 Application to hydrogeology
Finally, we evaluate our approach on the Subsurface dataset. We use the UNetResPAC architecture, since it performed best on the Texture data, as exposed in Section 5.2. As previously, the regularization parameter is kept fixed and the network is trained for 40 epochs using the same experimental protocol. To evaluate the trade-off between the visual quality and the respect of the constraints, instead of the FID we compute distances between Histograms of Oriented Gradients (see Section 4) extracted from real and generated samples. We also evaluate the visual quality of our approach with a distance between Local Binary Patterns. Indeed, the Subsurface application lacks the labelled data needed to learn a deep network classifier from which the FID score could be computed.
The obtained results are summarized in Tables 4 and 5. They are coherent with the previous experiments, since the generated samples are diverse and have a low error on the constrained pixels. The conditioning has a limited impact on the visual quality of the generated samples, which compares well to unconditional approaches ruffino2018dilated. Evaluation of the generated images using the domain connectivity function highlights this fact in Figure 7 in the supplementary materials. Examples of images generated by our approach, pictured in Figure 9 (see Appendix D), also show that we preserve the visual quality and honor the constraints.
Dataset  Model  Best HOG  Best MSE  HOG at best MSE  MSE at best HOG
Subsurface  CGAN  2.92e4  0.2505  3.06e4  1.1550 
Ours  4.31e4  0.0325  5.69e4  0.2853 
Dataset  Model  Best HOG  Best MSE  Best LBP (radius=1)  Best LBP (radius=2)
Subsurface  CGAN  2.92e4  0.2505  2.157  3.494 
Ours  4.31e4  0.0325  10.142  16.754 
6 Conclusion
In this paper, we address the task of learning effective generative adversarial networks when only very few pixel values are known beforehand. To solve this pixelwise conditioned GAN, we model the conditioning information under a probabilistic framework. This leads to the maximization of the likelihood of the constraints given a generated image. Under the assumption of a Gaussian distribution over the given pixels, we formulate an objective function composed of the conditional GAN loss function regularized by a
norm on pixel reconstruction errors. We describe the related optimization algorithm.Empirical evidences illustrate that the proposed framework helps obtaining good image quality while best fulfilling the constraints compared to classical GAN approaches. We show that, if we include the PacGAN technique, this approach is compatible with fullyconvolutional architectures and scales well to large images. We apply this approach to a common geological simulation task and show that it allows the generation of realistic samples which fulfill the prescribed constraints.
In future work, we plan to investigate other prior distributions for the given pixels, such as the Laplacian. We are also interested in applying the developed approach to other applications and signals, such as audio inpainting marafioti2018context.
Acknowledgements
This research was supported by the CNRS PEPS I3A REGGAN project and the ANR-16-CE23-0006 grant "Deep in France". We kindly thank the CRIANN for the provided high-performance computing facilities.
References
Appendix A Details of the datasets
Dataset       | Size (in pixels) | Training set | Validation set | Test set
--------------|------------------|--------------|----------------|---------
Fashion-MNIST | 28x28            | 55,000       | 5,000          | 10,000
CIFAR-10      | 32x32            | 55,000       | 5,000          | 10,000
CelebA        | 128x128          | 80,000       | 5,000          | 15,000
Texture       | 160x160          | 20,000       | 2,000          | 4,000
Subsurface    | 160x160          | 20,000       | 2,000          | 4,000
Additional information:

- For Fashion-MNIST and CIFAR-10, we keep the original train/test split and sample 5,000 images from the training set to act as validation samples.
- For the Texture dataset, we sample patches randomly from a 3840x2400 image of a brick wall.
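The patch-extraction step for the Texture dataset can be sketched as follows. This is a minimal illustration, assuming uniform sampling of patch positions; the function name and the use of a fixed seed are our own choices, not the paper's code.

```python
import numpy as np

def sample_patches(image, patch_size=160, n_patches=4, seed=0):
    """Sample random square patches from a large source image, as done to
    build the Texture dataset from a 3840x2400 brick-wall photograph."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_patches):
        # pick the top-left corner so the patch fits entirely in the image
        top = rng.integers(0, h - patch_size + 1)
        left = rng.integers(0, w - patch_size + 1)
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)
```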
Appendix B Detailed deep architectures
B.1 DCGAN for Fashion-MNIST

Generator:

Layer type      | Units   | Scaling | Activation | Output shape
----------------|---------|---------|------------|-------------
Input z         | -       | -       | -          | 7x7
Input y         | -       | -       | -          | 28x28
Dense           | 343     | -       | ReLU       | 7x7
Conv2DTranspose | 128 3x3 | x2      | ReLU       | 14x14
Conv2DTranspose | 64 3x3  | x2      | ReLU       | 28x28
Conv2DTranspose | 1 3x3   | x1      | tanh       | 28x28

Discriminator:

Layer type | Units   | Scaling | Activation | Output shape
-----------|---------|---------|------------|-------------
Input x    | -       | -       | -          | 28x28
Input y    | -       | -       | -          | 28x28
Conv2D     | 64 3x3  | x1/2    | LeakyReLU  | 14x14
Conv2D     | 128 3x3 | x1/2    | LeakyReLU  | 7x7
Conv2D     | 1 3x3   | x1      | tanh       | 28x28
Dense      | 1       | -       | Sigmoid    | 1
Additional information:

- Batch normalization ioffe2015batch is applied across all the layers.
- Gaussian noise is added to the input of the discriminator.
B.2 UNetRes for CIFAR-10

Generator:

Layer type       | Units     | Scaling | Activation | Output shape
-----------------|-----------|---------|------------|-------------
Input y          | -         | -       | -          | 32x32
Conv2D*          | 64 5x5    | x1      | ReLU       | 32x32
Conv2D*          | 128 3x3   | x1/2    | ReLU       | 16x16
Conv2D*          | 256 3x3   | x1/2    | ReLU       | 8x8
Input z          | -         | -       | -          | 8x8
Dense            | 256       | -       | ReLU       | 8x8
Residual block   | 3x256 3x3 | x1      | ReLU       | 8x8
Residual block   | 3x256 3x3 | x1      | ReLU       | 8x8
Residual block   | 3x256 3x3 | x1      | ReLU       | 8x8
Residual block   | 3x256 3x3 | x1      | ReLU       | 8x8
Conv2DTranspose* | 256 3x3   | x2      | ReLU       | 16x16
Conv2DTranspose* | 128 3x3   | x2      | ReLU       | 32x32
Conv2DTranspose* | 64 3x3    | x1      | ReLU       | 32x32
Conv2D           | 3 3x3     | x1      | tanh       | 32x32

Discriminator:

Layer type | Units   | Scaling | Activation | Output shape
-----------|---------|---------|------------|-------------
Input x    | -       | -       | -          | 32x32
Input y    | -       | -       | -          | 32x32
Conv2D     | 64 3x3  | x1/2    | LeakyReLU  | 16x16
Conv2D     | 128 3x3 | x1/2    | LeakyReLU  | 8x8
Conv2D     | 256 3x3 | x1/2    | LeakyReLU  | 4x4
Dense      | 1       | -       | Sigmoid    | 1
Additional information:

- Instance normalization ulyanov2016instance is applied across all the layers instead of batch normalization; this is required by the use of the PacGAN technique.
- Gaussian noise is added to the input of the discriminator.
- The layers marked with an asterisk are linked by a skip connection.
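The "Scaling" column of these tables can be checked mechanically: each x1/2 layer halves the spatial resolution (stride-2 convolution), each x2 layer doubles it (strided transposed convolution), and x1 keeps it. A minimal helper, written for this illustration only:

```python
def trace_shapes(input_size, scalings):
    """Follow the 'Scaling' column of an architecture table and return the
    sequence of spatial sizes, starting from the input resolution."""
    factor = {"x2": 2.0, "x1": 1.0, "x1/2": 0.5}
    sizes = [input_size]
    for s in scalings:
        sizes.append(int(sizes[-1] * factor[s]))
    return sizes
```

For example, the CIFAR-10 encoder path (x1, x1/2, x1/2) maps 32x32 down to 8x8, matching the table above.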
B.3 UNetRes for CelebA

Generator:

Layer type       | Units     | Scaling | Activation | Output shape
-----------------|-----------|---------|------------|-------------
Input y          | -         | -       | -          | 128x128
Conv2D           | 64 5x5    | x1      | ReLU       | 128x128
Conv2D*          | 128 3x3   | x1/2    | ReLU       | 64x64
Conv2D*          | 256 3x3   | x1/2    | ReLU       | 32x32
Conv2D*          | 512 3x3   | x1/2    | ReLU       | 16x16
Input z          | -         | -       | -          | 16x16
Dense            | 256       | -       | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Residual block   | 3x256 3x3 | x1      | ReLU       | 16x16
Conv2DTranspose* | 256 3x3   | x2      | ReLU       | 32x32
Conv2DTranspose* | 128 3x3   | x2      | ReLU       | 64x64
Conv2DTranspose* | 64 5x5    | x2      | ReLU       | 128x128
Conv2D           | 3 3x3     | x1      | tanh       | 128x128

Discriminator:

Layer type | Units   | Scaling | Activation | Output shape
-----------|---------|---------|------------|-------------
Input x    | -       | -       | -          | 128x128
Input y    | -       | -       | -          | 128x128
Conv2D     | 64 3x3  | x1/2    | LeakyReLU  | 64x64
Conv2D     | 128 3x3 | x1/2    | LeakyReLU  | 32x32
Conv2D     | 256 3x3 | x1/2    | LeakyReLU  | 16x16
Conv2D     | 512 3x3 | x1/2    | LeakyReLU  | 8x8
Dense      | 1       | -       | Sigmoid    | 1
This network follows the same additional setup as described in Appendix B.2.
B.4 Architectures for Texture

B.4.1 PatchGAN discriminator

Layer type | Units   | Scaling | Activation | Output shape
-----------|---------|---------|------------|-------------
Input x    | -       | -       | -          | 160x160
Input y    | -       | -       | -          | 160x160
Conv2D     | 64 3x3  | x1/2    | LeakyReLU  | 80x80
Conv2D     | 128 3x3 | x1/2    | LeakyReLU  | 40x40
Conv2D     | 256 3x3 | x1/2    | LeakyReLU  | 20x20
Conv2D     | 512 3x3 | x1/2    | LeakyReLU  | 10x10
B.4.2 UpDil Texture

Layer type      | Units           | Scaling | Activation | Output shape
----------------|-----------------|---------|------------|-------------
Input z         | -               | -       | -          | 20x20
Conv2DTranspose | 256 3x3         | x2      | ReLU       | 40x40
Conv2DTranspose | 128 3x3         | x2      | ReLU       | 80x80
Conv2DTranspose | 64 3x3          | x2      | ReLU       | 160x160
Input y         | -               | -       | -          | 160x160
Conv2D          | 64 3x3, dil. 1  | x1      | ReLU       | 160x160
Conv2D          | 128 3x3, dil. 2 | x1      | ReLU       | 160x160
Conv2D          | 256 3x3, dil. 3 | x1      | ReLU       | 160x160
Conv2D          | 512 3x3, dil. 4 | x1      | ReLU       | 160x160
Conv2D          | 3 3x3           | x1      | tanh       | 160x160
B.4.3 UpEncDec Texture

Layer type       | Units   | Scaling | Activation | Output shape
-----------------|---------|---------|------------|-------------
Input z          | -       | -       | -          | 20x20
Conv2DTranspose  | 256 3x3 | x2      | ReLU       | 40x40
Conv2DTranspose  | 128 3x3 | x2      | ReLU       | 80x80
Conv2DTranspose  | 64 5x5  | x2      | ReLU       | 160x160
Input* y         | -       | -       | -          | 160x160
Conv2D*          | 64 3x3  | x1/2    | ReLU       | 80x80
Conv2D*          | 128 3x3 | x1/2    | ReLU       | 40x40
Conv2D           | 256 3x3 | x1/2    | ReLU       | 20x20
Conv2DTranspose* | 256 3x3 | x2      | ReLU       | 40x40
Conv2DTranspose* | 128 3x3 | x2      | ReLU       | 80x80
Conv2DTranspose* | 64 3x3  | x2      | ReLU       | 160x160
Conv2D           | 3 3x3   | x1      | tanh       | 160x160
B.4.4 UNet Texture

Layer type       | Units   | Scaling | Activation | Output shape
-----------------|---------|---------|------------|-------------
Input y          | -       | -       | -          | 160x160
Conv2D           | 64 5x5  | x1      | ReLU       | 160x160
Conv2D*          | 128 3x3 | x1/2    | ReLU       | 80x80
Conv2D*          | 256 3x3 | x1/2    | ReLU       | 40x40
Conv2D*          | 512 3x3 | x1/2    | ReLU       | 20x20
Input z          | -       | -       | -          | 20x20
Conv2DTranspose* | 256 3x3 | x2      | ReLU       | 40x40
Conv2DTranspose* | 128 3x3 | x2      | ReLU       | 80x80
Conv2DTranspose* | 64 5x5  | x2      | ReLU       | 160x160
Conv2D           | 3 3x3   | x1      | tanh       | 160x160
B.4.5 Res Texture

Layer type      | Units     | Scaling | Activation | Output shape
----------------|-----------|---------|------------|-------------
Input y         | -         | -       | -          | 160x160
Conv2D          | 64 5x5    | x1      | ReLU       | 160x160
Conv2D          | 128 3x3   | x1/2    | ReLU       | 80x80
Conv2D          | 256 3x3   | x1/2    | ReLU       | 40x40
Conv2D          | 512 3x3   | x1/2    | ReLU       | 20x20
Input z         | -         | -       | -          | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block  | 3x256 3x3 | x1      | ReLU       | 20x20
Conv2DTranspose | 256 3x3   | x2      | ReLU       | 40x40
Conv2DTranspose | 128 3x3   | x2      | ReLU       | 80x80
Conv2DTranspose | 64 5x5    | x2      | ReLU       | 160x160
Conv2D          | 3 3x3     | x1      | tanh       | 160x160
B.4.6 UNetRes Texture

Layer type       | Units     | Scaling | Activation | Output shape
-----------------|-----------|---------|------------|-------------
Input y          | -         | -       | -          | 160x160
Conv2D           | 64 5x5    | x1      | ReLU       | 160x160
Conv2D*          | 128 3x3   | x1/2    | ReLU       | 80x80
Conv2D*          | 256 3x3   | x1/2    | ReLU       | 40x40
Conv2D*          | 512 3x3   | x1/2    | ReLU       | 20x20
Input z          | -         | -       | -          | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Residual block   | 3x256 3x3 | x1      | ReLU       | 20x20
Conv2DTranspose* | 256 3x3   | x2      | ReLU       | 40x40
Conv2DTranspose* | 128 3x3   | x2      | ReLU       | 80x80
Conv2DTranspose* | 64 5x5    | x2      | ReLU       | 160x160
Conv2D           | 3 3x3     | x1      | tanh       | 160x160
As for CIFAR-10, this network follows the same additional setup described in Appendix B.2.
Appendix C Domain-specific metrics for underground soil generation
In this section, we compute the connectivity function lemmens2017 of generated soil images, a domain-specific metric defined as the probability that a continuous pixel path exists between two pixels of the same value (called facies) in a given direction and at a given distance (called lag). This connectivity function should be similar to the one obtained on real-world samples. In this application, the connectivity function models the probability that two given pixels are from the same sand or clay matrix zone.
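A minimal sketch of such a connectivity estimator is given below. It is an illustration under our own assumptions, 4-connectivity and a single axis-aligned lag direction; lemmens2017 may define the neighborhood and directions differently.

```python
import numpy as np
from collections import deque

def connectivity(img, facies, lag, axis=1):
    """Estimate the probability that two pixels of value `facies`, separated
    by `lag` along `axis`, belong to the same 4-connected component."""
    mask = (img == facies)
    labels = np.full(img.shape, -1, dtype=int)
    h, w = img.shape
    current = 0
    # label 4-connected components of the facies with a BFS flood fill
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] < 0:
                q = deque([(i, j)])
                labels[i, j] = current
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] < 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                current += 1
    # pair each pixel with the pixel `lag` steps further along `axis`
    n = img.shape[axis]
    a = np.take(mask, range(0, n - lag), axis=axis)
    b = np.take(mask, range(lag, n), axis=axis)
    la = np.take(labels, range(0, n - lag), axis=axis)
    lb = np.take(labels, range(lag, n), axis=axis)
    pairs = a & b  # both endpoints are of the requested facies
    if pairs.sum() == 0:
        return 0.0
    return float(((la == lb) & pairs).sum() / pairs.sum())
```

A fully connected facies yields a connectivity of 1 at every lag, while two disjoint bodies separated along the chosen axis yield 0.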
We sampled 100 real and 100 generated images using the UNetResPAC architecture (see Section 4.2) on which the connectivity function was evaluated for both the CGAN and our approach. The obtained graphs are shown respectively in Figures 6 and 7.
The blue curves are the mean values for the real samples, and the blue dashed curves are the minimum and maximum values over these samples. The green curves are the connectivity functions for each of the 100 synthetic samples, and the red curves are their mean connectivity functions. From these curves we observe that our approach yields connectivity functions similar to the CGAN approach while being significantly better at respecting the given constraints (see Table 4).
Appendix D Additional samples from the Texture and Subsurface datasets
In this section, we show samples generated with the UNetResPAC architecture, which performs best in our experiments (see Section 5), alongside real images sampled from the Texture (Figure 8) and Subsurface (Figure 9) datasets. For the generated samples, the enforced pixel constraints are colored in the images: green corresponds to a squared error below a fixed threshold, red otherwise.
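The coloring of the constraint pixels can be sketched as follows. The threshold value and function name are illustrative assumptions, since the exact threshold is not given in this excerpt.

```python
import numpy as np

def color_constraints(image_rgb, constraints, mask, threshold=1e-2):
    """Overlay constraint pixels on an RGB image in [0, 1]: green where the
    squared error is below the (illustrative) threshold, red otherwise."""
    out = image_rgb.copy()
    # per-pixel squared error between the image intensity and the constraint
    err = (image_rgb.mean(axis=-1) - constraints) ** 2
    given = mask.astype(bool)
    ok = given & (err < threshold)
    bad = given & (err >= threshold)
    out[ok] = (0.0, 1.0, 0.0)   # green: constraint honored
    out[bad] = (1.0, 0.0, 0.0)  # red: constraint violated
    return out
```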