The Angel is in the Priors: Improving GAN based Image and Sequence Inpainting with Better Noise and Structural Priors

08/16/2019 ∙ by Avisek Lahiri, et al. ∙ IIT Kharagpur

Contemporary deep learning based inpainting algorithms are mainly based on a hybrid dual stage training policy of a supervised reconstruction loss followed by an unsupervised adversarial critic loss. However, there is a dearth of literature on fully unsupervised GAN based inpainting frameworks. The primary aversion towards the latter genre is its prohibitively slow iterative optimization requirement during inference to find a matching noise prior for a masked image. In this paper, we show that priors matter in GANs: we learn a data driven parametric network to predict a matching prior for a given image. This converts the iterative paradigm into a single feed forward inference pipeline with a massive 1500X speedup and a simultaneous improvement in reconstruction quality. We show that an additional structural prior imposed on the GAN model results in higher fidelity outputs. To extend our model to sequence inpainting, we propose a recurrent net based grouped noise prior learning. To our knowledge, this is the first demonstration of unsupervised GAN based sequence inpainting. A further improvement in sequence inpainting is achieved with an additional subsequence consistency loss. These contributions improve the spatio-temporal characteristics of reconstructed sequences. Extensive experiments conducted on the SVHN, Stanford Cars, CelebA and CelebA-HQ image datasets, synthetic sequences, and the VidTIMIT video dataset reveal that we consistently improve upon the previous unsupervised baseline and also achieve performance comparable to (and sometimes better than) hybrid benchmarks.


1 Introduction

Image inpainting usually refers to filling up holes or masked regions with plausible pixel values coherent with the neighborhood context. Traditional techniques [2, 9] were mainly successful in inpainting backgrounds and scenes with repetitive textures by matching and copying background patches into holes. However, these methods fail on cases where patterns are unique or non-repetitive, such as faces and objects, and they also fail to capture the higher-level semantics of the scene. With the recent breakthrough in generative models such as the Variational Autoencoder (VAE) [14] and Generative Adversarial Networks (GAN) [8], inpainting is, in general, seen as an image completion problem. There are mainly two schools of approach, viz., a) completely unsupervised: conditioned on a prior latent/noise vector [29]; b) mixture of supervised + unsupervised: conditioned on the masked image [23, 11]. The latter methods depend heavily on an initial phase of fully supervised training (reconstruction loss between original and inpainted outputs within the mask), followed by a refinement stage with an adversarial loss to add high frequency components to the reconstructions. Going against the trend, we feel the true essence of GANs lies in their ability to generate data within a completely unsupervised framework. The former method of [29] is thus more difficult to train, because it has to ‘hallucinate’ an entire object with only a noise/latent vector as conditioning and no information about the masked/damaged pixels. Thus, although the latter school of approach has gained major attention in the inpainting community, in this paper we advocate the former, unsupervised genre (pixel values under the mask are never used). Being unsupervised is the merit of [29], but it also creates a run-time bottleneck: the algorithm performs iterative gradient descent optimization to find the ‘best matching’ noise prior corresponding to a damaged image. Such an iterative framework prohibits real-time applications.

Figure 1: Learning and inference (inpainting) with our learned noise prior model. Step 1: Learn a GAN model. Step 2: Freeze the GAN modules and learn to infer the noise prior from the masked input image. Step 3: During inference, given a masked image, predict a matching noise vector and use the pretrained GAN generator (G) to yield the final output. The dashed arrows show the flow of error gradients during the training phase.

In this paper we primarily aim to massively accelerate inference runtime (we achieve a 1500X speedup compared to [29]) with a simultaneous improvement in visual quality by parametrically learning noise priors. Another issue with inpainting (both supervised and unsupervised) is the multi-modal completion possibility of a masked region. For example, a masked lip region of a face may be completed as smiling or neutral. We show that it is possible to regularize the inpainted outputs with structural priors; for a face, for example, we can use facial landmarks as priors. Lastly, single image inpainting models cannot be applied appreciably to videos: though each frame might be visually pleasing, when viewed as a sequence there is a lot of jitter and flicker due to the temporal inconsistency of the models. We propose to subdue such inconsistencies with a recurrent net based grouped noise prior learning combined with a subsequence consistency constraint. Our contributions can be summarized as follows:

  1. An unsupervised, data driven GAN noise prior prediction framework that converts the iterative paradigm of [29] into a single feed forward pipeline with visually better reconstruction and a simultaneous, massive 1500X speedup of inference time.

  2. Augmenting structural priors to improve GAN samples which eventually results in better reconstructions. Such priors also regularize GAN training to respect pose and size of objects.

  3. Pioneering effort towards GAN based sequence inpainting with a recurrent neural net based grouped prior learning for better temporal consistency of reconstructed sequences compared to both supervised and unsupervised benchmarks.

  4. A sub-sequence consistency loss to further improve the temporal smoothness of reconstructed sequences.

  5. We exhaustively validate our models on the CelebA, SVHN, Stanford Cars and CelebA-HQ image datasets and the VidTIMIT video dataset.

2 Related works

Traditional image inpainting methods [1, 4, 6, 7] broadly worked by matching patches and diffusing low-level features from unmasked sections into the masked region. These methods mainly worked on synthesis of stationary textures of background scenes, where it is plausible to find a matching patch in the unmasked regions. However, complex objects lack such redundancy of appearance features, and thus recent methods leverage the hierarchical feature learning capability of deep neural nets to learn higher-order semantics of a scene. Initial deep learning based methods [15, 28] were completely supervised and trained with a conservative reconstruction loss. With the advent of GANs, a common practice [23, 11] has been to refine the blurry reconstructions of the supervised reconstruction loss with an adversarial loss coming from a discriminator which is simultaneously trained to distinguish real samples from inpainted samples. Notably, the first work within this paradigm was the Context Encoder (CE) [23] by Pathak et al., in which the authors tried to learn scene representation along with inpainting. Iizuka et al. proposed ‘Globally and Locally Consistent Image Completion’ (GLCIC) [11], in which an inpainter/generator network is pitted against two discriminators, one gauging the realism of the entire image and the other measuring the fidelity of the local reconstruction of the masked patch region. Recently, Yu et al. [30] improved upon GLCIC by incorporating contextual attention within the inpainting network so that the net learns to leverage distant information from uncorrupted pixels. These methods share a common pipeline of a fully supervised training stage followed by adversarial refinement; they are therefore not fully unsupervised, since paired examples (masked and unmasked) are required during training.

In this paper, we advocate a fully unsupervised approach (information about the masked pixels is not used anywhere in the training pipeline) to inpainting, pioneered by Yeh et al. [29]. In [29], the idea is to first train a GAN framework conditioned only on a noise prior (z) sampled from a known prior distribution. At test time, since their method is completely unsupervised, the authors use an iterative gradient descent optimization to find the ‘best matching’ z vector for the damaged image using the pre-trained generator and discriminator networks of the GAN. However, this iterative optimization takes about 2.5 minutes per image and is thus not suitable for practical applications. We consider the framework of [29] as a baseline and seek to improve upon its inference time and reconstruction quality. In the process, we also achieve comparable performance to contemporary hybrid trained methods.

Figure 2: Illustration of the multi-modal image completion possibility of GAN based inpainting methods. Given a corrupted image, an unconditioned inpainting algorithm (top row) such as [29, 23, 11, 30] samples from a uniform distribution of viable inpainted images. However, if conditioned by structural priors (bottom row), the sampling distribution is biased towards samples which preserve the original facial pose and expression.

3 Background

3.1 GAN Basics

Proposed by Goodfellow et al. [8], a GAN model consists of two parametrized deep neural nets, viz., a generator, $G$, and a discriminator, $D$. The task of the generator is to yield an image, $G(z)$, with a latent noise prior vector, $z$, as input. $z$ is sampled from a known distribution, $p_z(z)$; a common choice [8] is a simple distribution such as $\mathcal{N}(0, I)$. The discriminator is pitted against the generator to distinguish real samples (sampled from $p_{data}(x)$) from fake/generated samples. Specifically, the discriminator and generator play the following minimax game on $V(D, G)$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \qquad (1)$$

With enough capacity, on convergence, $G$ fools $D$ at random [8].
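For concreteness, the minimax game of Eq. 1 can be realized as a simple alternating update. The following is a minimal PyTorch sketch, assuming a DCGAN-style generator and a discriminator that ends in a sigmoid; the latent dimension and the $\mathcal{N}(0, I)$ prior are illustrative choices, not taken from the paper.

```python
# Minimal sketch of one alternating update of the game in Eq. 1 (PyTorch).
# Assumes D outputs a probability in (0, 1); z_dim and the N(0, I) prior are illustrative.
import torch

def gan_step(G, D, opt_g, opt_d, x_real, z_dim=100):
    b = x_real.size(0)

    # Discriminator: maximize log D(x) + log(1 - D(G(z)))  (minimize the negative)
    z = torch.randn(b, z_dim, device=x_real.device)
    x_fake = G(z).detach()
    d_loss = -(torch.log(D(x_real) + 1e-8).mean()
               + torch.log(1.0 - D(x_fake) + 1e-8).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool D (non-saturating form: maximize log D(G(z)))
    z = torch.randn(b, z_dim, device=x_real.device)
    g_loss = -torch.log(D(G(z)) + 1e-8).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```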

3.2 Baseline GAN based unsupervised inpainting

We first review the unsupervised inpainting baseline of Yeh et al. [29]. Given a damaged image, $y$, corresponding to an original image, $x$, and a pre-trained GAN model, the idea is to iteratively find the ‘closest’ $z$ vector (starting randomly from $z \sim p_z(z)$) which results in a reconstructed image whose semantics are similar to the corrupted image. $z$ is optimized as,

$$\hat{z} = \arg\min_z \; \mathcal{L}\big(M \odot G(z),\; M \odot y\big) \qquad (2)$$

where $M$ is the binary mask with zeros on the masked region and ones elsewhere, $\odot$ is the Hadamard operator and $\mathcal{L}$ is any loss function. It is interesting to note that the loss function never makes use of pixels inside the masked region. Upon convergence, the inpainted image, $\hat{x}$, is given as $\hat{x} = M \odot y + (1 - M) \odot G(\hat{z})$.
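For reference, a minimal PyTorch sketch of this iterative baseline is given below. The 1500 iterations follow the setting reported later in Section 5.1.1, while the optimizer, learning rate and realism weight are illustrative assumptions; the realism term follows the prior loss of [29].

```python
# Sketch of the iterative inpainting baseline of [29] (Eq. 2): gradient descent on z
# with the GAN modules frozen. Hyper-parameters (lr, lam) are illustrative.
import torch

def inpaint_iterative(G, D, y, M, z_dim=100, steps=1500, lr=0.01, lam=0.003):
    """y: corrupted image batch (B,C,H,W); M: binary mask, 1 on known pixels, 0 inside holes."""
    for p in list(G.parameters()) + list(D.parameters()):
        p.requires_grad_(False)                      # only z is optimized
    z = torch.randn(y.size(0), z_dim, device=y.device, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_hat = G(z)
        loss_ctx = torch.abs(M * x_hat - M * y).mean()          # pixels under the mask never used
        loss_real = torch.log(1.0 - D(x_hat) + 1e-8).mean()     # prior/realism term of [29]
        loss = loss_ctx + lam * loss_real
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                                       # compose the final output
        return M * y + (1.0 - M) * G(z)
```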

4 Proposed Method

4.1 Data driven Noise Prior Learning

Though the unsupervised characteristic of [29] is encouraging for the generative learning community, the iterative optimization is a major bottleneck in the pipeline. Instead of iteratively optimizing the noise prior, $z$, for each test image at runtime, we propose to learn an unsupervised offline parametric model, $P_\theta$, for predicting the $z$ vector. The parameter set, $\theta$, is optimized to minimize the following unsupervised losses:
Contextual Loss: This loss ensures that the predicted noise prior preserves fidelity with respect to the original unmasked regions.

$$\mathcal{L}_{ctx}(\theta) = \big\| M \odot G(P_\theta(y)) - M \odot y \big\|_1 \qquad (3)$$

Realism Loss: This loss ensures that the inpainted output lies near the original/real data manifold and is measured by the log likelihood of belonging to the real class assigned by the pre-trained discriminator:

$$\mathcal{L}_{real}(\theta) = -\log D\big(G(P_\theta(y))\big) \qquad (4)$$

Gradient Difference Loss: Inspired by [21, 22], we also use a gradient difference loss imposed between the (horizontal and vertical) gradient matrices of the original and reconstructed outputs. This compels the network to predict noise priors whose reconstructions retain high frequency content and respect the gradients of the original scene.

$$\mathcal{L}_{gdl}(\theta) = \big\| \nabla_h(M \odot G(P_\theta(y))) - \nabla_h(M \odot y) \big\|_2^2 + \big\| \nabla_v(M \odot G(P_\theta(y))) - \nabla_v(M \odot y) \big\|_2^2 \qquad (5)$$

Please note that the loss is still calculated on the unmasked regions only. In summary, the parameter set, $\theta$, is optimized to minimize the combined loss, $\mathcal{L}_{total}$,

$$\mathcal{L}_{total}(\theta) = \mathcal{L}_{ctx}(\theta) + \lambda_{real}\,\mathcal{L}_{real}(\theta) + \lambda_{gdl}\,\mathcal{L}_{gdl}(\theta) \qquad (6)$$

where the $\lambda$'s control the relative importance of each loss factor. After convergence of the training of $P_\theta$, given a masked image, $y$, and mask, $M$, we can get the inpainted output, $\hat{x}$, in one feed forward step instead of the iterative optimization of [29]. The inpainted image, $\hat{x}$, is given by,

$$\hat{x} = M \odot y + (1 - M) \odot G(P_\theta(y)) \qquad (7)$$

Though Eq. 2 and Eq. 6 are functionally the same, prediction using a learned parametric network tends to perform better than ad hoc iterative optimization. This is because, as training evolves, the network learns to adapt its parameters to map images with closely matching appearances to similar $z$ vectors. A parameter update for a given image thus implicitly generalizes to images with similar characteristics.
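A compact PyTorch sketch of the predictor training loss (Eqs. 3-6) and the single feed forward inference of Eq. 7 is shown below; the encoder P, the loss weights and the norm choices are illustrative assumptions rather than the exact configuration of the paper.

```python
# Sketch of the learned noise-prior predictor P (Eqs. 3-7). G and D are the frozen,
# pre-trained GAN modules; loss weights and norm choices are illustrative.
import torch
import torch.nn.functional as F

def gradient_maps(img):
    """Horizontal and vertical finite differences of an image batch (B,C,H,W)."""
    dh = img[:, :, :, 1:] - img[:, :, :, :-1]
    dv = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dh, dv

def predictor_loss(P, G, D, y, M, lam_real=0.01, lam_gdl=0.1):
    """Combined unsupervised loss of Eq. 6; pixels under the mask are never used."""
    z = P(y)                                   # predicted noise prior for the masked image
    x_hat = G(z)
    # Eq. 3: contextual loss on unmasked pixels only
    l_ctx = torch.abs(M * x_hat - M * y).mean()
    # Eq. 4: realism loss from the frozen discriminator
    l_real = -torch.log(D(x_hat) + 1e-8).mean()
    # Eq. 5: gradient difference loss, again restricted to unmasked regions
    dh_r, dv_r = gradient_maps(M * x_hat)
    dh_o, dv_o = gradient_maps(M * y)
    l_gdl = F.mse_loss(dh_r, dh_o) + F.mse_loss(dv_r, dv_o)
    return l_ctx + lam_real * l_real + lam_gdl * l_gdl

@torch.no_grad()
def inpaint_feedforward(P, G, y, M):
    """Eq. 7: one forward pass replaces the 1500-step optimization of [29]."""
    return M * y + (1.0 - M) * G(P(y))
```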

4.2 Regularization with Structural Priors

Image inpainting intrinsically suffers from a multi-modal completion problem: a given masked region has multiple plausible completions. For example, consider Fig. 2: under an unconstrained optimization setup, the masked region of the face can be inpainted with different facial expressions. From a single image inpainting point of view this might not be an issue, but for sequences it is desirable to maintain a smooth flow of scene dynamics; a laughing face, for example, cannot suddenly be inpainted as a neutral frame. We propose to further regularize our network by augmenting structural priors. A structural prior can be any representation which captures the pose and size of the object to be inpainted, thereby compelling the network to yield outputs that respect such priors. These additional priors can be seen as conditional variables, $s$, to the GAN framework. The formulation of Eq. 1 changes subtly to respect the joint distribution of real samples and conditional information. The modified game becomes:

$$\min_G \max_D V(D, G) = \mathbb{E}_{(x, s) \sim p_{data}}[\log D(x, s)] + \mathbb{E}_{z \sim p_z,\, s \sim p_{data}}[\log(1 - D(G(z, s), s))] \qquad (8)$$

The noise prior predictor network, $P_\theta$, now has to optimize Eq. 6 while respecting the structural prior as an additional constraint.

In this paper, without loss of generality, we have considered face inpainting with semantic priors in the form of facial landmarks, extracted automatically in real time (5 ms at 256×256 resolution) using the robust framework of Kazemi and Sullivan [13], which achieves benchmark performance on face alignment.
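As an illustration of how such a structural prior can be obtained and fed to the conditioned GAN of Eq. 8, the sketch below extracts the 68 facial landmarks with dlib's implementation of the Kazemi-Sullivan regressor [13] and rasterizes them into a single-channel heatmap; the model file path and the heatmap encoding are assumptions, not necessarily the paper's exact representation.

```python
# Sketch: facial-landmark structural prior via dlib's Kazemi-Sullivan regressor [13].
# The heatmap encoding (ones at landmark positions) is an illustrative choice.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model file

def landmark_heatmap(img, size=64):
    """img: HxW (or HxWx3) uint8 image. Returns a (size, size) float32 landmark map."""
    h, w = img.shape[:2]
    heat = np.zeros((size, size), dtype=np.float32)
    faces = detector(img, 1)
    if len(faces) == 0:
        return heat                              # no face found: empty prior
    shape = predictor(img, faces[0])
    for i in range(shape.num_parts):             # 68 landmark points
        x = int(shape.part(i).x * size / w)
        y = int(shape.part(i).y * size / h)
        heat[min(max(y, 0), size - 1), min(max(x, 0), size - 1)] = 1.0
    return heat

# The resulting map s can be concatenated as an extra channel to the inputs of
# G(z, s) and D(x, s), following the conditional formulation of Eq. 8.
```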

4.3 Grouped Noise Prior Learning for Sequences

To the best of our knowledge, this is the first demonstration of GAN based, completely unsupervised sequence inpainting. A naive way to apply the formulation of Eq. 6 to sequences is to inpaint individual frames independently. However, such an approach fails to learn the temporal dynamics of the sequence and thereby yields jittering effects. In this regard, for a sequence of frames, we propose to use a Recurrent Neural Network (RNN) to jointly predict the $z$ vectors for a subset of frames at a time. An RNN maintains a hidden state that summarizes the information observed up to the current time step. The hidden state is updated after looking at the previous hidden state and the corrupted image (with an additional option to condition on structural priors), leading to more consistent reconstructions in terms of appearance.

Since RNNs suffer from the vanishing gradient problem [3] and are unable to capture long-term dependencies, we use Long Short Term Memory (LSTM) networks [10]. Fig. 3 shows our LSTM based framework architecture for jointly inpainting a group of frames. Let $\{y_1, \dots, y_T\}$ be a group of $T$ corrupted successive frames. Initially, each frame is passed through a CNN module (with the same architecture as $P_\theta$, except that the last layer outputs an intermediate feature vector instead of $z$) to obtain the input sequence for the recurrent network. We obtain the predicted prior, $z_t$, by feeding the hidden state, $h_t$, of the recurrent network to a fully-connected layer. $z_t$ is then used for the reconstruction, $\hat{x}_t$, with the help of the pre-trained generator, $G$. We use the loss function of Eq. 6, averaged over the grouped window of frames, to optimize the parameters of the LSTM and the CNN descriptor network. Specifically, the grouped prior loss, $\mathcal{L}_{group}$, is defined as,

$$\mathcal{L}_{group}(\theta) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_{total}(\theta;\, y_t, M_t) \qquad (9)$$

Please note, the parameters of the pre-trained generator and discriminator are kept frozen.
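A minimal PyTorch sketch of the LSTM-CNN grouped prior predictor is given below; the encoder backbone, feature and hidden sizes are illustrative, and only the contextual term of Eq. 6 is written out for brevity.

```python
# Sketch of the grouped noise-prior predictor of Section 4.3. The per-frame CNN encoder,
# feat_dim, hidden and z_dim are illustrative; G stays frozen as in the paper.
import torch
import torch.nn as nn

class GroupedPriorNet(nn.Module):
    def __init__(self, cnn_encoder, feat_dim=256, hidden=512, z_dim=100):
        super().__init__()
        self.encoder = cnn_encoder                      # per-frame CNN descriptor
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.to_z = nn.Linear(hidden, z_dim)            # hidden state -> predicted prior z_t

    def forward(self, frames):
        """frames: (B, T, C, H, W) corrupted frames -> (B, T, z_dim) noise priors."""
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        hidden_states, _ = self.lstm(feats)
        return self.to_z(hidden_states)

def grouped_prior_loss(net, G, frames, masks):
    """Eq. 9: the per-frame loss of Eq. 6 (only its contextual term here, for brevity),
    averaged over the T frames of the group."""
    z_seq = net(frames)
    loss = 0.0
    for t in range(frames.size(1)):
        x_hat = G(z_seq[:, t])
        loss = loss + torch.abs(masks[:, t] * x_hat - masks[:, t] * frames[:, t]).mean()
    return loss / frames.size(1)
```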

Figure 3: Grouped noise prior learning with a combined LSTM-CNN framework. The unlock sign marks parameters that are updated.

4.4 Subsequence consistency loss

We further regularize the training of the LSTM framework with an implicit subsequence consistency loss over a group of neighboring frames. The motivation is that a group of adjacent frames in a video exhibits close coherence of appearance. Thus, we define a subsequence clique as a collection of adjacent frames and penalize the model if the appearances of the frames differ from each other. The disparity between two inpainted images, $\hat{x}_i$ and $\hat{x}_j$, can be approximated by the Euclidean distance between their latent vectors ($\|z_i - z_j\|_2$). We define the loss, $\mathcal{L}_{sub}$, as,

$$\mathcal{L}_{sub} = \sum_{(i, j) \in \text{clique},\, i \neq j} \big\| z_i - z_j \big\|_2^2 \qquad (10)$$

So, $\mathcal{L}_{group}$ helps in learning the temporal dynamics while $\mathcal{L}_{sub}$ explicitly fosters temporal smoothness. If $\mathcal{L}_{sub}$ dominates, the network is penalized by $\mathcal{L}_{group}$, because over-smoothing of a sequence is not a true characterization of a real world sequence. The final loss function for the combined LSTM-CNN framework is given by,

$$\mathcal{L}_{LSTM} = \mathcal{L}_{group} + \lambda_{sub}\, \mathcal{L}_{sub} \qquad (11)$$

where $\lambda_{sub}$ sets the relative importance of subsequence consistency. Please note, $\mathcal{L}_{sub}$ is applied only on a neighborhood of frames and not on the entire sequence. Applying it on the entire sequence is not a true representation of temporal dynamics, because we would then penalize appearance changes even over distant frames. On the contrary, reducing $\lambda_{sub}$ to zero means no explicit temporal consistency loss.
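The subsequence consistency term amounts to a pairwise penalty on the predicted latent vectors within sliding cliques of adjacent frames; a minimal sketch follows, where the clique size and the weight lam_sub are illustrative.

```python
# Sketch of the subsequence consistency loss (Eq. 10) and the combined objective (Eq. 11).
# The clique size and lam_sub are illustrative assumptions.
import torch

def subsequence_consistency(z_seq, clique=3):
    """z_seq: (B, T, z_dim) predicted priors. Penalize pairwise latent distances
    within every window of `clique` adjacent frames."""
    loss, count = 0.0, 0
    T = z_seq.size(1)
    for start in range(T - clique + 1):
        window = z_seq[:, start:start + clique]
        for i in range(clique):
            for j in range(i + 1, clique):
                loss = loss + (window[:, i] - window[:, j]).pow(2).sum(dim=1).mean()
                count += 1
    return loss / max(count, 1)

def total_sequence_loss(l_group, z_seq, lam_sub=0.1):
    """Eq. 11: grouped prior loss plus weighted subsequence consistency."""
    return l_group + lam_sub * subsequence_consistency(z_seq)
```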

Figure 4: Visualization of initial inpainting solutions by the iterative framework of [29] (which requires 1.5K iterations in total) compared to our single feed forward pass network. Column 1: Original image; Column 2: masked image; Column 3: Initial solution by [29]; Column 4: Proposed one feed forward solution.
Figure 5: Proposed structural priors in GAN help in generating better samples compared to vanilla GAN. Random samples from proposed framework are structurally more consistent and complete. This is eventually important for better inpainting.

5 Results

5.1 Single Image Inpainting

We experiment on cropped SVHN[12], Stanford Cars[16], CelebA[20] and CelebA-HQ[20]. SVHN crops are resized to 64×64. On Stanford Cars we use the bounding box information to extract and resize cars to 64×64. CelebA images are center cropped to 64×64 and 128×128. CelebA-HQ images are resized to 256×256. On SVHN and Cars, we use the dataset provider’s test/train split. For CelebA and CelebA-HQ, we keep 2000 images for testing.

5.1.1 Importance of Learned Noise Prior:

The most important improvement that we achieve over [29] is a significant speedup during inference. In Fig. 4 we compare the initial solution of [29] with our one-shot feed forward solution. Without any mechanism to estimate the noise prior from the masked image, the initial solutions of [29] lie far from the real data manifold, thereby mandating an iterative approach. Following the suggestions in [29], each image requires 1500 test time iterations. Our approach adds only a small amount of computation for the noise predictor network and a negligible overhead for the structural priors, thereby making our model almost 1500X faster compared to [29]. From Fig. 7, it is encouraging to see that even after the iterative optimization, the visual quality of our method is usually superior to that of [29].

Figure 6: Structural priors enable the GAN to disentangle facial pose and appearance cues. Left: Faces sampled with the same z vector but different structural priors. Right: Faces sampled with different z vectors for a given structural prior. Even if some keypoints are missing/occluded, our model generates plausible textures.

5.1.2 Importance of Structural Priors:

In this paper, we have considered the special case of face inpainting with semantic priors as facial landmarks detected by the robust framework of [13]. We observed three fold benefits of leveraging such priors.
Improved GAN Samples and Reconstructions: Conditioning on structural priors forces the generator to yield samples closer to the natural data manifold. Random samples from such a conditioned generator are thus more photo-realistic (see Fig. 5) compared to the unconditioned vanilla version of GAN used by [29]. Towards this end, we visually compare (following the protocol in [26]) the quality of random samples from our proposed semantically conditioned GAN and [29] at resolutions of 64×64 and 128×128. For the visual Turing test, a human annotator is randomly shown a total of 200 images (100 real and 100 generated) in groups of 20 and asked to label each sample as real or fake. Decisions from 10 annotators are taken. On average, at 64×64 resolution the classification accuracy is 5.8% higher for DIP, and 4.2% higher at 128×128 resolution. Thus, human annotators found it more difficult to distinguish samples from our model compared to DIP.
Control of Pose and Expression: With structural priors, the generator learns to disentangle appearance and pose. A given semantic prior should force the generator to create a face with matching head pose and facial expression, while two nearby z vectors result in similar facial textures. In Fig. 6 we show such disentanglement learned by our model.
Greater structural fidelity to the reference image: In Fig. 8, we show the importance of structural priors on top of learned noise priors. Reconstructions with only our proposed learned noise priors might be realistic in isolation, but they are not penalized for changing facial expressions. For example, a (masked) smiling face can be inpainted as a neutral face when conditioning only on a learned noise prior. However, if we constrain the model with structural priors, the reconstructions are more coherent in appearance and expression with the reference image. Such structural fidelity is key in achieving temporally more consistent sequence reconstructions, as discussed in the upcoming sections.

Figure 7: Comparative visualization of inpainting. In each set, column 1: original image, column 2: masked image, column 3: unsupervised baseline of [29], column 4: proposed learned noise prior conditioned model (Eq. 6). Proposed reconstructions are usually better, yet our model is about 1500X faster than [29].
Figure 8: Benefit of structural prior augmented GAN based inpainting. In each sub-figure, Column 1: Original image, Column 2: Masked image, Column 3: Inpainted by a GAN model conditioned on the proposed learned noise prior, Column 4: Inpainted by a GAN model conditioned on the proposed learned noise + structural prior. Structural priors regularize the network to respect facial expression during reconstruction.
Figure 9: Visualizing consistency of inpainting synthetic sequences. A synthetic sequence is created by masking a given image with different corruption patterns; ideally we want an inpainter to yield exactly the same output for every frame of a synthetic sequence. Top: Masked synthetic sequence. Middle: Inpainted sequence with Yeh et al. [29]. Bottom: Proposed inpainted sequence with the LSTM-CNN grouped prior. The proposed method yields more consistent sequences. Note how [29] changes the facial expression in each frame. The proposed framework uses context from neighboring frames to improve group-wise coherence; note how the lip region is coherent even when that region is masked in some frames.
Figure 10: Benefit of the subsequence consistency loss (Eq. 10) augmented with the grouped prior loss (Eq. 11). Left: A synthetic sequence in which the same image (a sample from CelebA-HQ at 256×256) is masked differently. Ideally, we want a model to inpaint both frames identically. Middle: Inpainting with the proposed LSTM grouped prior. Right: Inpainting with the LSTM grouped prior + proposed subsequence consistency loss. The LSTM grouped prior maintains the similarity of facial expressions (the right face is inpainted as neutral even though the lip region was masked) but suffers from subtle texture changes (see the highlighted eye regions). Augmenting the consistency loss reduces such appearance disparities. Best viewed zoomed in.

5.1.3 Comparison to Hybrid Benchmarks

Though our method is unsupervised, for completeness we also compare with the recent hybrid inpainting benchmarks of [23, 11, 30, 19]. To scale up our GAN model to 256×256, we follow the progressive training strategy of [12]. See Fig. 13 for visual examples.
Is the Supervised Phase Mandatory?
To seek an answer, we trained the models of [23, 11, 30, 19] without any reconstruction loss, using only the adversarial loss. We observe that these methods fail to perform in the absence of the reconstruction loss. In Fig. 12, we show some visual examples.

5.2 Sequence Inpainting

5.2.1 Temporal consistency and Synthetic Sequences

Recent deep learning based inpainting works have been restricted to single image inpainting; the video genre has not received much interest. Even where some works exist [18, 27], the reported results are in terms of per-frame PSNR, which does not take into account the temporal consistency/dynamics of the scene reconstructions. For example, it is very annoying for a viewer if the stationary portions of a series of frames are reconstructed with different appearances in each frame, thereby creating jitter effects.

We dedicate this section to analyzing the temporal consistency of different methods on synthetic sequences. A synthetic sequence, $S$, of length $N$ is formed by taking a single image, $x$, and masking it with different (or the same) corruption masks. An ideal inpainting model should be agnostic to the corruption masks and yield identical reconstructions for all frames. We define temporal consistency, $TC$, as the mean pairwise PSNR between all possible pairs $(i, j)$ of inpainted frames within a synthetic sequence, $S$, of length $N$:

$$TC(S) = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \mathrm{PSNR}\big(\hat{x}_i, \hat{x}_j\big) \qquad (12)$$

Eq. 12 allows us to quantify the consistency of a generative model; ideally, all inpainted frames are identical, so the pairwise reconstruction error is zero and $TC$ is maximized. Please note that this evaluation is not possible on real videos, because the transformation from one frame to another is not known and thus it is not possible to align the frames to a single frame of reference without incorporating interpolation noise from a motion compensator [5]. In Table 1 we compare the consistency of contemporary benchmarks. We see progressive improvement in consistency with the addition of the LSTM grouped prior and structural priors. Note that even the hybrid (supervised + adversarial) benchmarks manifest higher inconsistency, with the exception of [19], because it jointly trains the network with an inpainting loss and a face parsing loss. This bolsters the hypothesis that prior knowledge of object structure helps in inpainting.
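For clarity, the consistency measure of Eq. 12 amounts to the following computation (a minimal NumPy sketch, assuming frames scaled to [0, 1]):

```python
# Sketch of the temporal-consistency measure of Eq. 12: mean pairwise PSNR over all
# frame pairs of an inpainted synthetic sequence. Assumes pixel values in [0, 1].
import itertools
import numpy as np

def psnr(a, b, peak=1.0):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def temporal_consistency(frames):
    """frames: list of N inpainted frames (H, W, C) reconstructed from the same source image."""
    pairs = list(itertools.combinations(range(len(frames)), 2))
    return sum(psnr(frames[i], frames[j]) for i, j in pairs) / len(pairs)
```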

                             Temporal Consistency (dB) on Synthetic Sequences        |  PSNR (dB) on Single Images
                             SVHN @ 64        CelebA @ 128     Cars @ 64             |  SVHN @ 64        CelebA @ 128     CelebA-HQ @ 256
Genre      Method            RC    RF    RCh  RC    RF    RCh  RC    RF    RCh       |  RC    RF    RCh  RC    RF    RCh  RC    RF    RCh
Sup.+Adv   CE [23]           20.5  22.0  21.3 20.1  21.4  19.8 14.2  14.8  13.9      |  21.9  22.0  21.8 24.3  25.0  24.0 17.8  18.2  17.7
           GLCIC [11]        21.6  22.0  21.7 20.9  22.1  20.1 15.9  16.9  15.7      |  23.2  24.0  22.7 27.9  28.0  27.2 23.8  23.7  22.7
           GIP [30]          21.7  22.9  23.0 21.1  22.2  21.3 16.1  17.2  15.9      |  23.9  24.1  23.0 28.2  28.7  27.7 24.1  24.3  23.6
           GFC [19]          -     -     -    23.1  24.9  23.8 -     -     -         |  -     -     -    28.0  28.2  27.1 -     -     -
Unsup      Yeh et al. [29]   22.5  22.9  22.8 21.9  22.2  21.1 13.9  14.0  13.2      |  20.9  21.2  21.0 23.0  23.1  21.4 15.7  16.0  13.1
           Proposed: M1      23.8  25.9  24.2 22.6  24.0  23.0 15.2  15.6  15.1      |  23.0  23.8  22.5 24.8  25.2  23.7 20.1  20.4  18.9
           Proposed: M2      25.0  26.9  25.9 23.8  25.6  24.2 -     -     -         |  -     -     -    -     -     -    -     -     -
           Proposed: M3      -     -     -    24.1  27.4  27.1 -     -     -         |  -     -     -    27.4  27.9  26.4 22.6  23.0  22.0
           Proposed: M4      -     -     -    26.3  29.8  29.4 -     -     -         |  -     -     -    -     -     -    -     -     -
           Proposed: M5      -     -     -    27.6  28.0  26.9 -     -     -         |  -     -     -    -     -     -    -     -     -
Table 1: Comparison of temporal consistency in dB (Eq. 12) on synthetic sequences (2000 for CelebA, 4500 for SVHN) by competing algorithms (left section). In the right section we also report the mean PSNR of inpainting on different datasets. Higher values of consistency are better. We compare five variants of our proposed framework; M1: independent learned noise (z) prior, M2: LSTM-CNN grouped learned noise prior, M3: independent structural prior, M4: LSTM-CNN grouped noise prior + structural prior, M5: LSTM-CNN grouped noise prior + structural prior + subsequence consistency loss. Masks used are RC (Random Central): random 50-70% center mask, RF (Random Freehand): random 50% freehand mask and RCh (Random Checkerboard): 50% masked by random checkerboard grid masks. In summary: our unsupervised models have, in general, better temporal consistency and comparable PSNR relative to hybrid (supervised + adversarial) benchmark models.
Figure 11: Inpainting on VidTIMIT sequences by [29] (3rd row) and the proposed method, version M5 (4th row). See Table 1 for the definition of M5.

5.2.2 Importance of Subsequence Consistency Loss

In Fig. 10 we show a synthetic sequence in which the same face is masked differently. The proposed LSTM grouped prior based reconstruction is successful in maintaining the same overall facial expression but fails to maintain subtle textural consistency, as shown in the highlighted insets. The subsequence consistency loss helps in maintaining such subtle texture coherence, which results in improved temporal consistency. Again, please note, these differences are much easier to illustrate (and visualize) on such synthetic sequences than on real videos.

5.2.3 Application on Real Videos:

The experiments with synthetic sequences taught us three lessons, viz., a) LSTM-CNN based grouped noise prior learning is better than independent noise prior learning, b) the structural prior fosters higher fidelity, and c) the subsequence consistency loss helps in preserving subtle texture details. With this knowledge, we proceed to demonstrate the first attempt towards GAN based inpainting on real videos. For this, we selected the VidTIMIT dataset [24], which consists of video recordings of 43 subjects, each narrating 10 different sentences. Images of the CelebA dataset are of superior resolution to those of VidTIMIT. Due to this intrinsic difference in data distribution, we finetuned our pretrained (trained on CelebA) models on 33 randomly selected subjects of VidTIMIT. The videos of the remaining 10 subjects were kept for testing inpainting performance; in total, there are 9600 frames for testing. All faces are center cropped to 128×128.

Figure 12: Visualization of inpainting by contemporary baselines (GLCIC [11], GIP [30], GFC [19]) trained without the supervised reconstruction loss, i.e., only with the unsupervised adversarial loss. It is evident that training with only the unsupervised loss diminishes the efficacy of these methods. Our unsupervised method, however, consistently shows appreciable performance.

Evaluating Video Quality: MOVIE metric [25]: Traditional metrics such as PSNR and structural similarity (SSIM) are not a true reflection of human visual perception, as shown in recent studies [30, 17]; moreover, these metrics do not consider any temporal information. For this reason, we preferred the MOVIE metric [25]. MOVIE is a spatio-spectrally localized framework for assessing video quality by considering spatial, temporal and spatio-temporal aspects; a lower value of the MOVIE metric indicates a better video, and the metric has been found to correlate appreciably with human perception. In Table 2, we compare the average test set MOVIE metric. All variants of our proposed framework outperform [29]. With the independent noise prior model, we get better performance than [23] and comparable performance to [11, 30]. Adding the LSTM grouped prior and structural prior boosts our performance, with further improvement coming from the subsequence consistency loss. It is interesting to see that even if we compute the structural prior only on every third frame (and reuse it in between), there is only subtle degradation of performance. We show some video snippets in Fig. 11.

           Competing                               Proposed
Method     [29]   [23]   [11]   [30]   [19]        M1     M2     M4     M5     M6
MOVIE      0.68   0.60   0.52   0.42   0.35        0.48   0.31   0.23   0.18   0.22
Table 2: Comparison of the MOVIE metric [25] averaged over test sequences of the VidTIMIT dataset. A lower value of the metric indicates better perceptual quality of a reconstructed sequence. Refer to Table 1 for the definitions of proposed methods M1-M5. M6 is the framework of M5, but with structural priors evaluated only on every third frame.

6 Discussion and Conclusion

In this paper, we showed the importance of priors in GANs for pushing the performance envelope of the unsupervised inpainting framework of [29], with better inpainting quality and an almost 1500X speedup. The objective of this paper was to purposefully abstain from the contemporary practice of hybrid (supervised + unsupervised) training and to focus on creating a faster unsupervised framework with comparable visual performance. Our proposed framework with the grouped LSTM-CNN guided noise prior and structural prior manifests better spatio-temporal characteristics than contemporary hybrid baselines. This shows that current single image inpainting methods have further scope for improvement on videos, and the frameworks used here can be exploited by those algorithms as well. Given the current state of GAN research, it is not expected that a completely unsupervised GAN based inpainter can work on natural images such as the ImageNet or Places2 datasets (which hybrid methods are capable of handling due to their supervised reconstruction loss). However, as our understanding of GANs improves and we enable GAN models to generate natural scenes, the methods of this paper should seamlessly fit into those scenarios as well.

Figure 13: Visual comparison of inpainting with the hybrid methods CE [23], GLCIC [11], GFC [19] and GIP [30]. Our method performs reasonably comparably even though it is totally unsupervised. Note that at 256×256 (CelebA-HQ), the performance of the unsupervised baseline of [29] deteriorates drastically. Adopting the progressive training strategy of [12] enables our GAN to mimic the natural face distribution more faithfully, thereby enabling appreciable inpainting performance.

References

  • [1] C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, and J. Verdera (2001) Filling-in by joint interpolation of vector fields and gray levels. IEEE transactions on image processing 10 (8), pp. 1200–1211. Cited by: §2.
  • [2] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics (ToG) 28 (3), pp. 24. Cited by: §1.
  • [3] Y. Bengio, P. Simard, and P. Frasconi (1994) Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks 5 (2), pp. 157–166. Cited by: §4.3.
  • [4] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester (2000) Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 417–424. Cited by: §2.
  • [5] J. Caballero, C. Ledig, A. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi (2016) Real-time video super-resolution with spatio-temporal networks and motion compensation. CVPR. Cited by: §5.2.1.
  • [6] A. A. Efros and W. T. Freeman (2001) Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 341–346. Cited by: §2.
  • [7] A. A. Efros and T. K. Leung (1999) Texture synthesis by non-parametric sampling. In ICCV, pp. 1033. Cited by: §2.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In NIPS, pp. 2672–2680. Cited by: §1, §3.1.
  • [9] J. Hays and A. A. Efros (2007) Scene completion using millions of photographs. In ACM Transactions on Graphics (TOG), Vol. 26, pp. 4. Cited by: §1.
  • [10] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: §4.3.
  • [11] S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. ACM Transactions on Graphics (TOG) 36 (4), pp. 107. Cited by: §1, Figure 2, §2, Figure 12, §5.1.3, §5.2.3, Table 1, Table 2, Figure 13.
  • [12] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of gans for improved quality, stability, and variation. In ICLR, Cited by: §5.1.3, §5.1, Figure 13.
  • [13] V. Kazemi and J. Sullivan (2014) One millisecond face alignment with an ensemble of regression trees. In CVPR, pp. 1867–1874. Cited by: §4.2, §5.1.2.
  • [14] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §1.
  • [15] R. Köhler, C. Schuler, B. Schölkopf, and S. Harmeling (2014) Mask-specific inpainting with deep neural networks. In German Conference on Pattern Recognition, pp. 523–534. Cited by: §2.
  • [16] J. Krause, M. Stark, J. Deng, and L. Fei-Fei (2013) 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 554–561. Cited by: §5.1.
  • [17] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network.. In CVPR, Vol. 2, pp. 4. Cited by: §5.2.3.
  • [18] C. Li, Y. Ding, B. Yu, M. Xu, and Q. Zhang (2018) Inpainting of continuous frames of old movies based on deep neural network. In 2018 International Conference on Audio, Language and Image Processing (ICALIP), pp. 132–137. Cited by: §5.2.1.
  • [19] Y. Li, S. Liu, J. Yang, and M. Yang (2017) Generative face completion. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, pp. 3. Cited by: Figure 12, §5.1.3, §5.2.1, Table 1, Table 2, Figure 13.
  • [20] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738. Cited by: §5.1.
  • [21] M. Mathieu, C. Couprie, and Y. LeCun (2016) Deep multi-scale video prediction beyond mean square error. ICLR. Cited by: §4.1.
  • [22] D. Nie, R. Trullo, J. Lian, C. Petitjean, S. Ruan, Q. Wang, and D. Shen (2017) Medical image synthesis with context-aware generative adversarial networks. In MICCAI, pp. 417–425. Cited by: §4.1.
  • [23] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In CVPR, pp. 2536–2544. Cited by: §1, Figure 2, §2, §5.1.3, §5.2.3, Table 1, Table 2, Figure 13.
  • [24] C. Sanderson and B. C. Lovell (2009) Multi-region probabilistic histograms for robust and scalable identity inference. In International Conference on Biometrics, pp. 199–208. Cited by: §5.2.3.
  • [25] K. Seshadrinathan and A. C. Bovik (2010) Motion tuned spatio-temporal quality assessment of natural videos. IEEE transactions on image processing 19 (2), pp. 335–350. Cited by: §5.2.3, Table 2.
  • [26] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb (2017) Learning from simulated and unsupervised images through adversarial training. CVPR, pp. 2107–2116. Cited by: §5.1.2.
  • [27] X. Sun, R. Szeto, and J. J. Corso (2018) A temporally-aware interpolation network for video frame inpainting. arXiv preprint arXiv:1803.07218. Cited by: §5.2.1.
  • [28] L. Xu, J. S. Ren, C. Liu, and J. Jia (2014) Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems, pp. 1790–1798. Cited by: §2.
  • [29] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do (2017) Semantic image inpainting with deep generative models. In CVPR, pp. 5485–5493. Cited by: item 1, §1, §1, Figure 2, §2, §3.2, Figure 4, §4.1, Figure 11, Figure 7, Figure 9, §5.1.1, §5.1.2, §5.2.3, Table 1, Table 2, Figure 13, §6.
  • [30] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018) Generative image inpainting with contextual attention. In CVPR, Cited by: Figure 2, §2, Figure 12, §5.1.3, §5.2.3, Table 1, Table 2, Figure 13.