
Multi-feature Co-learning for Image Inpainting

05/21/2022
by   Jiayu Lin, et al.

Image inpainting has achieved great advances by simultaneously leveraging image structure and texture features. However, due to the lack of effective multi-feature fusion techniques, existing image inpainting methods still show limited improvement. In this paper, we design a deep multi-feature co-learning network for image inpainting, which includes Soft-gating Dual Feature Fusion (SDFF) and Bilateral Propagation Feature Aggregation (BPFA) modules. To be specific, we first use two branches to learn structure features and texture features separately. Then the proposed SDFF module integrates structure features into texture features, and meanwhile uses texture features as an auxiliary in generating structure features. Such a co-learning strategy makes the structure and texture features more consistent. Next, the proposed BPFA module enhances the connection from local features to overall consistency by co-learning contextual attention, channel-wise information, and feature space, which can further refine the generated structures and textures. Finally, extensive experiments are performed on benchmark datasets, including CelebA, Places2, and Paris StreetView. Experimental results demonstrate the superiority of the proposed method over the state-of-the-art. The source codes are available at https://github.com/GZHU-DVL/MFCL-Inpainting.


I Introduction

Image inpainting [2] aims at reconstructing damaged regions or removing unwanted regions of images while improving the visual aesthetics of the inpainted images, and has been widely used in low-level vision tasks such as restoring corrupted photos and object removal. The main challenge of image inpainting is how to generate reasonable structures and realistic textures. Traditional image inpainting methods, such as patch-based methods [5, 1], fill the hole with the patch most similar to the to-be-inpainted patch by searching over the undamaged region. Since the similarity is computed in the pixel domain of the image, these methods fail to generate objects with strong semantic information.

Deep learning-based methods have shown remarkable performance on image inpainting tasks [23, 22, 25, 30]. They can generate semantically consistent results by understanding high-level features of the images. Among these methods, the encoder-decoder architecture has been widely adopted since it can extract semantic features and generate visually pleasing content even when the hole is large. Variants of the encoder-decoder architecture such as U-Net [32] have also been developed for image inpainting, which enhance the feature connection between the encoder and the decoder. For irregularly corrupted images, Liu et al. [17] and Yu et al. [35] proposed to force the network to exploit valid pixels only, obtaining considerable performance. Nevertheless, the above methods do not make full use of the image structure features, so their generative networks have difficulty generating reasonable structures. This results in blur and artifacts around the hole boundaries.

Fig. 1: Visual results of the proposed method. From left to right: (a) Input images with holes. (b) Inpainted images without SDFF. (c) Inpainted images without BPFA. (d) Inpainted images with SDFF and BPFA. (e) Ground-truth images. We can observe that the inpainted images by our method have visually pleasing content.

To overcome the above problems, several methods were proposed to use structural knowledge. For example, Nazeri et al. [22] designed EdgeConnect with a two-stage model: an edge generator and an image completion generator. The edge generator is trained to predict the full edge map, while the image completion generator exploits the edge map as a structure prior to reconstruct the final image. Similar to EdgeConnect, Ren et al. [25] designed StructureFlow by using an edge-preserving smoothing technique [31], making the structure reconstruction contain more information, such as image color. However, these methods do not simultaneously employ the structure and texture features, thereby leading to inconsistent structures and textures in the output images. Based on this observation, Liu et al. [18] designed MED to learn structures and textures separately by a mutual encoder-decoder, in which the deep-layer features are learned as structures while the shallow-layer features are learned as textures. Although the consistency between structures and textures is improved, two major shortcomings remain: 1) The relationship between the structures and textures is not fully considered, resulting in limited consistency between them. 2) The context of an image is not sufficiently utilized, leading to an insufficient connection from local features to overall consistency.

Motivated by these two shortcomings, we propose a multi-feature co-learning method for image inpainting. Specifically, we present two novel modules: 1) A Soft-gating Dual Feature Fusion (SDFF) module is designed to reorganize structure and texture features so that their consistency is greatly enhanced. 2) A Bilateral Propagation Feature Aggregation (BPFA) module is designed to capture the connection between contextual attention, channel-wise information, and feature space. This BPFA greatly enhances the connection from local feature to overall consistency. The major contributions of the proposed method are as follows:

  • We propose a novel SDFF module. With SDFF, the blur and artifacts around the holes are significantly reduced.

  • We propose a novel BPFA module. With BPFA, the inpainted images show more rational structures and more detailed textures.

  • Extensive experimental results demonstrate the superiority of the proposed method over the state-of-the-art.

II Related Works

Traditional image inpainting methods are usually divided into two main categories: diffusion-based methods [5, 2] and patch-based methods [6, 3]. The former propagates the appearance of adjacent content to fill the missing regions. However, due to the limitation of searching only adjacent content, obvious artifacts appear when the masks cover large areas. The latter fills in the missing region with the patch most similar to the to-be-inpainted one. Although such methods can capture long-distance information, it is difficult for them to generate semantically reasonable images due to the lack of high-level structure understanding.

Deep learning-based methods [34, 11, 37, 20, 24, 16] have been widely explored in the field of image inpainting. Pathak et al. [23] first applied an encoder-decoder architecture and adversarial training to image inpainting. Iizuka et al. [9] overcame the information bottleneck by introducing a series of dilated convolution layers. Recently, Nazeri et al. [22] proposed EdgeConnect to generate plausible edges and fill in the holes with this precondition information. Like EdgeConnect, Xiong et al. [30] designed a similar model by adopting a contour generator as the structure prior instead of an edge generator. Ren et al. [25] utilized an edge-preserving smoothing method to obtain sharp edges and low-frequency structures. Yang et al. [33] proposed a multi-task learning framework that introduces structure embedding to generate refined structures. Liu et al. [18] designed a mutual encoder-decoder network to learn the structure and texture features separately. Peng et al. [24] presented a conditional autoregressive network and a structure attention module, which learn the distribution of structure features and capture the distance relationship between structures, respectively. However, the above-reviewed methods do not fully consider the relationship between the structure and texture features, making it difficult to generate images with reasonable structures and sophisticated textures.

III Proposed Method

Fig. 2: Description of the proposed pipeline. Generator: We propose a variant of the U-Net architecture to jointly learn image structures and textures. The Soft-gating Dual Feature Fusion (SDFF) module and Bilateral Propagation Feature Aggregation (BPFA) module are designed to refine the generated structures and textures. Discriminator: We adopt local and global discriminators to ensure content consistency from local to global.

The overall pipeline of the proposed method is shown in Fig. 2, which is built upon the generative adversarial network. The generative network consists of a mutual encoder-decoder, structure and texture branches, Soft-gating Dual Feature Fusion (SDFF), and Bilateral Propagation Feature Aggregation (BPFA). The discriminative network consists of a global discriminator and a local discriminator. By convention, the generative network aims to generate the inpainted images by co-learning image structures and textures, while the discriminative network aims to distinguish between the inpainted images and real ones. In the following, we describe the proposed network architecture and loss functions in detail.

III-A Generator

The generator can be divided into five parts: 1) The encoder consists of six convolutional layers. The three shallow-layer features are reorganized as texture features to represent image details, while the three deep-layer features are reorganized as structure features to represent image semantics. 2) Two branches separately learn the structure and texture features. 3) An SDFF module fuses the structure and texture features generated by the two branches. 4) A BPFA module equalizes the features across contextual attention, channel-wise information, and feature space. 5) Skip connections supplement the decoder features, which helps the structure and texture branches to produce more sophisticated images.

Structure and Texture Branches. The texture features reorganized from the shallow-layer convolutions are denoted as $F_{te}$, and the structure features reorganized from the deep-layer convolutions are denoted as $F_{st}$. In each branch, three parallel streams are used to fill the corrupted regions at multiple scales. For each stream, we replace all the vanilla convolutions with padding-based partial convolutions in order to better fill irregular holes. Note that each stream consists of five convolutional layers, and the three streams use different convolutional kernel sizes. We obtain the filled features by first combining the output feature maps of the three streams and then mapping the combined features back to the size of the input feature. We denote $F_{st}^{out}$ and $F_{te}^{out}$ as the outputs of the structure and texture branches, respectively. To ensure that the two branches focus on structures and textures respectively, we use two reconstruction losses, denoted as $\mathcal{L}_{st}$ and $\mathcal{L}_{te}$. The pixel-wise losses are defined as:

$\mathcal{L}_{te} = \left\| \mathrm{Conv}_1(F_{te}^{out}) - I_{gt} \right\|_1,$ (1)

$\mathcal{L}_{st} = \left\| \mathrm{Conv}_1(F_{st}^{out}) - S_{gt} \right\|_1,$ (2)

where $\mathrm{Conv}_1$ is a convolution operation with kernel size 1, which maps $F_{te}^{out}$ and $F_{st}^{out}$ to two color images, and $I_{gt}$ and $S_{gt}$ denote the ground-truth image and its structure image, respectively. We follow [25] by using an edge-preserving smoothing method [31] to generate $S_{gt}$.
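To make the branch design concrete, here is a minimal PyTorch sketch of one multi-scale filling branch and the two pixel-wise losses of Eqs. (1) and (2). It is not the authors' released code: vanilla convolutions stand in for the padding-based partial convolutions, and the channel width and kernel sizes (3, 5, 7) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFillingBranch(nn.Module):
    """Sketch of one branch: three parallel streams of five conv layers each,
    using different kernel sizes, fused back to the input feature size.
    Vanilla convolutions stand in for the paper's padding-based partial convolutions."""
    def __init__(self, channels=64, kernel_sizes=(3, 5, 7)):  # kernel sizes are an assumption
        super().__init__()
        def stream(k):
            layers = []
            for _ in range(5):  # five convolutional layers per stream
                layers += [nn.Conv2d(channels, channels, k, padding=k // 2), nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)
        self.streams = nn.ModuleList([stream(k) for k in kernel_sizes])
        # map the combined streams back to the input feature size
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([s(x) for s in self.streams], dim=1))

def branch_losses(f_te_out, f_st_out, to_rgb_te, to_rgb_st, img_gt, structure_gt):
    """Pixel-wise L1 losses of Eqs. (1) and (2): each branch output is mapped to a
    color image by a 1x1 convolution (to_rgb_* are hypothetical nn.Conv2d modules)
    and compared with the image / structure ground truth."""
    loss_te = F.l1_loss(to_rgb_te(f_te_out), img_gt)
    loss_st = F.l1_loss(to_rgb_st(f_st_out), structure_gt)
    return loss_te, loss_st
```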

Fig. 3: Description of Soft-gating Dual Feature Fusion, which can effectively fuse the structure and texture features.

Soft-gating Dual Feature Fusion (SDFF). This module is designed to better exchange the structure and texture features generated by the two branches. The exchange is implemented with a soft gating mechanism that dynamically adjusts the fusion ratio between the structure and texture features. Fig. 3 illustrates the proposed SDFF module. Specifically, in order to construct the structure-guided texture features, we utilize a soft gate to control the degree to which the texture information is refined. The soft gate can be defined as:

$G_{te} = \sigma\!\left( SE\!\left( \mathrm{Conv}_3\!\left( [F_{st}^{out}, F_{te}^{out}] \right) \right) \right),$ (3)

where $\mathrm{Conv}_3$ is a convolution layer with kernel size 3, $SE$ is a squeeze-and-excitation operation [8] to capture important channel information, and $\sigma$ is the Sigmoid activation function. With the soft gate $G_{te}$, we can dynamically merge $F_{st}^{out}$ into $F_{te}^{out}$ by

$F_{te}' = \alpha \left( G_{te} \odot F_{st}^{out} \right) \oplus \beta F_{te}^{out},$ (4)

where $\alpha$ and $\beta$ are two learnable parameters, and $\odot$ and $\oplus$ denote element-wise multiplication and addition, respectively.

Similarly, the texture-guided structure features can be calculated as follows:

Fig. 4: Description of the Bilateral Propagation Feature Aggregation (BPFA) module. It can enhance the connection from local features to overall consistency.

$G_{st} = \sigma\!\left( SE\!\left( \mathrm{Conv}_3\!\left( [F_{st}^{out}, F_{te}^{out}] \right) \right) \right),$ (5)

$F_{st}' = F_{st}^{out} \oplus \gamma \left( G_{st} \odot F_{te}^{out} \right),$ (6)

where $\mathrm{Conv}_3$ is a convolution layer with kernel size 3 and $\gamma$ is a learnable parameter. Thus, we can concatenate $F_{st}'$ and $F_{te}'$ and use a convolution layer with kernel size 1 to generate the integrated feature map $F_{sdff}$:

$F_{sdff} = \mathrm{Conv}_1\!\left( [F_{st}', F_{te}'] \right).$ (7)
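A minimal PyTorch sketch of such a soft-gated fusion is shown below. The squeeze-and-excitation block, the gating input (the concatenated branch outputs), the channel counts, and the parameter initializations are our assumptions based on the description of Eqs. (3)-(7), not the released implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation [8] to capture channel importance."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

class SDFFSketch(nn.Module):
    """Soft-gating Dual Feature Fusion sketch: two soft gates control how much
    structure flows into texture and vice versa, as in Eqs. (3)-(7)."""
    def __init__(self, channels):
        super().__init__()
        self.gate_te = nn.Sequential(nn.Conv2d(2 * channels, channels, 3, padding=1),
                                     SEBlock(channels), nn.Sigmoid())
        self.gate_st = nn.Sequential(nn.Conv2d(2 * channels, channels, 3, padding=1),
                                     SEBlock(channels), nn.Sigmoid())
        self.alpha = nn.Parameter(torch.zeros(1))   # learnable fusion scales (assumed init)
        self.beta = nn.Parameter(torch.ones(1))
        self.gamma = nn.Parameter(torch.zeros(1))
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_st, f_te):
        both = torch.cat([f_st, f_te], dim=1)
        g_te, g_st = self.gate_te(both), self.gate_st(both)
        f_te_new = self.alpha * (g_te * f_st) + self.beta * f_te   # Eq. (4): structure-guided texture
        f_st_new = f_st + self.gamma * (g_st * f_te)               # Eq. (6): texture-guided structure
        return self.out(torch.cat([f_st_new, f_te_new], dim=1))    # Eq. (7)
```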

Bilateral Propagation Feature Aggregation (BPFA). This module is designed to co-learn contextual attention, channel-wise information, and feature space so as to enhance the overall consistency. Fig. 4 illustrates the proposed BPFA module in detail. Specifically, to capture channel-wise information, we use the Selective Kernel Convolution module of SKNet [14] to generate the feature map $F_{sk}$. Inspired by [34], we introduce the Contextual Attention (CA) module to capture the correlation between feature patches. For a given input feature $F$, we divide it into non-overlapping patches of size 3×3 and calculate the cosine similarity between these patches as:

$s_{i,j} = \left\langle \frac{p_i}{\|p_i\|_2}, \frac{p_j}{\|p_j\|_2} \right\rangle,$ (8)

where $p_i$ and $p_j$ are the $i$-th and $j$-th patches of the input feature $F$, respectively. We utilize the Softmax function to get the attention score of each pair of patches:

$\hat{s}_{i,j} = \frac{\exp(s_{i,j})}{\sum_{j=1}^{N} \exp(s_{i,j})},$ (9)

where $N$ is the total number of patches of the input feature $F$. Next, the attention scores are used to compute the reconstructed feature patches by

$\hat{p}_i = \sum_{j=1}^{N} \hat{s}_{i,j}\, p_j.$ (10)

The reconstructed feature map $F_{ca}$ can then be obtained by directly reorganizing all the reconstructed patches.
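The patch-wise attention of Eqs. (8)-(10) can be sketched compactly with PyTorch's unfold/fold. The snippet below follows the description above (non-overlapping 3×3 patches, cosine similarity, Softmax weights) but is otherwise illustrative and assumes the spatial size is divisible by the patch size.

```python
import torch
import torch.nn.functional as F

def contextual_attention(feat, patch=3):
    """Sketch of Eqs. (8)-(10): split the feature map into non-overlapping patches,
    compute Softmax-normalized cosine similarities between them, and rebuild each
    patch as an attention-weighted sum of all patches.
    Assumes H and W are divisible by `patch`."""
    b, c, h, w = feat.shape
    # non-overlapping patches: (B, N, C*patch*patch)
    patches = F.unfold(feat, kernel_size=patch, stride=patch).transpose(1, 2)
    normed = F.normalize(patches, dim=-1)                              # unit-length patches (Eq. (8))
    scores = torch.softmax(normed @ normed.transpose(1, 2), dim=-1)    # Eq. (9)
    recon = scores @ patches                                           # Eq. (10)
    # reorganize the reconstructed patches back into a feature map
    return F.fold(recon.transpose(1, 2), output_size=(h, w), kernel_size=patch, stride=patch)
```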

In the range and spatial domains, we introduce the Bilateral Propagation Activation (BPA) module to generate feature maps based on the range and spatial distances. The feature map of the range domain can be calculated as:

$y_i^{r} = \sum_{j \in \Omega(i)} w_{i,j}\, x_j,$ (11)

$w_{i,j} = \frac{\exp\!\left( f(x_i, x_j) \right)}{\sum_{k \in \Omega(i)} \exp\!\left( f(x_i, x_k) \right)},$ (12)

where we use the unfold function of PyTorch to reshape the input feature into two kinds of vectors, $x_i$ denotes the feature of a given channel at position $i$, and $x_j$ denotes a neighboring feature at position $j$ around position $i$. The pairwise function $f(\cdot,\cdot)$ is the dot-product similarity. For a given $i$, Eq. (12) can be seen as a Softmax computation along dimension $j$, and $\Omega(i)$ is a neighboring region of position $i$ with a fixed size. Eq. (11) then aggregates the neighboring features weighted by $w_{i,j}$. The feature map of the spatial domain can be calculated as:

$y_i^{s} = \sum_{j \in \Omega(i)} G(i, j)\, x_j,$ (13)

where we again use the unfold function of PyTorch to reshape the input feature into a vector, and $G(\cdot,\cdot)$ is a Gaussian function that adjusts the spatial contributions from the neighboring region. We explore a large neighboring region for global propagation; in the experiments, the neighboring region of Eq. (13) is set to the same size as the input feature. Therefore, we can obtain the feature maps $y^{r}$ and $y^{s}$ by the range and spatial similarity measurements, respectively. We can see that the bilateral propagation considers both local consistency via $y^{r}$ and global consistency via $y^{s}$. Each feature channel is then computed by

$z = \mathrm{Conv}_1\!\left( [\, y^{r}, y^{s} \,] \right),$ (14)

where $\mathrm{Conv}_1$ denotes a convolution layer with kernel size 1. Next, the reconstructed feature map $F_{bpa}$ is obtained by aggregating all the feature channels. Finally, $F_{ca}$ and $F_{bpa}$ are concatenated and mapped to the output feature map $F_{out}$ by

$F_{out} = \mathrm{Conv}_1\!\left( [\, F_{ca}, F_{bpa} \,] \right),$ (15)

where $\mathrm{Conv}_1$ is a convolution layer with kernel size 1.
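Below is a simplified, self-contained sketch of the bilateral propagation idea: a range-domain term that weights neighbors by Softmax-normalized dot-product similarity and a spatial-domain term that weights them with a fixed Gaussian, merged by a 1×1 convolution as in Eq. (14). For brevity it restricts both terms to a small k×k neighborhood (the paper uses a global region for the spatial term), and all names, sizes, and the Gaussian bandwidth are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralPropagationSketch(nn.Module):
    """Simplified bilateral propagation over a k x k neighborhood (local only)."""
    def __init__(self, channels, k=3, sigma=1.0):
        super().__init__()
        self.k = k
        # 1x1 conv to merge range- and spatial-domain responses, as in Eq. (14)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)
        # fixed Gaussian spatial weights over the neighborhood offsets
        coords = torch.arange(k) - k // 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        self.register_buffer("spatial_w", g.flatten())  # shape (k*k,)

    def forward(self, x):
        b, c, h, w = x.shape
        # neighbors for every position: (B, C, k*k, H*W)
        nbr = F.unfold(x, self.k, padding=self.k // 2).view(b, c, self.k * self.k, h * w)
        center = x.view(b, c, 1, h * w)
        # range domain (Eqs. (11)-(12)): Softmax over dot-product similarity to the center
        sim = (center * nbr).sum(dim=1, keepdim=True)            # (B, 1, k*k, H*W)
        range_w = torch.softmax(sim, dim=2)
        f_range = (range_w * nbr).sum(dim=2).view(b, c, h, w)
        # spatial domain (Eq. (13)): fixed Gaussian weights over the same neighborhood
        sw = self.spatial_w.view(1, 1, -1, 1)
        f_spatial = ((sw / sw.sum()) * nbr).sum(dim=2).view(b, c, h, w)
        # Eq. (14): merge the two responses with a 1x1 convolution
        return self.merge(torch.cat([f_range, f_spatial], dim=1))
```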

(a) Input (b) GC [35] (c) EC [22] (d) MED [18] (e) RFR [13] (f) DSI [24] (g) Ours (h) GT
Fig. 5: Visual comparison on the Places2 (the first row) and Paris StreetView (the second row) datasets.
(a) Input (b) CE [23] (c) CA [34] (d) SH [32] (e) MED [18] (f) Ours (g) GT
Fig. 6: Visual comparison on the CelebA dataset.

III-B Discriminator

The discriminator consists of a global critic network and a local critic network, which ensure local and global content consistency. Each critic network includes six convolution layers with kernel size 4 and stride 2, and uses Leaky ReLU with a slope of 0.2 for all but the last layer. Furthermore, spectral normalization is adopted in our network to achieve stable training.
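A sketch of one such critic, following the description above (six 4×4 convolutions with stride 2, Leaky ReLU with slope 0.2 on all but the last layer, and spectral normalization); the channel widths are assumptions.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def make_critic(in_channels=3, widths=(64, 128, 256, 512, 512, 1)):
    """Six conv layers, kernel 4, stride 2, spectral norm; LeakyReLU(0.2) on all but the last."""
    layers, prev = [], in_channels
    for i, w in enumerate(widths):
        layers.append(spectral_norm(nn.Conv2d(prev, w, kernel_size=4, stride=2, padding=1)))
        if i < len(widths) - 1:
            layers.append(nn.LeakyReLU(0.2, inplace=True))
        prev = w
    return nn.Sequential(*layers)

# The discriminator pairs a global critic (full image) with a local critic (hole region).
global_critic = make_critic()
local_critic = make_critic()
```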

III-C Loss Function

Our network is trained with a combination of loss functions, including a pixel reconstruction loss, a perceptual loss, a style loss, and an adversarial loss, so that the final generated image looks visually realistic.

Pixel Reconstruction Loss. We adopt the $\ell_1$ distance as the reconstruction loss from two aspects. The first aspect is to supervise the structure and texture branches; the corresponding loss functions are formulated in Eqs. (1) and (2), respectively. The second aspect is to measure the similarity between the final output $I_{out}$ and the ground-truth image $I_{gt}$ by

$\mathcal{L}_{rec} = \left\| I_{out} - I_{gt} \right\|_1.$ (16)

Perceptual Loss. We utilize the perceptual loss to capture high-level semantics [10] by computing the $\ell_1$ distance between the feature spaces of $I_{out}$ and $I_{gt}$ through an ImageNet-pretrained VGG-16 backbone, which can be written as

$\mathcal{L}_{perc} = \sum_{i=1}^{5} \left\| \varphi_i(I_{out}) - \varphi_i(I_{gt}) \right\|_1,$ (17)

where $\varphi_i$ denotes the five activation maps from VGG-16, namely ReLU1_1, ReLU2_1, ReLU3_1, ReLU4_1, and ReLU5_1.
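A sketch of this perceptual loss with torchvision's ImageNet-pretrained VGG-16 is given below. The feature-layer indices for relu1_1 through relu5_1 and the plain (unweighted) L1 distance are our reading of the description; input normalization is omitted for brevity.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class VGGFeatures(nn.Module):
    """Extracts relu1_1 ... relu5_1 activations from an ImageNet-pretrained VGG-16."""
    RELU_IDX = (1, 6, 11, 18, 25)   # assumed indices of relu*_1 in torchvision's vgg16().features

    def __init__(self):
        super().__init__()
        self.vgg = vgg16(weights="IMAGENET1K_V1").features.eval()  # torchvision >= 0.13 weight string
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.RELU_IDX:
                feats.append(x)
            if i >= self.RELU_IDX[-1]:
                break
        return feats

def perceptual_loss(vgg_feats, out_img, gt_img):
    """Eq. (17): L1 distance between VGG-16 feature maps of output and ground truth."""
    return sum(F.l1_loss(a, b) for a, b in zip(vgg_feats(out_img), vgg_feats(gt_img)))
```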

Style Loss. We introduce the style loss to mitigate style differences, which is defined as:

$\mathcal{L}_{style} = \sum_{i=1}^{5} \left\| \mathcal{G}\!\left( \varphi_i(I_{out}) \right) - \mathcal{G}\!\left( \varphi_i(I_{gt}) \right) \right\|_1,$ (18)

where $\mathcal{G}(\cdot)$ denotes the Gram matrix constructed from the above-mentioned five activation maps.
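The style loss compares Gram matrices of the same five activations. A sketch reusing the feature extractor above, with the usual normalization of the Gram matrix by the feature size (an assumption on our part):

```python
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix of a (B, C, H, W) feature map, normalized by its size."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(vgg_feats, out_img, gt_img):
    """Eq. (18): L1 distance between Gram matrices of the five VGG-16 activations."""
    return sum(F.l1_loss(gram_matrix(a), gram_matrix(b))
               for a, b in zip(vgg_feats(out_img), vgg_feats(gt_img)))
```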

Adversarial Loss. The relativistic average least squares adversarial loss is used to ensure local and global content consistency. For the generator, the adversarial loss is defined as:

$\mathcal{L}_{adv} = \mathbb{E}_{x_r}\!\left[ \left( D(x_r) - \mathbb{E}_{x_f}[D(x_f)] + 1 \right)^2 \right] + \mathbb{E}_{x_f}\!\left[ \left( D(x_f) - \mathbb{E}_{x_r}[D(x_r)] - 1 \right)^2 \right],$ (19)

where $D$ denotes the local or global discriminator without the last Sigmoid function, and $(x_r, x_f)$ denotes a pair of ground-truth and output images.
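A sketch of the relativistic average least-squares losses for one critic; the generator side corresponds to Eq. (19) as reconstructed above, and the discriminator side is the standard RaLSGAN counterpart, which we assume the paper follows.

```python
import torch

def ralsgan_g_loss(d_real, d_fake):
    """Generator-side relativistic average LSGAN loss (Eq. (19)), given raw critic
    scores (no sigmoid) for ground-truth and generated images."""
    return (torch.mean((d_real - d_fake.mean() + 1.0) ** 2) +
            torch.mean((d_fake - d_real.mean() - 1.0) ** 2)) / 2

def ralsgan_d_loss(d_real, d_fake):
    """Discriminator-side counterpart used to train the critic."""
    return (torch.mean((d_real - d_fake.mean() - 1.0) ** 2) +
            torch.mean((d_fake - d_real.mean() + 1.0) ** 2)) / 2
```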

Total Loss. The total loss of the proposed method can be obtained by

$\mathcal{L}_{total} = \lambda_{b}\left( \mathcal{L}_{te} + \mathcal{L}_{st} \right) + \lambda_{rec}\mathcal{L}_{rec} + \lambda_{perc}\mathcal{L}_{perc} + \lambda_{style}\mathcal{L}_{style} + \lambda_{adv}\mathcal{L}_{adv},$ (20)

where $\lambda_{b}$, $\lambda_{rec}$, $\lambda_{perc}$, $\lambda_{style}$, and $\lambda_{adv}$ are tradeoff parameters that are set empirically.
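Putting the pieces together, Eq. (20) is a weighted sum of the individual losses. The helper below only sketches that combination; the grouping of the two branch losses under a single weight mirrors the reconstruction above, and the weight values themselves are set empirically in the paper and not reproduced here.

```python
def total_loss(l_te, l_st, l_rec, l_perc, l_style, l_adv, weights):
    """Eq. (20): weighted sum of the branch, reconstruction, perceptual, style,
    and adversarial losses; `weights` holds the (empirically chosen) tradeoff parameters."""
    w = weights  # e.g. {"branch": ..., "rec": ..., "perc": ..., "style": ..., "adv": ...}
    return (w["branch"] * (l_te + l_st) + w["rec"] * l_rec +
            w["perc"] * l_perc + w["style"] * l_style + w["adv"] * l_adv)
```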

IV Experimental Results

In our experiments, three public datasets are used for verification: Places2 [38], Paris StreetView [4], and CelebA [21]. We follow the training, testing, and validation splits of these datasets. The irregular masks are taken from PConv [17] and classified according to the ratio of the hole to the entire image, in increments of 10%. Our network is built on the PyTorch framework, trained on a single NVIDIA 2080 Ti GPU (11 GB), and optimized with the Adam optimizer. The CelebA, Paris StreetView, and Places2 models require around 10, 55, and 100 epochs to train, respectively. All masks and images are resized to the same fixed resolution.

Method | PSNR↑ (10-20% / 20-30% / 30-40% / 40-50%) | SSIM↑ (10-20% / 20-30% / 30-40% / 40-50%) | MAE↓ (10-20% / 20-30% / 30-40% / 40-50%) | FID↓ (10-20% / 20-30% / 30-40% / 40-50%)
GC [35] 29.16 26.35 24.24 22.35 0.928 0.880 0.791 0.704 0.0116 0.0195 0.0283 0.0391 28.31 39.01 49.81 62.73
EC [22] 30.44 27.46 25.69 24.02 0.939 0.886 0.830 0.759 0.0109 0.0186 0.0259 0.0350 18.41 31.00 42.03 54.85
MED [18] 30.88 27.54 25.40 23.55 0.945 0.890 0.828 0.750 0.0101 0.0180 0.0262 0.0365 20.18 36.68 50.14 65.72
RFR [13] 31.04 27.77 25.77 24.09 0.945 0.893 0.836 0.768 0.0091 0.0166 0.0241 0.0330 16.22 29.32 41.05 53.22
DSI [24] 31.02 27.59 25.45 23.61 0.945 0.890 0.829 0.752 0.0100 0.0179 0.0260 0.0359 15.79 28.94 40.87 54.73
Ours 32.24 28.68 26.23 24.32 0.957 0.912 0.854 0.783 0.0086 0.0153 0.0233 0.0326 16.13 29.14 42.23 58.01
TABLE I: Performance comparison on the Places2 dataset.
Method | PSNR↑ (10-20% / 20-30% / 30-40% / 40-50%) | SSIM↑ (10-20% / 20-30% / 30-40% / 40-50%) | MAE↓ (10-20% / 20-30% / 30-40% / 40-50%) | FID↓ (10-20% / 20-30% / 30-40% / 40-50%)
GC [35] 30.77 27.71 25.58 23.81 0.943 0.897 0.839 0.782 0.0158 0.0234 0.0313 0.0409 28.44 39.91 52.72 69.26
EC [22] 31.08 28.08 25.95 24.30 0.949 0.906 0.848 0.789 0.0110 0.0194 0.0283 0.0390 22.22 38.69 55.85 72.43
MED [18] 32.28 28.75 26.08 24.32 0.966 0.925 0.877 0.813 0.0105 0.0190 0.0282 0.0387 20.29 30.79 48.91 67.42
RFR [13] 31.63 28.61 26.63 24.89 0.957 0.920 0.872 0.825 0.0152 0.0220 0.0296 0.0388 18.07 30.81 42.61 51.80
DSI [24] 31.42 28.21 26.04 24.07 0.953 0.912 0.866 0.799 0.0121 0.0197 0.0285 0.0393 18.09 33.36 47.80 60.15
Ours 33.68 30.29 27.34 25.28 0.971 0.940 0.889 0.828 0.0090 0.0152 0.0240 0.0336 13.83 27.32 45.44 56.07
TABLE II: Performance comparison on the Paris StreetView dataset.
Method | PSNR↑ | SSIM↑ | MAE↓ | FID↓
CE [23] 25.12 0.899 0.0352 3.84
CA [34] 24.54 0.887 0.0286 5.01
SH [32] 26.03 0.901 0.0245 1.71
MED [18] 25.92 0.914 0.0241 1.59
Ours 26.38 0.922 0.0223 1.40
TABLE III: Performance comparison on the CelebA dataset.

IV-A Performance Comparison with the State-of-the-Art

We compare the proposed method with eight state-of-the-art methods, including GC [35], EC [22], MED [18], RFR [13], DSI [24], CE [23], CA [34], and SH [32]. For evaluation, we use four common metrics: PSNR, SSIM, MAE, and FID (Fréchet Inception Distance) [7]. For a fair comparison, we use the same experimental setup for all the compared methods. The experiments are conducted on two types of damaged images, containing center holes and irregular holes. In the following, we present the experimental results and analysis in detail.
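For reference, PSNR, SSIM, and MAE for a single image pair can be computed as below with NumPy and recent scikit-image (FID requires a separate Inception-based pipeline [7] and is omitted); this evaluation snippet is generic and not tied to the paper's code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred, gt):
    """PSNR, SSIM, and MAE for one inpainted image `pred` against ground truth `gt`,
    both given as float arrays in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    mae = float(np.mean(np.abs(gt - pred)))
    return psnr, ssim, mae
```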

For images with irregular holes, we take the Places2 [38] and Paris StreetView [4] datasets for evaluation, and select several representative methods, GC [35], EC [22], MED [18], RFR [13], and DSI [24], for performance comparison. Meanwhile, we use the same validation images as the MED method. Experimental results are shown in Tables I and II. We can see from the two tables that our method achieves significant improvement over the compared methods in terms of the PSNR, SSIM, and MAE metrics. This is because our method designs two effective multi-feature fusion techniques, which not only exploit the connection between the structure and texture features but also consider the relationships within the image context. For the FID metric, our method still shows competitive performance on images with small damaged regions. When evaluated on images with large damaged regions, our method exhibits minor performance degradation. The reason might be as follows. The FID metric measures the distance between the feature spaces of two groups of images. Since the proposed method does not introduce a structure prior in the encoding stage, the distance between the feature spaces of the generated and ground-truth images is enlarged. Besides, we can observe from Fig. 5 that the images inpainted by the proposed method have better visual quality than those of all the compared methods.

For images with center hole, we compare the proposed method with four typical methods including CE [23], CA [34], SH [32] and MED [18]. The performance is evaluated on 10,000 images selected randomly from the CelebA [21] validation dataset. The result is shown in Table III. We can see that the proposed method still performs best. Especially for the FID metric, our method can better infer the missing structure and texture of an image compared with its competitors. The subjective quality of the inpainted images is shown in Fig. 6. It can be observed that our method demonstrates its effectiveness in dealing with the center hole images.

Configuration | PSNR↑ | SSIM↑ | MAE↓ | FID↓
w/o SDFF 27.15 0.883 0.0244 46.33
w/o CA 27.08 0.879 0.0249 44.56
w/o SKNet 27.17 0.885 0.0242 45.61
w/o BPFA 26.99 0.880 0.0247 46.39
w/o PC 26.93 0.877 0.0250 46.92
Baseline 26.08 0.877 0.0282 48.91
Ours 27.34 0.889 0.0240 45.44
TABLE IV: Ablation study of different modules on Paris StreetView. Here, random masks with mask ratio 30%-40% are used.

IV-B Ablation Study

To verify the contribution of each individual component of our network, an ablation study is performed on the Paris StreetView dataset. The result is shown in Table IV. We can see that each module is effective. Specifically, the partial convolution based padding (PC) module [17] improves the performance by making the model pay more attention to visible pixels (undamaged regions). Moreover, our designed BPFA module contributes the most to the whole network. This is because BPFA enhances the connection from local features to overall consistency, thus reducing visible artifacts and unpleasant contents. In addition, the SDFF module makes the second-largest contribution. As expected, SDFF captures the correlation between the structure and texture features. Interestingly, the CA module can learn contextual feature representations and the SKNet module can effectively equalize the structure and texture features generated by the multi-scale filling stage, thereby benefiting the proposed network.

For better visualization, we show the outputs and ground truths for Eqs. (1) and (2) in Fig. 7. Fig. 7 (a) and (c) are our structure and texture feature maps obtained after the multi-scale filling stage, respectively. Fig. 7 (b) and (d) are the structure and texture of the ground truth, respectively. We can see that the deep-layer convolutions focus more on image structure (Fig. 7 (a) and (b)) and the shallow-layer convolutions focus more on image texture (Fig. 7 (c) and (d)).

(a) Our structure (b) GT structure
(c) Our texture (d) GT texture
Fig. 7: Visual outputs for Eqs. (1) and (2).

In addition, according to our ablation study, our method with SDFF performs better than that without SDFF. This shows that our SDFF is a better channel attention module than existing channel attention modules. An example of the feature output is shown in Fig. 8. It can be observed from Fig. 8 that the two soft gates respond well to the structure and texture information, respectively. It is worth emphasizing that our method uses $\mathcal{L}_{st}$ and $\mathcal{L}_{te}$ to make the deep-layer and shallow-layer convolutions focus on structure and texture features, respectively. Meanwhile, $\mathcal{L}_{perc}$ and $\mathcal{L}_{style}$ are used to improve perceptual quality and mitigate style differences, respectively. Clearly, $\mathcal{L}_{rec}$ and $\mathcal{L}_{adv}$ are necessary for pixel reconstruction and adversarial training. The ablation experiments on these losses are shown in Table V. It is clear that each loss contributes to our method.

(a) Structure (b) Texture (c) Ground Truth
Fig. 8: Visualization of the feature maps for the structure and texture information.
Configuration | PSNR↑ | SSIM↑ | MAE↓ | FID↓
w/o 26.99 0.879 0.0254 46.96
w/o 26.98 0.879 0.0250 47.55
w/o 27.16 0.883 0.0251 45.95
w/o 27.23 0.887 0.0252 43.96
Ours 27.34 0.889 0.0240 45.44
TABLE V: Ablation study of different losses on Paris StreetView. Here, random masks with mask ratio 30%-40% are used.

V Conclusion

In this paper, we have presented a deep multi-feature co-learning network for image inpainting, which can yield detailed textures and reasonable structures. Our network introduces two new fusion modules: Soft-gating Dual Feature Fusion (SDFF) and Bilateral Propagation Feature Aggregation (BPFA). SDFF controls the fusion ratio through a soft gating technique to refine the structure and texture features, making the structures and textures more consistent. BPFA co-learns contextual attention, channel-wise information, and feature space. With this co-learning strategy, the inpainted images preserve consistency from local to overall.

Note that the soft gating operation is commonly used for channel attention, but our SDFF module exploits two soft gates ($G_{st}$ and $G_{te}$) to respond effectively to structure and texture information, respectively. Through $G_{st}$ and $G_{te}$, our SDFF can control the degree of integrating structure and texture information. We also tried to first concatenate $F_{st}^{out}$ and $F_{te}^{out}$ and then apply a channel attention module (SENet [8]) directly to the output feature. However, this leads to dynamic output changes and impairs the training of the GAN. Our experiments also show that this strategy achieves only limited improvement for our model. Therefore, we propose to multiply the reweighted features by the soft gates to selectively transmit useful features, resulting in significant performance improvement. In the future, we will explore introducing the structure prior in the encoding stage.

References

  • [1] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics (ToG), 28(3):1–11, 2009.
  • [2] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester. Image inpainting. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 417–424, 2000.
  • [3] S. Darabi, E. Shechtman, C. Barnes, D. B. Goldman, and P. Sen. Image melding: Combining inconsistent images using patch-based synthesis. ACM Transactions on Graphics (ToG), 31(4):1–10, 2012.
  • [4] C. Doersch, S. Singh, A. Gupta, J. Sivic, and A. Efros. What makes paris look like paris? ACM Transactions on Graphics (ToG), 31(4):1–9, 2012.
  • [5] A. A. Efros and W. T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pages 341–346, 2001.
  • [6] J. Hays and A. A. Efros. Scene completion using millions of photographs. ACM Transactions on Graphics (ToG), 26(3):1–7, 2007.
  • [7] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), pages 6629–6640, 2017.
  • [8] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7132–7141, 2018.
  • [9] S. Iizuka, E. Simo-Serra, and H. Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (ToG), 36(4):1–14, 2017.
  • [10] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), pages 694–711, 2016.
  • [11] A. Lahiri, A. K. Jain, S. Agrawal, P. Mitra, and P. K. Biswas. Prior guided gan based semantic inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 13696–13705, 2020.
  • [12] J. Li, Z. Li, J. Cao, X. Song, and R. He. Faceinpainter: High fidelity face adaptation to heterogeneous domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5089–5098, 2021.
  • [13] J. Li, N. Wang, L. Zhang, B. Du, and D. Tao. Recurrent feature reasoning for image inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7760–7768, 2020.
  • [14] X. Li, W. Wang, X. Hu, and J. Yang. Selective kernel networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 510–519, 2019.
  • [15] Y. Li, S. Liu, J. Yang, and M.-H. Yang. Generative face completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3911–3919, 2017.
  • [16] L. Liao, J. Xiao, Z. Wang, C.-W. Lin, and S. Satoh. Image inpainting guided by coherence priors of semantics and textures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6539–6548, 2021.
  • [17] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.
  • [18] H. Liu, B. Jiang, Y. Song, W. Huang, and C. Yang. Rethinking image inpainting via a mutual encoder-decoder with feature equalizations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 725–741, 2020.
  • [19] H. Liu, B. Jiang, Y. Xiao, and C. Yang. Coherent semantic attention for image inpainting. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4170–4179, 2019.
  • [20] H. Liu, Z. Wan, W. Huang, Y. Song, X. Han, and J. Liao. Pd-gan: Probabilistic diverse gan for image inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9371–9381, 2021.
  • [21] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 3730–3738, 2015.
  • [22] K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi, and M. Ebrahimi. Edgeconnect: Generative image inpainting with adversarial edge learning. In Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW), pages 3265–3274, 2019.
  • [23] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2536–2544, 2016.
  • [24] J. Peng, D. Liu, S. Xu, and H. Li. Generating diverse structure for image inpainting with hierarchical vq-vae. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10775–10784, 2021.
  • [25] Y. Ren, X. Yu, R. Zhang, T. H. Li, S. Liu, and G. Li. Structureflow: Image inpainting via structure-aware appearance flow. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 181–190, 2019.
  • [26] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 839–846, 1998.
  • [27] T. Wang, H. Ouyang, and Q. Chen. Image inpainting with external-internal learning and monochromic bottleneck. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5120–5129, 2021.
  • [28] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7794–7803, 2018.
  • [29] C. Xie, S. Liu, C. Li, M.-M. Cheng, W. Zuo, X. Liu, S. Wen, and E. Ding. Image inpainting with learnable bidirectional attention maps. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 8858–8867, 2019.
  • [30] W. Xiong, J. Yu, Z. Lin, J. Yang, X. Lu, C. Barnes, and J. Luo. Foreground-aware image inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5840–5848, 2019.
  • [31] L. Xu, Q. Yan, Y. Xia, and J. Jia. Structure extraction from texture via relative total variation. ACM Transactions on Graphics (ToG), 31(6):1–10, 2012.
  • [32] Z. Yan, X. Li, M. Li, W. Zuo, and S. Shan. Shift-net: Image inpainting via deep feature rearrangement. In Proceedings of the European Conference on Computer Vision (ECCV), pages 1–17, 2018.
  • [33] J. Yang, Z. Qi, and Y. Shi. Learning to incorporate structure knowledge for image inpainting. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 12605–12612, 2020.
  • [34] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5505–5514, 2018.
  • [35] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Free-form image inpainting with gated convolution. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4471–4480, 2019.
  • [36] T. Yu, Z. Guo, X. Jin, S. Wu, Z. Chen, W. Li, Z. Zhang, and S. Liu. Region normalization for image inpainting. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 12733–12740, 2020.
  • [37] Y. Zeng, Z. Lin, J. Yang, J. Zhang, E. Shechtman, and H. Lu. High-resolution image inpainting with iterative confidence feedback and guided upsampling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 1–17, 2020.
  • [38] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 40(6):1452–1464, 2017.
  • [39] Y. Zhou, C. Barnes, E. Shechtman, and S. Amirghodsi. Transfill: Reference-guided image inpainting by merging multiple color and spatial transformations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2266–2276, 2021.