
Unbiased Multi-Modality Guidance for Image Inpainting

Image inpainting is an ill-posed problem of recovering missing or damaged image content from incomplete images with masks. Previous works usually predict auxiliary structures (e.g., edges, segmentation and contours) to help fill visually realistic patches in a multi-stage fashion. However, imprecise auxiliary priors may yield biased inpainted results. Besides, some methods are time-consuming because they are implemented as multiple stages of complex neural networks. To solve these issues, we develop an end-to-end multi-modality guided transformer network, including one inpainting branch and two auxiliary branches for semantic segmentation and edge textures. Within each transformer block, the proposed multi-scale spatial-aware attention module can learn the multi-modal structural features efficiently via auxiliary denormalization. Different from previous methods relying on direct guidance from biased priors, our method enriches semantically consistent context in an image based on discriminative interplay information from multiple modalities. Comprehensive experiments on several challenging image inpainting datasets show that our method achieves state-of-the-art performance in dealing with various regular/irregular masks efficiently.


1 Introduction

Image inpainting aims to repair missing or damaged image content based on the known information of an image. It has been applied in many real-world scenarios, such as image editing [1, 3], unwanted object removal [8, 30], and old photo restoration [31].

Following the assumption that corrupted images retain adequate knowledge for inpainting [42, 19], modern image inpainting methods [27, 25, 39, 20, 19] employ an encoder-decoder architecture. Concretely, they focus on various contextual attention mechanisms to learn from the known visible content and fill the missing region. However, this assumption does not hold if the image is damaged by large masks: it is difficult to provide sufficient semantically consistent information for realistic image inpainting based on the known area of an RGB image alone.

Figure 1: Architecture of our Multi-Modality guided Transformer that couples various modalities including RGB image, semantic segmentation, and edge textures.

Therefore, recent approaches [39, 25, 32, 21, 4] have made great efforts to introduce auxiliary priors, such as edges, segmentation, and contours, to improve image inpainting performance. However, they still suffer from the biased prior issue, since predicted auxiliary structures are used as intermediate guidance for image inpainting. Without ground-truth in the testing phase, such direct guidance is inevitably biased, resulting in more deviations and errors in the inpainted results. On the other hand, previous works [32, 23] are usually divided into multiple stages of neural networks under the U-Net architecture. If each stage contains a complex subnetwork, it is time-consuming for potential real-world inpainting applications. This problem becomes more prominent when extending to video inpainting. For example, Liu et al. [23] tackle the image inpainting problem by a two-stage process, i.e., two individual U-Nets for rough inpainting and refinement inpainting, yielding a low running speed.

To solve the above issues, we propose a new multi-modality guided transformer network for image inpainting. As shown in Fig. 1, it follows the U-Net style [28] encoder-decoder architecture. In the encoder, we first develop adaptive contextual bottlenecks for better context reasoning. To adapt to the current image content and missing region, the gating mask is updated to weight different dilated convolutions and enhance the base features. Then, the multi-modal mutual decoder is proposed to decode the enhanced features into three modalities, i.e., the RGB image and the corresponding semantic segmentation and edge textures. It consists of one image inpainting branch and two auxiliary branches for semantic segmentation and edge textures. Unlike existing approaches based on direct guidance from predicted auxiliary structures, we focus on jointly learning the unbiased discriminative interplay information among the three branches. Specifically, the proposed multi-scale spatial-aware attention mechanism integrates multi-modal feature maps via auxiliary denormalization to reduce duplicated and noisy content for image inpainting. Supervised by ground-truth RGB images, semantic segmentation and edge maps, the whole network is trained end-to-end efficiently. Note that segmentation and edge annotations can be provided by off-the-shelf algorithms [6, 25].

As shown in Fig. 2, previous image inpainting methods fail to restore correct faces and buildings based on either a biased edge [25] or segmentation [32] prior. On the contrary, our method still achieves robust results even though the glasses are not repaired in the edge prior (see Ours in the 1st row of Fig. 2) or the roof shape is not predicted correctly in the segmentation prior (see Ours in the 2nd row of Fig. 2). This demonstrates that our method can extract discriminative unbiased context information to guide image inpainting. To verify the effectiveness of our method, experiments are conducted on three datasets: CelebA-HQ [17, 18], OST [35], and Cityscapes [7]. The results show our method achieves state-of-the-art image inpainting performance. For example, our method obtains the best FID score on the CelebA-HQ dataset with both regular and irregular masks, yielding a gain over the second best performer CTSDG [12]. By using segmentation results from DeepLabv3+ [6], our method still performs well on datasets without segmentation annotations (e.g., Places2 [50]).

Contributions. 1. We propose an end-to-end multi-modality guided transformer to learn interplay information from multiple modalities including RGB image, edge textures and semantic segmentation. 2. We develop the multi-scale spatial-aware attention mechanism with auxiliary denormalization to capture compact and discriminative multi-modal features to guide unbiased image inpainting. 3. Comprehensive results on several benchmarks demonstrate the effectiveness of our unbiased multi-modality guidance, especially for irregular masks.

[Figure 2 panels, left to right: GT, EC [25], SPG [32], Ours, Ours.]

Figure 2: Influence of biased prior guidance. The marked results use no edge prior for SPG [32] and no segmentation prior for EC [25]. The marked Ours denotes the variant of our multi-modality guided image inpainting method with inaccurate edge and segmentation priors, obtained by reducing the loss weights of the two auxiliary branches.

2 Related Work

Image inpainting.

Mainstream image inpainting methods employ the encoder-decoder architecture based on the U-Net [28]. For example, Pathak et al. [27] introduce an adversarial network [11] to help train the U-Net and mitigate the blurring caused by the pixel-level averaging property of a reconstruction loss. After that, Contextual Attention (CA) [42] adopts a two-stage coarse-to-fine model that weights the known region as the reference for the missing region. Using partial conv [22], Recurrent Feature Reasoning (RFR) [19] applies multiple iterations at the bottleneck of the encoder from the outside to the inside for large corrupted areas. Different from partial conv [22], which applies a heuristic mask update step to standard convolution, Gated Conv (GC) [43] improves this mask update process with a learnable convolution layer.

To better exploit the context between missing and uncorrupted regions, Iizuka et al. [16] first introduce multiple residual modules [13] of dilated convolution [41] as the bottleneck in the encoder. However, this may bring the "gridding" problem [5, 34] due to only sampling padded non-zero positions. That is, a single constant dilation rate results in either sparse convolution kernels (large hole rate) or difficulty crossing over large masks (small hole rate). To this end, Wang et al. [36] develop a generative multi-column network for image inpainting. Recently, Zeng et al. [45] propose the AOT blocks to aggregate contextual transformations from varying receptive fields, which capture both informative distant image contexts and rich patterns of interest. Different from the above methods, we introduce a new adaptive contextual bottleneck in the encoder, where dynamic gating weights different pathways of dilated convolutions according to various masks.

Image inpainting with auxiliary structures.

Due to the ill-posed nature of reconstructing missing regions, additional structural priors (e.g., edges, segmentation, and contours) are used to help image inpainting models produce more realistic results. Edge Connect (EC) [25] relies on the corrupted canny edge image to deliver finer inpainting results. Cao and Fu [4] introduce an extra encoder to infer precise wireframe sketches to bypass the poor coherence of canny edges. Exploiting the style and spatial consistency of semantic segmentation, the Segmentation Prediction and Guidance network (SPG) [32] is a two-stage segmentation and RGB image inpainting model, where DeepLabv3+ [6] is used to estimate the segmentation of the corrupted image. Another work [39] is a three-stage model that locates and fills the foreground object and its contour by disentangling the inter-object intersection.

However, the above multi-stage methods are usually time-consuming. For better efficiency, the Semantic Guidance and Evaluation (SGE) network [20] couples segmentation and image inpainting at different layers of the decoder, where the completed and confidence-scored segmentation guides image inpainting via semantic normalization [26]. Liao et al. [21] propose the Semantic-wise Attention Propagation (SWAP) module to capture the semantic relevance between segmentation and image textures in a non-local operation. Recently, Yang et al. [40] predict explicit edge embeddings with an attention mechanism to facilitate image inpainting under a multi-task learning strategy. It is worth mentioning that most aforementioned works use estimated auxiliary structures as the direct guidance of image inpainting. On the contrary, we develop the multi-scale spatial-aware attention module to guide image inpainting based on jointly learned discriminative features from unbiased auxiliary priors.

Transformers in image inpainting.

Inspired by the Vision Transformer [10], recent methods [9, 44] model the long-range dependencies between input features for better image inpainting. Deng et al. [9] learn relations between the corrupted and uncorrupted regions and exploit their respective internal closeness. Yu et al. [44] introduce a bidirectional autoregressive transformer that enables bidirectional modeling of the contextual information of missing regions. In contrast, we propose a new multi-modality guided transformer to capture interplay information across three modalities.

3 Multi-Modality Guided Transformer

The original image is degraded into a corrupted image, in which the pixel values of the missing region are defined as invisible pixels. Our goal is to produce a semantically reasonable and visually realistic reconstructed image from the corrupted input. Similar to previous works [27, 16, 43, 19], we retain the U-Net style encoder-decoder architecture. As illustrated in Fig. 1, the multi-modality guided transformer contains an encoder with adaptive contextual bottlenecks and a multi-modal mutual decoder with multi-scale spatial-aware attention, described in detail as follows.
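As a minimal illustration of this setup, the sketch below builds the corrupted input from an image and a binary mask; treating missing pixels as zero-valued and the tensor shapes used here are assumptions for illustration, not the exact convention of our pipeline.

```python
import torch

def corrupt(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Zero out the missing region of an image (illustrative convention).

    image: (B, 3, H, W) RGB tensor in [0, 1].
    mask:  (B, 1, H, W) binary tensor, 1 = missing (invisible) pixel.
    Returns the corrupted image used as the network input.
    """
    return image * (1.0 - mask)
```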

3.1 Encoder with Adaptive Contextual Bottlenecks

For better context reasoning, a multi-stream structure is used in the encoder to weight dilated convolutions and encode the current image content and missing region. Unlike simply stacking parameters as in the previous ASPP [5] and AOT [45], we develop a stack of Adaptive Contextual Bottlenecks (ACB) that adapt to the specific mask shape, size, and image context via dynamic gating. As shown in Fig. 3, the ACB module consists of four parallel pathways of convolutional layers with different dilation rates and one gating mask to weight the dilated convolutions. In this way, the encoder can enlarge the receptive field of the convolutions and find the most plausible pathway according to the current missing region.

Figure 3: Structure of Adaptive Contextual Bottlenecks in the encoder.

Given the corrupted image, the base features Φ_0 and the gating mask g_0 are initialized by the last (gated conv) layer of the encoder. Then the features Φ_i and gating mask g_i at each layer are updated by the ACB block. The gating mask g_i estimates the probability of the missing region from the feature map of the previous layer, i.e., g_i = GC(Φ_{i-1}), where GC denotes the gated conv operation [43]. For each pathway with dilation rate r, we compute the dilated feature map Φ_i^r from Φ_{i-1} and the corresponding weight. Similar to [38], the spatial-wise weight A_i^r is calculated from both average and max pooling of the concatenation of the dilated feature map and the gating mask, i.e., A_i^r = σ(FC([Avg(Φ_i^r ⊕ g_i); Max(Φ_i^r ⊕ g_i)])), where σ is the sigmoid function, Avg and Max are the average and maximal pooling respectively, FC denotes the fully-connected layer, and ⊕ is channel-wise concatenation; the gating mask of each pathway is updated accordingly. Finally, the feature map Φ_i at the i-th ACB layer is updated by the spatial-wise weighted summation of the dilated feature maps as

Φ_i = Σ_{r∈R} (A_i^r ⊙ Φ_i^r) / (Σ_{r'∈R} A_i^{r'}),    (1)

where R denotes the set of different dilation rates and ⊙ is the element-wise product. The fractional term denotes the element-wise product between the dilated feature map Φ_i^r and the attention vector A_i^r, normalized over all pathways, which weights each dilation block based on the mask and image context. For simplicity, we omit the subscript i in the following sections.
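For concreteness, the following PyTorch-style sketch shows one ACB layer under the description above; the channel widths, the exact gating convolution, the CBAM-style pooling, the normalization of the weighted sum, and the residual connection are illustrative assumptions rather than our exact configuration.

```python
import torch
import torch.nn as nn

class AdaptiveContextualBottleneck(nn.Module):
    """Sketch of one ACB layer: parallel dilated convolutions re-weighted by a
    spatial gate derived from an (assumed) gated-conv style mask predictor."""

    def __init__(self, channels: int, dilation_rates=(1, 2, 4, 8)):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True),
            )
            for r in dilation_rates
        ])
        # Soft mask g in [0, 1]: probability of the missing region (stand-in for gated conv).
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())
        # Spatial attention per pathway from avg/max channel pooling of [features ; gate].
        self.spatial_attn = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
            for _ in dilation_rates
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                                   # (B, 1, H, W)
        outs, weights = [], []
        for path, attn in zip(self.paths, self.spatial_attn):
            f = path(x)                                    # dilated features
            fg = torch.cat([f, g], dim=1)                  # concat features with gating mask
            pooled = torch.cat([
                fg.mean(dim=1, keepdim=True),              # average pooling over channels
                fg.max(dim=1, keepdim=True).values,        # max pooling over channels
            ], dim=1)                                      # (B, 2, H, W)
            a = attn(pooled)                               # spatial weight in [0, 1]
            outs.append(a * f)
            weights.append(a)
        # Normalized spatial-wise weighted sum over pathways (assumed form of Eq. (1)).
        fused = sum(outs) / (sum(weights) + 1e-6)
        return x + fused                                   # residual connection (assumption)
```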

3.2 Multi-modal Mutual Decoder

Given the enhanced features from the encoder, the decoder uses stacks of transformer blocks to jointly learn the structural multi-modal information. It consists of three branches, i.e., one inpainting branch to recover the damaged image, and two auxiliary branches with additional segmentation and edge priors.

As shown in Fig. 1, within each transformer block, we first calculate the attention among the feature maps from the three branches by the proposed Multi-Scale Spatial-aware Attention (MSSA). Then, the enhanced features are split to be combined with the previous feature maps of each branch for attention calculation at the next stage. Note that skip connections between the encoder and decoder are used to prevent network degradation. After three stages, we predict the inpainted image, edge map, and segmentation map. In this way, we leverage the structural features from the auxiliary branches to enforce the model to focus on discriminative interplay features for more realistic image inpainting.

To learn mutual features from different modalities, it is intuitive to simply concatenate or add the feature maps of the three branches. Nevertheless, such strategies may introduce duplicated and noisy content for image inpainting. To effectively integrate compact features from the auxiliary branches, we introduce a new Multi-Scale Spatial-aware Attention (MSSA) mechanism as follows.

Figure 4: Illustration of Multi-Scale Spatial-aware Attention.

Multi-scale spatial-aware attention. Based on the encoded feature maps, we use F_in, F_e, and F_s to denote the input feature maps of the inpainting branch, edge branch, and segmentation branch, respectively. As illustrated in Fig. 4, we combine the feature maps from the three branches by the following Auxiliary DeNormalization (ADN):

ADN(F_in) = γ ⊙ LN(F_in) + β,    (2)

where ⊕ denotes matrix concatenation along the channel dimension and ⊙ the element-wise multiplication. LN denotes layer normalization [2]. γ and β are the affine transformation parameters learned by two convolutional layers from F_e ⊕ F_s (see the top-right corner of Fig. 4). In this way, the multi-modal features are merged based on context from the auxiliary structures that varies with respect to the spatial location.
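A minimal sketch of ADN under the reconstruction above is given below; using GroupNorm as a layer-normalization stand-in for convolutional maps, and predicting the affine parameters only from the concatenated auxiliary features, are assumptions.

```python
import torch
import torch.nn as nn

class AuxiliaryDeNormalization(nn.Module):
    """Sketch of ADN: normalize the inpainting features, then modulate them with
    spatially varying affine parameters predicted from the auxiliary features."""

    def __init__(self, channels: int):
        super().__init__()
        # Normalization over all channels, acting as a layer-norm stand-in for (B, C, H, W) maps.
        self.norm = nn.GroupNorm(1, channels, affine=False)
        self.to_gamma = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, f_inp, f_edge, f_seg):
        aux = torch.cat([f_edge, f_seg], dim=1)   # channel-wise concatenation of auxiliary features
        gamma = self.to_gamma(aux)                # spatially varying scale
        beta = self.to_beta(aux)                  # spatially varying shift
        return gamma * self.norm(f_inp) + beta
```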

Then, the merged features are embedded into query Q, key K and value V. Similar to [46], the embedded feature map is spatially split into patches of size h × w × c, where h, w and c denote the height, width and channels of a patch, respectively. The normalized self-attention between patches p and q can be calculated as α_{p,q} = exp(Q_p K_q^T / √d) / Σ_{q'} exp(Q_p K_{q'}^T / √d), where d is the embedding dimension of a patch. Note that we can perform multi-head self-attention like [10]. Thus the feature map of each patch is updated in a non-local form, i.e., V̂_p = Σ_q α_{p,q} V_q.
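The patch-level attention can be sketched as follows; the learned query/key/value projections and the multi-head split are omitted for brevity, and the patch size is illustrative.

```python
import torch
import torch.nn.functional as F

def patch_self_attention(x: torch.Tensor, patch: int = 8) -> torch.Tensor:
    """Sketch of patch-level self-attention: features are split into non-overlapping
    patches, each patch becomes one token, and scaled dot-product attention mixes
    the tokens in a non-local fashion. H and W are assumed divisible by `patch`."""
    b, c, h, w = x.shape
    # (B, C, H, W) -> (B, N, patch*patch*C) tokens, N = (H/patch) * (W/patch)
    tokens = F.unfold(x, kernel_size=patch, stride=patch).transpose(1, 2)
    q = k = v = tokens                                   # shared projection omitted here
    attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
    out = attn @ v                                       # non-local update of every patch
    # Fold tokens back to the original spatial layout.
    return F.fold(out.transpose(1, 2), output_size=(h, w), kernel_size=patch, stride=patch)
```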

Comparison with existing denormalization methods. Our ADN is related to two previous denormalization methods, AdaIN [15] and SPADE [26]. As shown in Fig. 5, we compare the networks of the three denormalization methods. They differ in two aspects:

  • AdaIN [15] and SPADE [26] learn the affine transformation parameters based on the predicted auxiliary structures. Without ground-truth in the testing phase, the predicted auxiliary structures are inevitably biased and result in inferior performance. In contrast, our ADN is based on the multi-modal features from the two auxiliary branches.

  • AdaIN [15] leverages the image's mean and variance instead of learnable affine parameters. SPADE [26] learns the spatial style of features by two convolutions after Batch Normalization. In contrast, we combine features from both the inpainting and auxiliary branches to learn the affine parameters.

Figure 5: (a) Our Auxiliary DeNormalization (ADN). (b) SPatially-Adaptive DEnormalization (SPADE) [26]. (c) Adaptive Instance Normalization (AdaIN) [15]. LN, BN and IN denote layer, batch and instance normalization respectively.

Gated feed-forward. Finally, we piece all the patch feature maps together and reshape them to the original scale of the input inpainting features. After the gated feed-forward layer, we output the final feature maps for inpainted image prediction. Similar to gated conv [43], the gated feed-forward layer can ease the color discrepancy problem by detecting potentially corrupted and uncorrupted regions.
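A gated feed-forward layer in this spirit can be sketched as below; the expansion ratio and kernel sizes are placeholders rather than our exact settings.

```python
import torch
import torch.nn as nn

class GatedFeedForward(nn.Module):
    """Sketch of a gated feed-forward layer in the spirit of gated conv [43]:
    one branch produces features, a sibling branch produces a soft gate."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.feat = nn.Conv2d(channels, hidden, 3, padding=1)
        self.gate = nn.Conv2d(channels, hidden, 3, padding=1)
        self.proj = nn.Conv2d(hidden, channels, 1)

    def forward(self, x):
        # Element-wise gating suppresses responses in (potentially) corrupted regions.
        return self.proj(torch.relu(self.feat(x)) * torch.sigmoid(self.gate(x)))
```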

3.3 Optimization

To train our network, the overall loss consists of three terms, i.e.,

L = L_inp + λ_e L_edge + λ_s L_seg,    (3)

where L_inp, L_edge and L_seg denote the loss terms of the inpainting branch, edge branch and segmentation branch respectively, and λ_e and λ_s are the balancing factors. The inpainting loss L_inp follows the work in [22]. Similar to [25], we use both binary cross-entropy and adversarial loss functions to train the edge branch, i.e.,

L_edge = L_bce(Ê, E) + λ_adv L_adv,    (4)

where λ_adv is the balancing weight. The edge branch predicts the edge structure Ê, and the discriminator D judges whether the predicted edge is fake or real. Ê is a probability map between 0 and 1 for the reconstructed edge, while E is the ground-truth edge obtained by the canny operator [25]. D denotes the spectral normalization discriminator [24], which is composed of five convolutional layers. For the segmentation branch, we use the cross-entropy loss L_seg = −Σ_p y_p log ŷ_p, where y_p and ŷ_p denote the ground-truth category and the predicted probability for pixel p.
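The overall objective can be sketched as follows; the inpainting loss of [22] (which combines several terms) is replaced by a plain L1 term for brevity, the adversarial term is written in a non-saturating form, and all weights are placeholders rather than our tuned values.

```python
import torch
import torch.nn.functional as F

def total_loss(pred_img, gt_img, pred_edge, gt_edge, disc_logits_fake,
               pred_seg_logits, gt_seg, lambda_e=1.0, lambda_s=1.0, lambda_adv=0.1):
    """Sketch of Eq. (3)/(4) with placeholder weights.

    pred_edge / gt_edge: probability maps in [0, 1].
    disc_logits_fake: discriminator logits on the predicted edge.
    pred_seg_logits: (B, K, H, W) class logits; gt_seg: (B, H, W) class indices.
    """
    l_inp = F.l1_loss(pred_img, gt_img)                    # stand-in for the inpainting loss of [22]
    l_bce = F.binary_cross_entropy(pred_edge, gt_edge)     # edge reconstruction term
    l_adv = F.binary_cross_entropy_with_logits(            # generator-side adversarial term
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    l_edge = l_bce + lambda_adv * l_adv
    l_seg = F.cross_entropy(pred_seg_logits, gt_seg)       # pixel-wise cross-entropy
    return l_inp + lambda_e * l_edge + lambda_s * l_seg
```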

4 Experiment

We compare our method with state-of-the-art approaches on three large-scale datasets. An extensive ablation study is conducted to investigate the important designs in our model. All experiments are conducted on two 24G TITAN RTX GPUs.

Datasets.

The CelebA-HQ dataset [17, 18] is a large-scale dataset of high-definition face images, where each image has a semantic segmentation mask covering the facial categories. The Outdoor dataset (OST) [35] provides training and testing images of several outdoor semantic categories, collected from outdoor scene photography. The Cityscapes dataset [7] contains street view images with pixel-level category annotations. We expand the number of training images in this dataset, i.e., images from the original training and test sets are used for training, and images from the validation set are used for testing. In addition, the Places2 dataset [50] contains millions of images covering a wide variety of scene types. We generate both regular and irregular masks to verify the ability of image inpainting methods. For regular masks, we draw centered square masks (of dataset-specific sizes) for CelebA-HQ, OST, and Cityscapes. For irregular masks, we use the masks from [19] for CelebA-HQ and the masks from [22] for Cityscapes and OST.
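A regular (centered square) mask of the kind described above can be generated as in the sketch below; the square size ratio is a placeholder since the exact mask sizes are not reproduced here, and the irregular masks are taken directly from the mask sets of [19] and [22].

```python
import numpy as np

def center_square_mask(h: int, w: int, ratio: float = 0.5) -> np.ndarray:
    """Build a regular centered square mask; 1 marks the missing region.
    The side length is `ratio` of the shorter image side (placeholder value)."""
    mask = np.zeros((h, w), dtype=np.float32)
    side = int(min(h, w) * ratio)
    top, left = (h - side) // 2, (w - side) // 2
    mask[top:top + side, left:left + side] = 1.0
    return mask
```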

metric  method       irregular (easy)  irregular (medium)  irregular (hard)  regular
PSNR↑   GC [43]      29.30             25.72               23.77             25.75
        RFR [19]     29.22             26.12               24.31             24.85
        CMGAN [48]   29.06             25.79               23.90             24.33
        ICT [33]     28.07             24.56               22.70             24.51
        CTSDG [12]   29.59             26.59               24.69             26.56
        Ours         29.94             26.88               25.12             26.70
SSIM↑   GC [43]      0.96              0.93                0.90              0.90
        RFR [19]     0.96              0.94                0.91              0.87
        CMGAN [48]   0.97              0.94                0.91              0.87
        ICT [33]     0.96              0.92                0.89              0.87
        CTSDG [12]   0.97              0.94                0.92              0.91
        Ours         0.97              0.95                0.93              0.92
FID↓    GC [43]      15.00             18.41               21.28             22.45
        RFR [19]     7.37              10.74               13.45             14.35
        CMGAN [48]   6.80              11.85               14.12             12.91
        ICT [33]     6.54              11.80               15.93             11.90
        CTSDG [12]   7.80              10.14               13.30             14.52
        Ours         6.47              9.32                11.61             11.40
Table 1: Quantitative comparison with the state-of-the-art approaches on CelebA-HQ. Easy, medium, and hard denote irregular masks with increasing coverage ratios. ↑ means higher is better and ↓ means lower is better. Best and second best results are highlighted and underlined.

Similar to the previous works [20, 45], we use three metrics as follows. Peak Signal to Noise Ratio (PSNR) is an objective evaluation metric to assess the quality of generated images. Structural Similarity Index (SSIM) [37] uses the mean as an estimate of luminance, the standard deviation as an estimate of contrast, and the covariance as a measure of structural similarity to compare the difference between the generated and original images. Fréchet Inception Distance (FID) [14] evaluates the accuracy and diversity of generated images: the Inception network [29] is used to extract image features, and their mean and covariance matrix are then computed to estimate the distance between the ground-truth and generated data distributions. According to [47], deep metrics like FID are close to human perception.
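PSNR and SSIM can be computed as in the sketch below (assuming a recent scikit-image); FID additionally requires Inception features collected over whole image sets and is typically computed with an external tool.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(gt: np.ndarray, pred: np.ndarray):
    """Compute PSNR and SSIM between a ground-truth and an inpainted image,
    both given as HxWx3 float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```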

[Figure 6 panels, left to right: Masked input, GC, CMGAN, ICT, CTSDG, Ours, GT.]

Figure 6: Qualitative results of existing methods on CelebA-HQ.


Figure 7: Qualitative results of our method on Cityscapes (columns 1 to 4) and OST (columns 5 to 8).

4.1 Implementation Details

Our model is supervised by auxiliary structures including edge textures and semantic segmentation. For the edge structure, we employ the canny detector [25] to generate edges of images. Besides, the CelebA-HQ, Cityscapes and OST datasets all contain hand-crafted semantic segmentation annotations, hence we directly adopt these official labels for the segmentation branch. More implementation details are provided in the supplementary material.

4.2 Result Analysis

We compare our model with several state-of-the-art methods including GC [43], RFR [19], CMGAN [48], ICT [33], CTSDG [12], SPG [32], SGINet [1], SGE [20], and SWAP [21]. A quantitative comparison is carried out on three datasets in terms of both regular and irregular masks with different coverage ratios. Full comparison results with [45, 23, 49, 25] are provided in the appendix.

From Table 1, our method achieves the best or comparable performance among state-of-the-art image inpainting approaches, most of which do not adopt auxiliary priors. Our method produces a much better FID score than the others for both regular and irregular masks, indicating that our inpainted results are more realistic. In Table 2, we compare several auxiliary prior guided inpainting approaches [25, 32, 20, 21]. For a fair comparison with the methods relying on only one auxiliary structure, we construct two variants, denoted by "Ours w/o seg." and "Ours w/o edge". Compared with existing methods, our method achieves considerable gains with respect to PSNR and FID, especially on irregular masks. This is because our method focuses on the interplay representation from three modalities rather than directly guiding the image inpainting branch by predicted auxiliary structures (see Table 4).

In addition, we provide some visual examples on the CelebA-HQ dataset in Fig. 6. It can be seen that our method can generate more semantically consistent results compared with other approaches. More learned auxiliary priors of our method from CityScapes and OST datasets are visualized in Fig. 7.

method  auxiliary prior  |  OST regular (PSNR SSIM FID)  OST irregular (PSNR SSIM FID)  |  Cityscapes regular (PSNR SSIM FID)  Cityscapes irregular (PSNR SSIM FID)
EC [25] edge 19.32 0.76 41.25 19.12 0.74 42.27 21.71 0.76 19.87 17.63 0.72 39.04
SPG [32] seg. 18.04 0.70 45.31 17.85 0.74 50.03 20.14 0.71 23.21 16.41 0.67 43.63
SGINet [1] seg. - - - - - - 25.74 0.87 23.02 18.53 0.77 57.53
SGE [20] seg. 20.53 0.81 40.67 19.46 0.76 39.14 23.41 0.85 18.67 17.78 0.74 41.45
SWAP [21] edge, seg. 21.18 0.81 38.15 20.31 0.80 36.74 23.89 0.84 18.14 17.86 0.76 38.18
Ours w/o seg. edge 20.91 0.76 41.85 21.48 0.80 39.00 25.10 0.86 19.33 19.17 0.78 37.50
Ours w/o edge seg. 21.80 0.77 40.96 22.58 0.81 36.03 25.95 0.87 17.85 20.49 0.79 34.79
Ours edge, seg. 21.84 0.77 40.15 23.15 0.82 35.77 26.13 0.88 17.52 20.43 0.79 33.45
Table 2: Quantitative comparison with previous auxiliary prior guided approaches on OST and Cityscapes datasets.
Figure 8: Visual comparisons on Places2. From left to right: input, GC [43], EC [25], our method, and Ground Truth.
Figure 9: Segmentation results with different bottlenecks on CelebA-HQ dataset with regular center masks.

Additional results on Places2.

Similar to SGE [20] and SWAP [21], we also conduct an additional experiment on the Places2 dataset [50] for a comprehensive evaluation. Since there is no ground-truth segmentation, we use the segmentation results from DeepLabv3+ [6] to supervise the segmentation branch in our model. As shown in Fig. 8, the visual results show that our method still generates realistic inpainted images without ground-truth segmentation labels.

4.3 Ablation Study

To verify the effectiveness of the proposed modules in our network, the ablation experiments are carried out on the CelebA-HQ dataset.

edge branch segmentation branch PSNR SSIM FID
25.88 0.90 12.36
26.47 0.91 11.42
26.19 0.90 11.95
26.70 0.92 11.40
Table 3: Contribution of two auxiliary branches in our method.

Contribution of auxiliary branches.

In Table 3, we construct three variants to verify the contribution of the two auxiliary branches in our method. By learning from the two auxiliary modalities, our method considerably outperforms the non-auxiliary variant w.r.t. PSNR, SSIM, and FID. In addition, semantic segmentation contributes slightly more to image inpainting than edge textures. In summary, our Multi-Modal Mutual Decoder enriches the semantic content of the inpainting branch by cross-attending to the segmentation and edge structures.

variant  biased prior  attention mechanism  PSNR   SSIM  FID
MMT-1    ✓             concat               26.17  0.89  20.01
MMT-2    ✓             AdaIN [15]           26.17  0.89  21.71
MMT-3    ✓             SPADE [26]           26.29  0.90  14.60
MMT-4    ✓             MSSA+ADN             26.24  0.91  12.59
MMT-5                  MSSA+add             26.37  0.91  12.64
MMT-6                  MSSA+conv            26.50  0.91  11.90
MMT-7                  MSSA+AdaIN [15]      26.36  0.91  12.81
MMT-8                  MSSA+SPADE [26]      26.42  0.91  12.17
MMT-9                  MSSA+ADN             26.70  0.92  11.40
Table 4: Comparison with different attention mechanisms. A check mark in the "biased prior" column denotes variants directly guided by predicted auxiliary structures.

Biased prior guidance.

Different from previous works [39, 32, 25, 20, 21] relying on biased prior guidance from predicted auxiliary structures, we jointly learn the interplay information of multi-modal features across the three branches and guide image inpainting based on ADN. To demonstrate its effectiveness, we construct four variants that are directly guided by predicted auxiliary structures. In practice, we first add one convolutional layer at different stages to predict the auxiliary structures (Fig. 1), and then combine multi-modal features (Fig. 4).

In Table 4, MMT-1 denotes concatenating the predicted structures with the feature maps in the inpainting branch. MMT-2, MMT-3 and MMT-4 denote using AdaIN [15], SPADE [26], and MSSA with ADN to calculate the affine transformation parameters based on the predicted structures, respectively. Compared with our method without biased prior guidance (i.e., MMT-9), the variants based on predicted auxiliary structures show significantly worse FID scores. The results support our statement that predicted structures may introduce additional noise into image inpainting when no ground-truth is available.

Effectiveness of multi-scale spatial-aware attention.

To verify the effectiveness of Multi-Scale Spatial-aware Attention (MSSA), we construct four baseline feature fusion strategies, MMT-5 to MMT-8, in Table 4. MMT-5 means that we directly perform element-wise summation on the features from the three branches, while MMT-6 means that we concatenate the features from the three branches and then fuse them by two convolutional layers.

From Table 4, our MSSA performs the best in terms of all three metrics. Compared with simple addition or convolution, our MSSA provides reliable cross-attention among multiple modalities and thus yields higher-quality reconstructed images. We also replace ADN with AdaIN [15] and SPADE [26] in MSSA for MMT-7 and MMT-8 respectively. The results show that our ADN performs better than the previous normalization methods, demonstrating its effectiveness.

Effectiveness of adaptive contextual bottlenecks.

In Table 5, we compare our Adaptive Contextual Bottlenecks (ACB) with the vanilla ResNet block [13] and the recently proposed AOT block [45]. ACB@N denotes N layers of ACB modules; RES@8 and AOT@8 denote stacks of 8 ResNet blocks [13] or AOT blocks [45], respectively. The widened variants quadruple the channels of feature maps in ResNet blocks, or copy the base feature maps for the different pathways in AOT blocks. The results show that the performance of ACB improves as the number of blocks increases from 2 to 8. Using ResNet or AOT blocks achieves performance similar to that of ACB blocks. It is worth mentioning that ResNet and AOT blocks have fewer channels of feature maps in each pathway. For a fair comparison, we construct the two widened variants of RES@8 and AOT@8 with the same channels as our ACB blocks. However, more channels in the feature maps do not help improve the performance of ResNet or AOT blocks. We speculate that the gating updating scheme in our ACB can reduce the influence of redundant noisy context when more channels of feature maps are used.

Besides, the mean of category-wise intersection-over-union (mIoU) [6] is another metric to validate the influence of bottleneck modules on segmentation inpainting. Our ACB modules still outperform the other two blocks by a clear margin. The segmentation results in Fig. 9 also show that our ACB module generates more accurate segmentation. If the number of bottlenecks is increased, some isolated errors in the segmentation can be removed (see the 3rd and 4th columns in Fig. 9).

bottleneck PSNR SSIM FID mIoU%
RES@8 26.48 0.91 12.54 61.93
RES@8 26.23 0.91 13.26 60.11
AOT@8 26.51 0.91 11.61 63.68
AOT@8 26.29 0.91 14.17 62.28
ACB@2 26.48 0.91 12.18 63.54
ACB@4 26.60 0.91 12.24 65.84
ACB@6 26.61 0.91 12.09 66.16
ACB@8 26.70 0.92 11.40 67.13
Table 5: Comparison between different bottlenecks.
method params (M) MACs (G) speed (FPS)
SPG [32] 119.64 58.68 2.03
EC [25] 27.06 122.67 67.21
CTSDG [12] 52.15 17.67 36.99
RFR [19] 31.22 206.12 15.56
CSA [23] 132.11 55.23 1.37
RES@8 [13] 22.76 96.10 40.82
AOT@8 [45] 27.48 100.93 30.96
Ours (ACB@2) 22.76 96.10 40.88
Ours (ACB@8) 51.09 125.11 29.49
Table 6: Efficiency of image inpainting networks.

Efficiency comparison.

From Table 6, we compare the number of parameters, the computational complexity (MACs), and the running speed (FPS) of existing methods. The two-stage SPG [32] and CSA [23], composed of complex sub-networks at each stage, run much more slowly than end-to-end methods. In contrast, EC [25] consists of two simple sub-networks for edge prediction and image inpainting, resulting in a fast running speed but inferior performance. RFR [19] is an end-to-end model but predicts the inpainted results with its decoding heads recurrently. In terms of bottlenecks in the encoder, our ACB@2 achieves performance similar to AOT@8 with a faster speed. By using 8 ACB blocks, our method remains efficient while delivering state-of-the-art performance among end-to-end methods.

Limitation discussion.

Although our model generates promising results in most cases, it fails to recognize and recover unseen semantic knowledge, and hence produces strange artifacts in complex scenes with large masks. Note that this weakness also affects other methods. It indicates that an image inpainting model requires not only generative but also recognition capability. For example, our method can synthesize a human silhouette but lacks precise semantic details.

5 Conclusion

In this paper, we propose an end-to-end Multi-modality Guided Transformer for image inpainting, which enriches coupled spatial features from shared multi-modal representations (i.e., RGB image, semantic segmentation and edge textures). The proposed Multi-Scale Spatial-aware Attention can integrate compact discriminative features from multiple modalities via Auxiliary DeNormalization. Meanwhile, we introduce the Adaptive Contextual Bottlenecks in the encoder to enhance context reasoning for more semantically consistent inpainting of the missing region. To the best of our knowledge, this is the first work to analyze the biased prior problem in image inpainting.

Acknowledgements and Declaration of Conflicting Interests.

This work was supported by the Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-JSC038. Libo Zhang was supported by the Youth Innovation Promotion Association, CAS (2020111). Dr. Du and his employer received no financial support for the research, authorship, and/or publication of this article.

References

  • [1] P. Ardino, Y. Liu, E. Ricci, B. Lepri, and M. D. Nadai (2020) Semantic-guided inpainting network for complex urban scenes manipulation. In ICPR, pp. 9280–9287. Cited by: §1, §4.2, Table 2.
  • [2] L. J. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. CoRR abs/1607.06450. Cited by: §3.2.
  • [3] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. TOG, pp. 24. Cited by: §1.
  • [4] C. Cao and Y. Fu (2021) Learning a sketch tensor space for image inpainting of man-made scenes. In ICCV, Cited by: §1, §2.
  • [5] L. Chen, G. Papandreou, F. Schroff, and H. Adam (2017) Rethinking atrous convolution for semantic image segmentation. CoRR abs/1706.05587. Cited by: §2, §3.1.
  • [6] L. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, pp. 833–851. Cited by: §1, §1, §2, §4.2, §4.3.
  • [7] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In CVPR, pp. 3213–3223. Cited by: §1, §4.
  • [8] A. Criminisi, P. Pérez, and K. Toyama (2003) Object removal by exemplar-based inpainting. In CVPR, pp. 721–728. Cited by: §1.
  • [9] Y. Deng, S. Hui, S. Zhou, D. Meng, and J. Wang (2021) Learning contextual transformer network for image inpainting. In MM, pp. 2529–2538. Cited by: §2.
  • [10] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby (2021) An image is worth 16x16 words: transformers for image recognition at scale. In ICLR, Cited by: §2, §3.2.
  • [11] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio (2014) Generative adversarial nets. In NeurIPS, pp. 2672–2680. Cited by: §2.
  • [12] X. Guo, H. Yang, and D. Huang (2021) Image inpainting via conditional texture and structure dual generation. In ICCV, pp. 14114–14123. Cited by: §1, §4.2, Table 1, Table 6.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778. Cited by: §2, §4.3, Table 6.
  • [14] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, pp. 6626–6637. Cited by: §4.
  • [15] X. Huang and S. J. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, pp. 1510–1519. Cited by: Figure 5, 1st item, 2nd item, §3.2, §4.3, §4.3, Table 4.
  • [16] S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. TOG, pp. 107:1–107:14. Cited by: §2, §3.
  • [17] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2018) Progressive growing of gans for improved quality, stability, and variation. In ICLR, Cited by: §1, §4.
  • [18] C. Lee, Z. Liu, L. Wu, and P. Luo (2020) MaskGAN: towards diverse and interactive facial image manipulation. In CVPR, pp. 5548–5557. Cited by: §1, §4.
  • [19] J. Li, N. Wang, L. Zhang, B. Du, and D. Tao (2020) Recurrent feature reasoning for image inpainting. In CVPR, pp. 7757–7765. Cited by: §1, §2, §3, §4.2, §4.3, Table 1, Table 6, §4.
  • [20] L. Liao, J. Xiao, Z. Wang, C. Lin, and S. Satoh (2020) Guidance and evaluation: semantic-aware image inpainting for mixed scenes. In ECCV, pp. 683–700. Cited by: §1, §2, §4.2, §4.2, §4.2, §4.3, Table 2, §4.
  • [21] L. Liao, J. Xiao, Z. Wang, C. Lin, and S. Satoh (2021) Image inpainting guided by coherence priors of semantics and textures. In CVPR, pp. 6539–6548. Cited by: §1, §2, §4.2, §4.2, §4.2, §4.3, Table 2.
  • [22] G. Liu, F. A. Reda, K. J. Shih, T. Wang, A. Tao, and B. Catanzaro (2018) Image inpainting for irregular holes using partial convolutions. In ECCV, pp. 89–105. Cited by: §2, §3.3, §4.
  • [23] H. Liu, B. Jiang, Y. Xiao, and C. Yang (2019) Coherent semantic attention for image inpainting. In ICCV, pp. 4169–4178. Cited by: §1, §4.2, §4.3, Table 6.
  • [24] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In ICLR, Cited by: §3.3.
  • [25] K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi, and M. Ebrahimi (2019) EdgeConnect: structure guided image inpainting using edge prediction. In ICCVW, pp. 3265–3274. Cited by: Figure 2, Figure 2, §1, §1, §1, §1, §2, §3.3, Figure 9, §4.1, §4.2, §4.2, §4.3, §4.3, Table 2, Table 6.
  • [26] T. Park, M. Liu, T. Wang, and J. Zhu (2019) Semantic image synthesis with spatially-adaptive normalization. In CVPR, pp. 2337–2346. Cited by: §2, Figure 5, 1st item, 2nd item, §3.2, §4.3, §4.3, Table 4.
  • [27] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In CVPR, pp. 2536–2544. Cited by: §1, §2, §3.
  • [28] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241. Cited by: §1, §2.
  • [29] T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training gans. In NeurIPS, pp. 2226–2234. Cited by: §4.
  • [30] R. Shetty, M. Fritz, and B. Schiele (2018) Adversarial scene editing: automatic object removal from weak supervision. In NeurIPS, pp. 7717–7727. Cited by: §1.
  • [31] L. Song, J. Cao, L. Song, Y. Hu, and R. He (2019) Geometry-aware face completion and editing. In AAAI, pp. 2506–2513. Cited by: §1.
  • [32] Y. Song, C. Yang, Y. Shen, P. Wang, Q. Huang, and C.-C. J. Kuo (2018) SPG-net: segmentation prediction and guidance network for image inpainting. In BMVC, pp. 97. Cited by: Figure 2, Figure 2, §1, §1, §2, §4.2, §4.2, §4.3, §4.3, Table 2, Table 6.
  • [33] Z. Wan, J. Zhang, D. Chen, and J. Liao (2021) High-fidelity pluralistic image completion with transformers. In ICCV, pp. 4672–4681. Cited by: §4.2, Table 1.
  • [34] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. W. Cottrell (2018) Understanding convolution for semantic segmentation. In WACV, pp. 1451–1460. Cited by: §2.
  • [35] X. Wang, K. Yu, C. Dong, and C. C. Loy (2018) Recovering realistic texture in image super-resolution by deep spatial feature transform. In CVPR, pp. 606–615. Cited by: §1, §4.
  • [36] Y. Wang, X. Tao, X. Qi, X. Shen, and J. Jia (2018) Image inpainting via generative multi-column convolutional neural networks. In NeurIPS, pp. 329–338. Cited by: §2.
  • [37] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. TIP, pp. 600–612. Cited by: §4.
  • [38] S. Woo, J. Park, J. Lee, and I. S. Kweon (2018) CBAM: convolutional block attention module. In ECCV, Vol. 11211, pp. 3–19. Cited by: §3.1.
  • [39] W. Xiong, J. Yu, Z. Lin, J. Yang, X. Lu, C. Barnes, and J. Luo (2019) Foreground-aware image inpainting. In CVPR, pp. 5840–5848. Cited by: §1, §1, §2, §4.3.
  • [40] J. Yang, Z. Qi, and Y. Shi (2020) Learning to incorporate structure knowledge for image inpainting. In AAAI, pp. 12605–12612. Cited by: §2.
  • [41] F. Yu, V. Koltun, and T. A. Funkhouser (2017) Dilated residual networks. In CVPR, pp. 636–644. Cited by: §2.
  • [42] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018) Generative image inpainting with contextual attention. In CVPR, pp. 5505–5514. Cited by: §1, §2.
  • [43] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2019) Free-form image inpainting with gated convolution. In ICCV, pp. 4470–4479. Cited by: §2, §3.1, §3.2, §3, Figure 9, §4.2, Table 1.
  • [44] Y. Yu, F. Zhan, R. Wu, J. Pan, K. Cui, S. Lu, F. Ma, X. Xie, and C. Miao (2021) Diverse image inpainting with bidirectional and autoregressive transformers. In MM, pp. 69–78. Cited by: §2.
  • [45] Y. Zeng, J. Fu, H. Chao, and B. Guo (2021) Aggregated contextual transformations for high-resolution image inpainting. CoRR abs/2104.01431. Cited by: §2, §3.1, §4.2, §4.3, Table 6, §4.
  • [46] Y. Zeng, J. Fu, and H. Chao (2020) Learning joint spatial-temporal transformations for video inpainting. In ECCV, pp. 528–543. Cited by: §3.2.
  • [47] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pp. 586–595. Cited by: §4.
  • [48] S. Zhao, J. Cui, Y. Sheng, Y. Dong, X. Liang, E. I. Chang, and Y. Xu (2021) Large scale image completion via co-modulated generative adversarial networks. In ICLR, Cited by: §4.2, Table 1.
  • [49] C. Zheng, T. Cham, and J. Cai (2019) Pluralistic image completion. In CVPR, pp. 1438–1447. Cited by: §4.2.
  • [50] B. Zhou, À. Lapedriza, A. Khosla, A. Oliva, and A. Torralba (2018) Places: A 10 million image database for scene recognition. TPAMI, pp. 1452–1464. Cited by: §1, §4.2, §4.