Bringing Old Photos Back to Life

by Ziyu Wan, et al.

We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex, and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network that leverages real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces. The translation between these two latent spaces is then learned with synthetic paired data. This translation generalizes well to real photos because the domain gap is closed in the compact latent space. Besides, to address multiple degradations mixed in one old photo, we design a global branch with a partial nonlocal block targeting the structured defects, such as scratches and dust spots, and a local branch targeting the unstructured defects, such as noise and blurriness. The two branches are fused in the latent space, leading to an improved capability to restore old photos from multiple defects. The proposed method outperforms state-of-the-art methods in terms of visual quality for old photo restoration.




1 Introduction

Photos are taken to freeze happy moments that would otherwise be gone. Even as time goes by, one can still evoke memories of the past by viewing them. Nonetheless, old photo prints deteriorate when kept in poor environmental conditions, causing permanent damage to the valuable photo content. Fortunately, as mobile cameras and scanners become more accessible, people can now digitize their photos and invite a skilled specialist to restore them. However, manual retouching is usually laborious and time-consuming, leaving piles of old photos unrestored. Hence, it is appealing to design automatic algorithms that can instantly repair old photos for those who wish to bring old photos back to life.

Prior to the deep learning era, there were some attempts [stanco2003towards, bruni2004generalized, chang2005photo, giakoumis2005digital] to restore photos by automatically detecting localized defects such as scratches and blemishes and filling in the damaged areas with inpainting techniques. Yet these methods focus on completing the missing content, and none of them can repair spatially-uniform defects such as film grain, sepia effect, color fading, etc., so restored photos still appear outdated compared to modern photographic images. With the emergence of deep learning, one can address a variety of low-level image restoration problems [zhang2017learning, zhang2017beyond, dong2014learning, xu2014deep, ren2016single, zhang2019deep, he2018deep, chen2018gated] by exploiting the powerful representation capability of convolutional neural networks, i.e., learning the mapping for a specific task from a large amount of synthetic images.

The same framework, however, does not apply to old photo restoration. First, the degradation process of old photos is rather complex, and there exists no degradation model that can realistically render old photo artifacts. Therefore, a model learned from such synthetic data generalizes poorly to real photos. Second, old photos are plagued with a compound of degradations that inherently require different strategies for repair: unstructured defects that are spatially homogeneous, e.g., film grain and color fading, should be restored by utilizing pixels in the neighborhood, whereas structured defects, e.g., scratches and dust spots, should be repaired with global image context.

To circumvent these issues, we formulate old photo restoration as a triplet domain translation problem. Different from previous image translation methods [isola2017image], we leverage data from three domains (i.e., real old photos, synthetic images, and the corresponding ground truth), and the translation is performed in latent space. Synthetic images and real photos are first transformed to the same latent space with a shared variational autoencoder [kingma2013auto] (VAE). Meanwhile, another VAE is trained to project ground truth clean images into the corresponding latent space. The mapping between the two latent spaces is then learned with the synthetic image pairs, which restores corrupted images to clean ones. The advantage of latent restoration is that it generalizes well to real photos, because the domain alignment within the first VAE closes the domain gap. Besides, we differentiate the mixed degradation and propose a partial nonlocal block that considers the long-range dependencies of latent features to specifically address the structured defects during latent translation. In comparison with several leading restoration methods, we demonstrate the effectiveness of our approach in restoring multiple degradations of real photos.

2 Related Work

Single degradation image restoration.    Existing image degradation can be roughly categorized into two groups: unstructured degradation such as noise, blurriness, color fading, and low resolution, and structured degradation such as holes, scratches, and spots. For the former, traditional works often impose different image priors, including non-local self-similarity [buades2005non, mairal2009non, dabov2007image], sparsity [elad2006image, mairal2007sparse, yang2010image, xie2012image], and local smoothness [weiss2007makes, babacan2008total, li2009markov]. Recently, many deep learning based methods have also been proposed for different image degradations, such as image denoising [zhang2017learning, zhang2017beyond, zhang2018ffdnet, mao2016image, lefkimmiatis2018universal, liu2018non, zhang2019rnan], super-resolution [dong2014learning, kim2016accurate, ledig2017photo, wang2018esrgan, zhang2018residual], and deblurring [xu2014deep, sun2015learning, nah2017deep, kupyn2018deblurgan].

Compared to unstructured degradation, structured degradation is more challenging and is often modeled as an image inpainting problem. Thanks to their powerful semantic modeling ability, most existing best-performing inpainting methods are learning based. For example, [liu2018image] masks out the hole regions within the convolution operator and enforces the network to focus on non-hole features only. To obtain better inpainting results, many other methods consider both local patch statistics and global structures. Specifically, [yu2018generative] and [liu2019coherent] propose to employ an attention layer that utilizes the remote context, and [ren2019structureflow] explicitly estimates the appearance flow so that textures in the hole regions can be directly synthesized from the corresponding patches.

Whether for unstructured or structured degradation, although the above learning-based methods can achieve remarkable results, they are all trained on synthetic data, so their performance on real data highly depends on the quality of that synthetic data. Real old images are often seriously degraded by a mixture of unknown degradations, whose underlying degradation process is much more difficult to characterize accurately. In other words, a network trained on synthetic data only will suffer from the domain gap problem and perform poorly on real old photos. In this paper, we model real old photo restoration as a new triplet domain translation problem and adopt several new techniques to minimize the domain gap.

Mixed degradation image restoration.    In the real world, a corrupted image may suffer from complicated defects mixing scratches, loss of resolution, color fading, and film noise. However, research on mixed degradation is much less explored. The pioneering work [yu2018crafting] proposed a toolbox that comprises multiple lightweight networks, each responsible for a specific degradation, and learned a controller that dynamically selects an operator from the toolbox. Inspired by [yu2018crafting], [suganuma2018attention] performs different convolutional operations in parallel and uses an attention mechanism to select the most suitable combination of operations. However, these methods still rely on supervised learning from synthetic data and hence cannot generalize to real photos. Besides, they only focus on unstructured defects and do not support structured defects as image inpainting does. On the other hand, [ulyanov2018deep] found that a deep neural network inherently resonates with low-level image statistics and can thereby be utilized as an image prior for blind image restoration without external training data. This method has the potential, though not claimed in [ulyanov2018deep], to restore in-the-wild images corrupted by mixed factors. In comparison, our approach excels in both restoration performance and efficiency.

Old photo restoration.   Old photo restoration is a classical mixed degradation problem, but most existing methods [stanco2003towards, bruni2004generalized, chang2005photo, giakoumis2005digital] focus on inpainting only. They follow a similar paradigm: defects like scratches and blotches are first identified according to low-level features and then inpainted by borrowing textures from the vicinity. However, the hand-crafted models and low-level features they use make it difficult to detect and fix such defects well. Moreover, none of these methods consider restoring unstructured defects such as color fading or low resolution together with inpainting, so photos still appear old-fashioned after restoration. In this work, we revisit this problem with a data-driven approach that restores images from multiple defects simultaneously and turns heavily damaged old photos into modern style.

3 Method

In contrast to conventional image restoration tasks, old photo restoration is more challenging. First, old photos contain far more complex degradation that is hard to model realistically, and there always exists a domain gap between synthetic and real photos; as such, a network trained purely on synthetic data usually cannot generalize well to real photos. Second, the defects of old photos are a compound of multiple degradations, essentially requiring different strategies for restoration: unstructured defects such as film noise, blurriness, and color fading can be restored with spatially homogeneous filters that make use of surrounding pixels within a local patch, whereas structured defects such as scratches and blotches should be inpainted by considering the global context to ensure structural consistency. In the following, we propose solutions to address the aforementioned generalization issue and mixed degradation issue respectively.

3.1 Restoration via latent space translation

In order to mitigate the domain gap, we formulate the old photo restoration as an image translation problem, where we treat clean images and old photos as images from distinct domains and we wish to learn the mapping in between. However, as opposed to general image translation methods that bridge two different domains [isola2017image, CycleGAN], we translate images across three domains: the real photo domain $\mathcal{R}$, the synthetic domain $\mathcal{X}$ where images suffer from artificial degradation, and the corresponding ground truth domain $\mathcal{Y}$ that comprises images without degradation. Such triplet domain translation is crucial in our task as it leverages the unlabeled real photos as well as a large amount of synthetic data associated with ground truth.


Figure 2: Illustration of our translation method with three domains.

We denote images from the three domains with $r\in\mathcal{R}$, $x\in\mathcal{X}$, and $y\in\mathcal{Y}$ respectively, where $x$ and $y$ are paired by data synthesizing, i.e., $x$ is degraded from $y$. Directly learning the mapping from real photos $\{r\}$ to clean images $\{y\}$ is hard since they are not paired and thus unsuitable for supervised learning. We thereby propose to decompose the translation into two stages, which are illustrated in Figure 2. First, we map $\mathcal{R}$, $\mathcal{X}$, $\mathcal{Y}$ to corresponding latent spaces via $E_\mathcal{R}: \mathcal{R}\mapsto\mathcal{Z}_\mathcal{R}$, $E_\mathcal{X}: \mathcal{X}\mapsto\mathcal{Z}_\mathcal{X}$, and $E_\mathcal{Y}: \mathcal{Y}\mapsto\mathcal{Z}_\mathcal{Y}$, respectively. In particular, because synthetic images and real old photos are both corrupted and share similar appearances, we align their latent spaces into one shared domain by enforcing some constraints, so that $\mathcal{Z}_\mathcal{R}\approx\mathcal{Z}_\mathcal{X}$. This aligned latent space encodes features for all corrupted images, either synthetic or real. Then we learn the image restoration in the latent space. Specifically, utilizing the synthetic data pairs $\{x,y\}$, we learn the translation from the latent space of corrupted images, $\mathcal{Z}_\mathcal{X}$, to the latent space of ground truth, $\mathcal{Z}_\mathcal{Y}$, through the mapping $\mathcal{T}_\mathcal{Z}: \mathcal{Z}_\mathcal{X}\mapsto\mathcal{Z}_\mathcal{Y}$, where $\mathcal{Z}_\mathcal{Y}$ can be further reversed to $\mathcal{Y}$ through the generator $G_\mathcal{Y}: \mathcal{Z}_\mathcal{Y}\mapsto\mathcal{Y}$. By learning this latent space translation, real old photos $r$ can be restored by sequentially performing the mappings,

$$ r_{\mathcal{R}\to\mathcal{Y}} = G_\mathcal{Y} \circ \mathcal{T}_\mathcal{Z} \circ E_\mathcal{R}(r). \tag{1} $$
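For concreteness, the following is a minimal PyTorch sketch of this composed inference path. The module names `enc_rx`, `mapping`, and `gen_y` are hypothetical stand-ins for the trained encoder of VAE$_1$, the latent mapping network $\mathcal{T}_\mathcal{Z}$, and the generator of VAE$_2$; their architectures are described below.

```python
import torch
import torch.nn as nn

class LatentRestorer(nn.Module):
    """Composes r -> G_Y(T_Z(E_R(r))) as in Equation (1).

    `enc_rx`, `mapping`, and `gen_y` are placeholders for the trained
    encoder of VAE_1, the latent mapping network T_Z, and the generator
    of VAE_2; their architectures are not specified in this sketch.
    """
    def __init__(self, enc_rx: nn.Module, mapping: nn.Module, gen_y: nn.Module):
        super().__init__()
        self.enc_rx, self.mapping, self.gen_y = enc_rx, mapping, gen_y

    @torch.no_grad()
    def forward(self, r: torch.Tensor) -> torch.Tensor:
        z_r = self.enc_rx(r)      # encode the old photo into Z_R (aligned with Z_X)
        z_y = self.mapping(z_r)   # translate to the clean latent space Z_Y
        return self.gen_y(z_y)    # decode the restored image
```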

Figure 3: Architecture of our restoration network. (I.) We first train two VAEs: VAE$_1$ for images in real photos $r\in\mathcal{R}$ and synthetic images $x\in\mathcal{X}$, with their domain gap closed by jointly training an adversarial discriminator; VAE$_2$ is trained for clean images $y\in\mathcal{Y}$. With the VAEs, images are transformed to a compact latent space. (II.) Then, we learn the mapping that restores the corrupted images to clean ones in the latent space.

Domain alignment in the VAE latent space   One key of our method is to meet the assumption that $\mathcal{R}$ and $\mathcal{X}$ are encoded into the same latent space. To this end, we propose to utilize a variational autoencoder [kingma2013auto] (VAE) to encode images with a compact representation, whose domain gap is further examined by an adversarial discriminator [GAN]. We use the network architecture shown in Figure 3 to realize this concept.

In the first stage, two VAEs are learned for the latent representation. Old photos $\{r\}$ and synthetic images $\{x\}$ share the first one, termed VAE$_1$, with the encoder $E_{\mathcal{R},\mathcal{X}}$ and generator $G_{\mathcal{R},\mathcal{X}}$, while the ground truth images $\{y\}$ are fed into the second one, VAE$_2$, with the encoder-generator pair $\{E_\mathcal{Y}, G_\mathcal{Y}\}$. VAE$_1$ is shared for both $r$ and $x$ so that images from both corrupted domains can be mapped into a shared latent space. The VAEs assume a Gaussian prior for the distribution of latent codes, so that images can be reconstructed by sampling from the latent space. We use the reparameterization trick to enable differentiable stochastic sampling [KingmaW13] and optimize VAE$_1$ with the data $\{r\}$ and $\{x\}$ respectively. The objective with $\{r\}$ is defined as:

$$ \mathcal{L}_{\mathrm{VAE}_1}(r) = \mathrm{KL}\big(E_{\mathcal{R},\mathcal{X}}(z_r \mid r) \,\|\, \mathcal{N}(0, I)\big) + \alpha\, \mathbb{E}_{z_r \sim E_{\mathcal{R},\mathcal{X}}(z_r \mid r)} \big[ \| G_{\mathcal{R},\mathcal{X}}(r_{\mathcal{R}\to\mathcal{R}} \mid z_r) - r \|_1 \big] + \mathcal{L}_{\mathrm{VAE}_1,\mathrm{GAN}}(r) \tag{2} $$

where $z_r\in\mathcal{Z}_\mathcal{R}$ is the latent code for $r$, and $r_{\mathcal{R}\to\mathcal{R}}$ is the generation output. The first term is the KL-divergence that penalizes deviation of the latent distribution from the Gaussian prior. The second term lets the VAE reconstruct the inputs, implicitly enforcing latent codes to capture the major information of images. Besides, we introduce the least-squares GAN loss (LSGAN) [mao2017least], denoted as $\mathcal{L}_{\mathrm{VAE}_1,\mathrm{GAN}}$ in the formula, to address the well-known over-smoothing issue of VAEs and further encourage VAE$_1$ to reconstruct images with high realism. The objective with $\{x\}$, denoted as $\mathcal{L}_{\mathrm{VAE}_1}(x)$, is defined similarly. VAE$_2$ for domain $\mathcal{Y}$ is trained with a similar loss so that the corresponding latent representation $z_y\in\mathcal{Z}_\mathcal{Y}$ can be derived.
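As an illustration, here is a minimal sketch of the non-adversarial part of this objective in PyTorch, assuming the encoder outputs a diagonal Gaussian ($\mu$, $\log\sigma^2$); the helper names are ours, and the default weight `alpha` mirrors the setting stated in Section 4.1.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # differentiable sampling: z = mu + sigma * eps, eps ~ N(0, I)
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def vae1_loss_no_gan(mu, logvar, recon, target, alpha=10.0):
    """KL + weighted L1 reconstruction terms of Equation (2); the LSGAN
    term is computed separately. `mu`/`logvar` have shape (B, D) and
    parameterize E_{R,X}(z|r); `recon` is the decoded G_{R,X}(z_r)."""
    # closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    rec = F.l1_loss(recon, target)
    return kl + alpha * rec
```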

We use a VAE rather than a vanilla autoencoder because a VAE features a denser latent representation due to the KL regularization (as shown in the ablation study), which helps produce a closer latent space for $\{r\}$ and $\{x\}$ within VAE$_1$ and thus a smaller domain gap. To further narrow the domain gap in this reduced space, we propose to use an adversarial network to examine the residual latent gap. Concretely, we train another discriminator $D_{\mathcal{R},\mathcal{X}}$ that differentiates $\mathcal{Z}_\mathcal{R}$ and $\mathcal{Z}_\mathcal{X}$, whose loss is defined as

$$ \mathcal{L}^{\mathrm{latent}}_{\mathrm{VAE}_1,\mathrm{GAN}}(r, x) = \mathbb{E}_{x\sim\mathcal{X}}\big[ D_{\mathcal{R},\mathcal{X}}(E_{\mathcal{R},\mathcal{X}}(x))^2 \big] + \mathbb{E}_{r\sim\mathcal{R}}\big[ \big(1 - D_{\mathcal{R},\mathcal{X}}(E_{\mathcal{R},\mathcal{X}}(r))\big)^2 \big]. \tag{3} $$

Meanwhile, the encoder $E_{\mathcal{R},\mathcal{X}}$ of VAE$_1$ tries to fool the discriminator with a contradictory loss to ensure that $\mathcal{R}$ and $\mathcal{X}$ are mapped to the same space. Combined with the latent adversarial loss, the total objective function for VAE$_1$ becomes

$$ \min_{E_{\mathcal{R},\mathcal{X}},\, G_{\mathcal{R},\mathcal{X}}} \max_{D_{\mathcal{R},\mathcal{X}}} \; \mathcal{L}_{\mathrm{VAE}_1}(r) + \mathcal{L}_{\mathrm{VAE}_1}(x) + \mathcal{L}^{\mathrm{latent}}_{\mathrm{VAE}_1,\mathrm{GAN}}(r, x). \tag{4} $$
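This latent adversarial game can be sketched as standard LSGAN objectives on the latent codes; which side carries label 1 is an arbitrary convention of ours, and `d` stands for the latent discriminator $D_{\mathcal{R},\mathcal{X}}$.

```python
import torch

def latent_d_loss(d, z_x, z_r):
    """LSGAN loss for the latent discriminator: synthetic codes are
    pushed toward label 1 and real-photo codes toward 0 (the label
    convention here is an assumption, not the paper's)."""
    return (d(z_x) - 1).pow(2).mean() + d(z_r).pow(2).mean()

def latent_e_loss(d, z_r):
    """Contradictory term for the encoder E_{R,X}: make real-photo
    codes indistinguishable from synthetic ones."""
    return (d(z_r) - 1).pow(2).mean()
```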
Restoration through latent mapping   With the latent codes captured by the VAEs, in the second stage we leverage the synthetic image pairs $\{x, y\}$ and learn the image restoration by mapping between their latent spaces (the mapping network $\mathcal{T}$ in Figure 3). The benefit of latent restoration is threefold. First, as $\mathcal{Z}_\mathcal{R}$ and $\mathcal{Z}_\mathcal{X}$ are aligned into the same latent space, the mapping from $\mathcal{Z}_\mathcal{X}$ to $\mathcal{Z}_\mathcal{Y}$ also generalizes well to restoring the images in $\mathcal{R}$. Second, the mapping in a compact low-dimensional latent space is in principle much easier to learn than in the high-dimensional image space. In addition, since the two VAEs are trained independently, the reconstructions of the two streams do not interfere with each other: the generator $G_\mathcal{Y}$ can always produce a clean image without degradation given the latent code mapped from $\mathcal{Z}_\mathcal{X}$, whereas degradations would likely remain if we learned the translation at the pixel level.

Let $r_{\mathcal{R}\to\mathcal{Y}}$, $x_{\mathcal{X}\to\mathcal{Y}}$, and $y_{\mathcal{Y}\to\mathcal{Y}}$ be the final translation outputs for $r$, $x$, and $y$, respectively. At this stage, we solely train the parameters of the latent mapping network $\mathcal{T}$ and fix the two VAEs. The loss function $\mathcal{L}_\mathcal{T}(x, y)$, which is imposed at both the latent space and the end of the generator $G_\mathcal{Y}$, consists of three terms,

$$ \mathcal{L}_\mathcal{T}(x, y) = \lambda_1 \mathcal{L}_{\mathcal{T},\ell_1} + \mathcal{L}_{\mathcal{T},\mathrm{GAN}} + \lambda_2 \mathcal{L}_{\mathrm{FM}} \tag{5} $$

where the latent space loss $\mathcal{L}_{\mathcal{T},\ell_1} = \mathbb{E}\,\|\mathcal{T}(z_x) - z_y\|_1$ penalizes the $\ell_1$ distance of the corresponding latent codes. We introduce the adversarial loss $\mathcal{L}_{\mathcal{T},\mathrm{GAN}}$, still in the form of LSGAN [mao2017least], to encourage the ultimate translated synthetic image $x_{\mathcal{X}\to\mathcal{Y}}$ to look real. Besides, we introduce a feature matching loss $\mathcal{L}_{\mathrm{FM}}$ to stabilize the GAN training. Specifically, $\mathcal{L}_{\mathrm{FM}}$ matches the multi-level activations of the adversarial network $D_\mathcal{T}$ and those of the pretrained VGG network (also known as perceptual loss [isola2017image, johnson2016perceptual]), i.e.,

$$ \mathcal{L}_{\mathrm{FM}} = \mathbb{E}\Big[ \sum_i \frac{1}{n^i_{D_\mathcal{T}}} \big\| \phi^i_{D_\mathcal{T}}(x_{\mathcal{X}\to\mathcal{Y}}) - \phi^i_{D_\mathcal{T}}(y) \big\|_1 + \sum_i \frac{1}{n^i_{\mathrm{VGG}}} \big\| \phi^i_{\mathrm{VGG}}(x_{\mathcal{X}\to\mathcal{Y}}) - \phi^i_{\mathrm{VGG}}(y) \big\|_1 \Big] \tag{6} $$

where $\phi^i_{D_\mathcal{T}}$ ($\phi^i_{\mathrm{VGG}}$) denotes the $i$-th layer feature map of the discriminator (VGG network), and $n^i_{D_\mathcal{T}}$ ($n^i_{\mathrm{VGG}}$) indicates the number of activations in that layer.
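A compact sketch of the feature matching term, for illustration: given lists of multi-level activations from the discriminator (or VGG), it accumulates the per-layer $\ell_1$ distances of Equation (6), relying on the mean reduction of `l1_loss` for the $1/n^i$ normalization.

```python
import torch.nn.functional as F

def feature_matching(feats_fake, feats_real):
    """Sketch of L_FM for one network: `feats_*` are lists of feature
    maps of D_T (or VGG) for x_{X->Y} and y. The mean reduction of
    l1_loss averages over all elements of each layer."""
    loss = 0.0
    for pf, rf in zip(feats_fake, feats_real):
        loss = loss + F.l1_loss(pf, rf)
    return loss
```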

3.2 Multiple degradation restoration

The latent restoration using residual blocks, as described earlier, only concentrates on local features due to the limited receptive field of each layer. Nonetheless, the restoration of structured defects requires plausible inpainting, which has to consider long-range dependencies so as to ensure global structural consistency. Since legacy photos often contain mixed degradations, we have to design a restoration network that simultaneously supports the two mechanisms. Towards this goal, we propose to enhance the latent restoration network by incorporating a global branch as shown in Figure 3, which is composed of a nonlocal block [wang2018non] that considers the global context, followed by several residual blocks. While the original block proposed in [wang2018non] is unaware of the corruption area, our nonlocal block explicitly utilizes the mask input so that pixels in the corrupted region are not adopted for completing those areas. Since the context considered is only a part of the feature map, we refer to this module, specifically designed for latent inpainting, as the partial nonlocal block.

Formally, let $F\in\mathbb{R}^{C\times HW}$ be the intermediate feature map in $\mathcal{T}$ ($C$, $H$, and $W$ are the number of channels, height, and width, respectively), and let $m\in\{0,1\}^{HW}$ be the binary mask downscaled to the same size, where 1 represents the defect regions to be inpainted and 0 represents the intact regions. The affinity between the $i$-th location and the $j$-th location in $F$, denoted by $s_{i,j}\in\mathbb{R}^{HW\times HW}$, is calculated by the correlation of $F_i$ and $F_j$ modulated by the mask, i.e.,

$$ s_{i,j} = \frac{(1 - m_j)\, f_{i,j}}{\sum_{\forall k} (1 - m_k)\, f_{i,k}}, \qquad f_{i,j} = e^{\theta(F_i)^{\mathsf{T}} \phi(F_j)} \tag{7} $$

where $f_{i,j}$ gives the pairwise affinity with the embedded Gaussian, and $\theta$ and $\phi$ project $F$ to the Gaussian space for affinity calculation. According to the affinity that accounts for the holes in the mask, the partial nonlocal block finally outputs

$$ O_i = \nu\Big( \sum_{\forall j} s_{i,j}\, \mu(F_j) \Big) \tag{8} $$

which is a weighted average of correlated features for each position. We implement the embedding functions $\theta$, $\phi$, $\mu$, and $\nu$ with $1\times 1$ convolutions.
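A minimal PyTorch sketch of the partial nonlocal block is given below; masking after a softmax and renormalizing is mathematically equivalent to the masked affinity of Equation (7), and the layer and variable names are our own.

```python
import torch
import torch.nn as nn

class PartialNonLocal(nn.Module):
    """Mask-aware nonlocal block: affinities toward defect positions are
    zeroed before normalization, so holes are completed only from intact
    regions (equivalent to Equation (7) with f = exp(theta^T phi))."""
    def __init__(self, channels, inter_channels=None):
        super().__init__()
        inter = inter_channels or channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)  # embedding theta
        self.phi = nn.Conv2d(channels, inter, 1)    # embedding phi
        self.mu = nn.Conv2d(channels, inter, 1)     # value embedding mu
        self.nu = nn.Conv2d(inter, channels, 1)     # output projection nu

    def forward(self, x, mask):
        # x: (B, C, H, W); mask: (B, 1, H, W) with 1 = defect, 0 = intact
        b, _, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.mu(x).flatten(2).transpose(1, 2)     # (B, HW, C')
        attn = torch.bmm(q, k).softmax(dim=-1)        # embedded-Gaussian affinity
        keep = (1.0 - mask).flatten(2)                # (B, 1, HW): 1 on intact pixels
        attn = attn * keep                            # drop defect positions
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        out = torch.bmm(attn, v).transpose(1, 2).reshape(b, -1, h, w)
        return self.nu(out)
```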

We design the global branch specifically for inpainting and expect the non-hole regions to be left untouched, so we fuse the global branch with the local branch under the guidance of the mask, i.e.,

$$ F_{\mathrm{fuse}} = (1 - m) \odot \rho_{\mathrm{local}}(F) + m \odot \rho_{\mathrm{global}}(F) \tag{9} $$

where the operator $\odot$ denotes the Hadamard product, and $\rho_{\mathrm{local}}$ and $\rho_{\mathrm{global}}$ denote the nonlinear transformations of the residual blocks in the two branches. In this way, the two branches constitute the latent restoration network, which is capable of dealing with multiple degradations in old photos. We will detail the derivation of the defect mask in Section 4.1.
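The fusion of Equation (9) then amounts to a single masked blend; in this sketch `local_out` and `global_out` stand for $\rho_{\mathrm{local}}(F)$ and $\rho_{\mathrm{global}}(F)$.

```python
def fuse(local_out, global_out, mask):
    """Mask-guided fusion: the global (inpainting) branch fills the
    holes (mask = 1); the local branch handles the intact regions."""
    return (1.0 - mask) * local_out + mask * global_out
```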

4 Experiment

4.1 Implementation

Training Dataset   We synthesize old photos using images from the Pascal VOC dataset [everingham2015pascal]. In order to render realistic defects, we also collect scratch and paper textures, which are further augmented with elastic distortions. We use layer addition, lighten-only, and screen modes with random levels of opacity to blend the scratch textures over the real images from the dataset. To simulate large-area photo damage, we generate holes of random shape with feathering, where the underlying paper texture is unveiled. Finally, film grain noise and blurring of random amounts are introduced to simulate the unstructured defects. Besides, we collect 5,718 old photos to form the real old photo dataset.
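For illustration, the standard compositing formulas behind these blend modes are sketched below, assuming float images in $[0, 1]$; the function name and opacity handling are ours, not the exact synthesis pipeline.

```python
import numpy as np

def blend_scratch(base, scratch, mode="screen", opacity=0.8):
    """Blend a scratch texture over a clean image (both float arrays
    in [0, 1]) using standard compositing formulas."""
    if mode == "screen":        # brightens: 1 - (1 - a)(1 - b)
        out = 1.0 - (1.0 - base) * (1.0 - scratch)
    elif mode == "lighten":     # per-pixel maximum
        out = np.maximum(base, scratch)
    else:                       # plain layer addition
        out = np.clip(base + scratch, 0.0, 1.0)
    return (1.0 - opacity) * base + opacity * out
```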

Figure 4: ROC curve for scratch detection of different data settings.

Scratch detection   To detect the structured defect areas for the partial nonlocal block, we train another network with a U-Net architecture [ronneberger2015u]. The detection network is first trained using the synthetic images only, adopting the focal loss [lin2017focal] to remedy the imbalance of positive and negative detections. To further improve detection performance on real old photos, we annotate the scratches in 783 collected old photos, 400 of which are used to finetune the detection network. The ROC curves on the validation set in Figure 4 show the effectiveness of finetuning: the area under the curve (AUC) reaches 0.91 after finetuning.
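A common form of the binary focal loss is sketched below; the values of `alpha` and `gamma` are the defaults of [lin2017focal], not necessarily the ones used in our experiments.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss for the scratch mask: down-weights the
    abundant easy negatives relative to the rare scratch pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                               # prob. of the true class
    a_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (a_t * (1.0 - p_t) ** gamma * bce).mean()
```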

Training details   We adopt the Adam solver [kingma2014adam] with $\beta_1 = 0.5$ and $\beta_2 = 0.999$. The learning rate is set to 0.0002 for the first 100 epochs, with linear decay to zero thereafter. During training, we randomly crop images to 256×256. In all experiments, we empirically set the parameters in Equations (2) and (5) to $\alpha = 10$, $\lambda_1 = 60$, and $\lambda_2 = 10$, respectively.
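The optimizer and learning-rate schedule can be sketched as follows; the total number of epochs (here 200) is an assumption, as only the 100 constant-rate epochs and the subsequent linear decay to zero are stated above.

```python
import torch

def make_optimizer(params, total_epochs=200, decay_start=100):
    """Adam with the stated settings; lr constant for `decay_start`
    epochs, then linearly decayed to zero by `total_epochs`."""
    opt = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))

    def lr_lambda(epoch):
        # multiplicative factor relative to the initial learning rate
        if epoch < decay_start:
            return 1.0
        return max(0.0, 1.0 - (epoch - decay_start) / (total_epochs - decay_start))

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```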

4.2 Comparisons

Baselines   We compare our method against state-of-the-art approaches. For a fair comparison, we train all methods on the same training dataset (Pascal VOC) and test them on the corrupted images synthesized from the DIV2K dataset [Agustsson_2017_CVPR_Workshops] and on the test set of our old photo dataset. The following methods are included for comparison.


  • Operation-wise attention [suganuma2018attention] performs multiple operations in parallel and uses an attention mechanism to select the proper branch for mixed degradation restoration. It learns from synthetic image pairs with supervised learning.

  • Deep image prior [ulyanov2018deep] learns the image restoration given a single degraded image, and has been proven powerful in denoising, super-resolution and blind inpainting.

  • Pix2Pix [pix2pixhd] is a supervised image translation method that leverages synthetic image pairs to learn the translation at the image level.

  • CycleGAN [CycleGAN] is a well-known unsupervised image translation method that learns the translation using unpaired images from distinct domains.

  • The last baseline is to sequentially perform BM3D [dabov2009bm3d], a classical denoising method, and EdgeConnect [nazeri2019edgeconnect], a state-of-the-art inpainting method, to restore the unstructured and structured defects respectively.

Figure 5: Qualitative comparison against state-of-the-art methods. It shows that our method can restore both unstructured and structured degradation and our recovered results are significantly better than other methods.
Method PSNR↑ SSIM↑ LPIPS↓ FID↓
Input 12.92 0.49 0.59 306.80
Attention [suganuma2018attention] 24.12 0.70 0.33 208.11
DIP [ulyanov2018deep] 22.59 0.57 0.54 194.55
Pix2pix [pix2pixhd] 22.18 0.62 0.23 135.14
Sequential [dabov2009bm3d, nazeri2019edgeconnect] 22.71 0.60 0.49 191.98
Ours w/o PN 23.14 0.68 0.26 143.62
Ours w/ PN 23.33 0.69 0.25 134.35
Table 1: Quantitative results on the DIV2K dataset. Upward (downward) arrows indicate that a higher (lower) score denotes better image quality. We highlight the best two scores for each measure. In the table, PN stands for the partial nonlocal block.

Quantitative comparison   We test the different models on the synthetic images from the DIV2K dataset and adopt four metrics for comparison. Table 1 gives the quantitative results. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are used to compare the low-level differences between the restored output and the ground truth. The operation-wise attention method unsurprisingly achieves the best PSNR/SSIM scores since it directly optimizes the pixel-level reconstruction loss. Our method ranks second best in terms of PSNR/SSIM. However, these two metrics, which characterize low-level discrepancy, usually do not correlate well with human judgment, especially for complex unknown distortions [zhang2018perceptual]. Therefore, we also adopt the learned perceptual image patch similarity (LPIPS) [zhang2018perceptual] metric, which calculates the distance between multi-level activations of a pretrained network and is deemed to correlate better with human perception. This time, Pix2pix and our method give the best scores with a negligible difference. The operation-wise attention method, however, shows inferior performance under this metric, demonstrating that it does not yield good perceptual quality. Besides, we adopt the Fréchet Inception Distance (FID) [FID], which is widely used for evaluating the quality of generative models. Specifically, the FID score calculates the distance between the feature distributions of the final outputs and the real images. Again, our method and Pix2pix rank best, with our method showing a slight quantitative advantage. In all, our method is comparable to the leading methods on synthetic data.

Qualitative comparison   To prove the generalization to real old photos, we conduct experiments on the real photo dataset. For a fair comparison, we retrain CycleGAN to translate real photos to clean images. Since we lack restoration ground truth for real photos, we cannot apply reference-based metrics for evaluation. Therefore, we compare the results qualitatively, as shown in Figure 5. The DIP method can restore mixed degradations to some extent; however, there is a tradeoff between defect restoration and structural preservation: more defects emerge after longer training, while fewer iterations induce the loss of fine structures. CycleGAN, learned from unpaired images, tends to focus on restoring the unstructured defects and fails to restore all the scratch regions. Both the operation-wise attention method and the sequential operations give comparable visual quality. However, they cannot amend defects that are not covered in the synthetic data, such as the sepia effect and color fading. Besides, the structured defects remain problematic, possibly because these methods cannot handle old photo textures that are subtly different from the synthetic dataset. Pix2pix, which is comparable to our approach on synthetic images, is nonetheless visually inferior to our method: some film noise and structured defects remain in its final output. This is due to the domain gap between synthetic images and real photos, which makes the method fail to generalize. In comparison, our method gives clean, sharp images with the scratches plausibly filled in with unnoticeable artifacts. Besides successfully addressing the artifacts considered in data synthesis, our method can also enhance the photo color appropriately. In general, our method gives the most visually pleasant results, and the photos after restoration appear like modern photographic images.

Figure 6: Ablation study for two-stage VAE translation.
Method Top 1 Top 2 Top 3 Top 4 Top 5
DIP [ulyanov2018deep] 2.75 6.99 12.92 32.63 69.70
CycleGAN [CycleGAN] 3.39 8.26 15.68 24.79 52.12
Sequential [dabov2009bm3d, nazeri2019edgeconnect] 3.60 20.97 51.48 83.47 93.64
Attention [suganuma2018attention] 11.22 28.18 56.99 75.85 89.19
Pix2Pix [pix2pixhd] 14.19 54.24 72.25 86.86 96.61
Ours 64.83 81.35 90.68 96.40 98.72
Table 2: User study results. The percentage (%) of user selection is shown.

User study   To better assess the subjective quality, we conduct a user study comparing against the other methods. We randomly select 25 old photos from the test set and ask users to sort the results according to the restoration quality. We collect subjective opinions from 22 users, with the results shown in Table 2. Our method is chosen as the top-ranked result 64.83% of the time, showing a clear advantage of our approach.

4.3 Ablation Study

In order to prove the effectiveness of individual technical contributions, we perform the following ablation study.

Latent translation with VAEs   Let us consider the following variants, with the proposed components added one by one: 1) Pix2Pix, which learns the translation at the image level; 2) two VAEs, with an additional KL loss to penalize the latent space; 3) VAEs with two-stage training (VAEs-TS): the two VAEs are first trained separately and the latent mapping is learned thereafter with the two VAEs (not fixed); 4) our full model, which also adopts the latent adversarial loss. We first calculate the Wasserstein distance [arjovsky2017wasserstein] between the latent spaces of old photos and synthetic images. Table 3 shows that the distribution distance gradually reduces as each component is added: the VAEs yield a more compact latent space, the two-stage training isolates the two VAEs, and the latent adversarial loss further closes the domain gap. A smaller domain gap improves the model's generalization to real photo restoration. To verify this, we adopt a blind image quality assessment metric, BRISQUE [mittal2012no], to measure photo quality after restoration. The BRISQUE score in Table 3 progressively improves as the techniques are applied, which is also consistent with the visual results in Figure 6.
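Since the latent codes are high-dimensional, the Wasserstein distance must be estimated from samples; a simple sliced (random-projection) estimator is sketched below as one plausible choice, as the estimator itself is not specified above.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(z_a, z_b, n_proj=128, seed=0):
    """Estimate the distance between two sets of latent codes (rows are
    samples) by averaging 1-D Wasserstein distances over random
    unit-norm projections."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=z_a.shape[1])
        v /= np.linalg.norm(v)
        total += wasserstein_distance(z_a @ v, z_b @ v)
    return total / n_proj
```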

Method Pix2Pix VAEs VAEs-TS full model
Wasserstein↓ 1.837 1.048 0.765 0.581
BRISQUE↓ 25.549 23.949 23.396 23.016
Table 3: Ablation study of latent translation with VAEs. Lower is better for both metrics.
Figure 7: Ablation study of partial nonlocal block. Partial nonlocal better inpaints the structured defects.
Figure 8: Ablation study of partial nonlocal block. Partial nonlocal does not touch the non-hole regions.

Partial nonlocal block   The effect of the partial nonlocal block is shown in Figures 7 and 8. Because a large image context is utilized, the scratches can be inpainted with fewer visual artifacts, achieving a more globally consistent restoration. Besides, the quantitative results in Table 1 show that the partial nonlocal block consistently improves restoration performance on all metrics.

Figure 9: Limitation. Our method cannot handle complex shading artifacts.

5 Discussion and Conclusion

We propose a novel triplet domain translation network to restore the mixed degradation in old photos. The domain gap between old photos and synthetic images is reduced, and the translation to clean images is learned in latent space. Our method suffers less from the generalization issue compared with prior methods. Furthermore, we propose a partial nonlocal block that restores the latent features by leveraging the global context, so scratches can be inpainted with better structural consistency. Our method demonstrates good performance in restoring severely degraded old photos. However, it cannot handle complex shading, as shown in Figure 9, because our dataset contains few old photos with such defects. One could possibly address this limitation within our framework by explicitly considering shading effects during synthesis or by adding more such photos as training data.

Acknowledgements: We would like to thank Xiaokun Xie for his help and the anonymous reviewers for their constructive comments. This work was partly supported by Hong Kong ECS grant No. 21209119, Hong Kong UGC.