The Contextual Loss
Maintaining natural image statistics is a crucial factor in the restoration and generation of realistic-looking images. When training CNNs, photorealism is usually attempted via adversarial training (GANs), which pushes the output images to lie on the manifold of natural images. GANs are very powerful, but not perfect. They are hard to train and the results still often suffer from artifacts. In this paper we propose a complementary approach, whose goal is to train a feed-forward CNN to maintain natural internal statistics. We look explicitly at the distribution of features in an image and train the network to generate images with natural feature distributions. Our approach reduces by orders of magnitude the number of images required for training and achieves state-of-the-art results on both single-image super-resolution and high-resolution surface normal estimation.
“Facts are stubborn things, but statistics are pliable.” ― Mark Twain
Maintaining natural image statistics has been known for years as a key factor in the generation of natural-looking images [1, 2, 3, 4, 5]. With the rise of CNNs, the utilization of explicit image priors was replaced by Generative Adversarial Networks (GANs) [6], where a corpus of images is used to train a network to generate images with natural characteristics, e.g., [7, 8, 9, 10]. Despite the use of GANs within many different pipelines, results still sometimes suffer from artifacts, and balanced training is difficult to achieve and depends on the architecture [8, 11].
Well before the CNN era, natural image statistics were obtained by utilizing priors on the likelihood of the patches of the generated images [2, 5, 12]. Zoran and Weiss [5] showed that such statistical approaches, which harness priors on patches, lead in general to restoration of more natural-looking images. A similar concept is at the heart of sparse coding, where a dictionary of visual code-words is used to constrain the generated patches [13, 14]. The dictionary can be thought of as a prior on the space of plausible image patches. A related approach is to constrain the generated image patches to the space of patches specific to the image to be restored [15, 16, 17, 18]. In this paper we want to build on these ideas in order to answer the following question: Can we train a CNN to generate images that exhibit natural statistics?
The approach we propose is a simple modification to the common practice. As typically done, we as well train CNNs on pairs of source-target images by minimizing an objective that measures the similarity between the generated image and the target. We extend on the common practice by proposing to use an objective that compares the feature distributions rather than just comparing the appearance. We show that by doing this the network learns to generate more natural looking images.
A key question is what makes a suitable objective for comparing distributions of features. The common divergence measures, such as the Kullback-Leibler (KL) divergence, the Earth Mover's Distance (EMD), and the $\chi^2$ divergence, all require estimating the distribution of features, which is typically done via Multivariate Kernel Density Estimation (MKDE). Since in our case the features are very high dimensional, and since we want a differentiable loss that can be computed efficiently, MKDE is not an option. Hence, we propose instead an approximation to KL that is both simple to compute and tractable. As it turns out, our approximation coincides with the recently proposed Contextual loss [19], which was designed for comparing images that are not spatially aligned. Since the Contextual loss is actually an approximation to KL, it could be useful as an objective also in applications where the images are aligned and the goal is to generate natural-looking images.

Since all we propose is utilizing the Contextual loss during training, our approach is generic and can be adopted within many architectures and pipelines. In particular, it can be used in concert with GAN training. Hence, we chose super-resolution as a first test-case, where methods based on GANs are the current state-of-the-art. We show empirically that using our statistical approach with GAN training outperforms previous methods, yielding images that are perceptually more realistic, while reducing the number of required training images by orders of magnitude.
Our second test-case further proves the generality of this approach to data that is not images, and shows the strength of the statistical approach without GAN. Maintaining natural internal statistics has been shown to be an important property also in the estimation of 3D surfaces [15, 20, 21]. Hence, we present experiments on surface normal estimation, where the network’s input is a high-resolution image and its output is a map of normals. We successfully generate normal-maps that are more accurate than previous methods.
To summarize, the contributions we present are three-fold:
1. We show that the Contextual loss [19] can be viewed as an approximation to the KL divergence. This makes it suitable for reconstruction problems.
2. We show that training with a statistical loss yields networks that maintain the natural internal statistics of images. This approach is easy to train and significantly reduces the required training set size.
3. We present state-of-the-art results on both perceptual super-resolution and high-resolution surface normal estimation.
Our approach is very simple, as depicted in Figure 2. To train a generator network $G$ we use pairs of source and target images $\{s, y\}$. The network outputs an image $x = G(s)$ that should be as similar as possible to the target $y$. To measure the similarity we extract from both $x$ and $y$ dense features, $X = \{x_i\}$ and $Y = \{y_j\}$ respectively, e.g., by using a pretrained network or vectorized RGB patches. We denote by $P$ and $Q$ the probability distribution functions over $X$ and $Y$, respectively. When $X$ correctly models $Y$, the underlying probabilities $P$ and $Q$ are equal. Hence, we need to train $G$ by minimizing an objective that measures the divergence between $P$ and $Q$.

There are many common measures for the divergence between two distributions, e.g., $\chi^2$, Kullback-Leibler (KL), and Earth Mover's Distance (EMD). We opted for the KL-divergence in this paper.
The KL-divergence between the two densities $p$ and $q$ (of $P$ and $Q$, respectively) is defined as:

$$\mathrm{KL}(P\,\|\,Q) = \int p(z)\,\log\frac{p(z)}{q(z)}\,dz \qquad (1)$$
It computes the expectation of the logarithmic difference between the probabilities $p$ and $q$, where the expectation is taken using the probabilities $p$. We can rewrite this formula in terms of expectation:

$$\mathrm{KL}(P\,\|\,Q) = \mathbb{E}_{z \sim p}\left[\log p(z) - \log q(z)\right] \qquad (2)$$
This requires approximating the parametrized densities $p$ and $q$ from the feature sets $X$ and $Y$. The most common method for doing this is Multivariate Kernel Density Estimation (MKDE), which estimates the probabilities $p(z)$ and $q(z)$ as:

$$\hat{p}(z) = \frac{1}{N}\sum_{i=1}^{N} K_H(z, x_i), \qquad \hat{q}(z) = \frac{1}{M}\sum_{j=1}^{M} K_H(z, y_j) \qquad (3)$$
Here $K_H$ is an affinity measure (the kernel) between some point $z$ and the samples, typically taken as the standard multivariate normal kernel $K_H(z,x) = (2\pi)^{-d/2}|H|^{-1/2}\exp\!\left(-\tfrac{1}{2}(z-x)^T H^{-1}(z-x)\right)$, where $d$ is the dimension and $H$ is a $d \times d$ bandwidth matrix. We can now write the expectation of the KL-divergence over the MKDE with a uniform sample grid $\{z_t\}_{t=1}^{T}$ as follows:

$$\mathrm{KL}(P\,\|\,Q) \approx \sum_{t=1}^{T} \hat{p}(z_t)\left[\log\hat{p}(z_t) - \log\hat{q}(z_t)\right]\Delta z \qquad (4)$$
A common simplification that we adopt is to compute the MKDE not over a regular grid of points $z_t$ but rather on the samples directly, i.e., we set $z_i = x_i$. Since the $x_i$ are themselves samples from $P$, this replaces the probability-weighted grid sum with a uniform empirical average, i.e., $\hat{p}(z_t)\Delta z \rightarrow \tfrac{1}{N}$. Putting this together with Eq. (3) into Eq. (4) yields:

$$\mathrm{KL}(P\,\|\,Q) \approx \frac{1}{N}\sum_{i=1}^{N}\left[\log\frac{1}{N}\sum_{i'} K_H(x_i, x_{i'}) - \log\frac{1}{M}\sum_{j} K_H(x_i, y_j)\right] \qquad (5)$$
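As a concrete sanity check, the sample-based approximation of Eq. (5) can be written in a few lines of NumPy. The isotropic Gaussian kernel, the bandwidth h, and the toy 2D data below are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

def gaussian_kernel(z, samples, h):
    """Isotropic Gaussian MKDE kernel, i.e. bandwidth matrix H = h^2 * I.
    The normalization constant is omitted: it multiplies p-hat and q-hat
    equally and cancels inside the log difference of Eq. (5)."""
    d2 = np.sum((samples - z) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * h ** 2))

def kl_mkde(X, Y, h=0.5):
    """Sample-based KL approximation in the spirit of Eq. (5): evaluate
    the kernel density estimates of P (from X) and Q (from Y) directly
    at the samples x_i."""
    N, M = len(X), len(Y)
    eps = 1e-12  # avoid log(0) when the distributions barely overlap
    total = 0.0
    for x in X:
        p_hat = gaussian_kernel(x, X, h).sum() / N
        q_hat = gaussian_kernel(x, Y, h).sum() / M
        total += np.log(p_hat + eps) - np.log(q_hat + eps)
    return total / N
```

On toy Gaussian clouds this behaves as expected: it is exactly zero when Y = X and grows as the two sample sets drift apart.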
To use Eq. (5) as a loss function we need to choose a kernel $K_H$. The most common choice, a standard multivariate normal kernel, requires setting the bandwidth matrix $H$, which is non-trivial. Multivariate KDE is known to be sensitive to the selection of the bandwidth matrix, and the existing solutions for tuning it require solving an optimization problem, which is not possible as part of a training process. Hence, we next propose an approximation that is both practical and tractable.

Our approximation is based on one further observation. It is insufficient to produce an image with the same distribution of features; rather, we additionally want the samples themselves to be similar. That is, we ask each point $x_i$ to be close to a specific $y_j$.
To achieve this, while assuming that the number of samples is large, we set the MKDE kernel $K_H$ such that it approximates a delta function. When the kernel is a delta, the first log term of Eq. (5) becomes a constant since:

$$\frac{1}{N}\sum_{i'} K_H(x_i, x_{i'}) \approx \frac{1}{N} K_H(x_i, x_i) = \text{const} \qquad (6)$$
The kernel $K_H$ in the second log term of Eq. (5) becomes:

$$K_H(x_i, y_j) \approx A_{ij}, \quad \text{where } A_{ij} \approx 1 \text{ if } d(x_i, y_j) \ll d(x_i, y_k)\ \forall k \neq j, \text{ and } A_{ij} \approx 0 \text{ otherwise} \qquad (7)$$
We can thus simplify the objective of Eq. (5):
$$L(X, Y) = -\log\left(\frac{1}{N}\sum_{i}\max_{j} A_{ij}\right) \qquad (8)$$
where we denote $A_{ij} = K_H(x_i, y_j)$. Next, we suggest two alternatives for the kernel $K_H$, and show that one implies that the objective of Eq. (8) is equivalent to the Chamfer Distance [22], while the other implies it is equivalent to the Contextual loss of [19].
As it turns out, the objective of Eq. (8) is identical to the Contextual loss recently proposed by [19]. Furthermore, they set the kernel to be close to a delta function, such that it fits Eq. (7). First, the Cosine (or L2) distances $d_{ij}$ are computed between all pairs $x_i, y_j$. The distances are then normalized, $\tilde{d}_{ij} = d_{ij} / (\min_k d_{ik} + \epsilon)$ (with a small $\epsilon$), and finally the pairwise affinities are defined as:

$$A_{ij} = \frac{\exp\!\left((1 - \tilde{d}_{ij})/h\right)}{\sum_{k}\exp\!\left((1 - \tilde{d}_{ik})/h\right)} \qquad (9)$$
where $h > 0$ is a scalar bandwidth parameter that we fix to the value proposed in [19]. When using these affinities our objective equals the Contextual loss of [19], and we denote it $L_{CX}$.
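Under these definitions the loss is straightforward to prototype. The sketch below assumes squared-L2 pairwise distances and illustrative values for the bandwidth h and for ε (the paper's exact values did not survive extraction):

```python
import numpy as np

def contextual_loss(X, Y, h=0.5, eps=1e-5):
    """Sketch of Eqs. (8)-(9) for two feature sets X (N, d) and Y (M, d)."""
    # Pairwise squared-L2 distances d_ij between generated features x_i
    # and target features y_j (cosine distances would work similarly).
    d = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    # Normalize each row by its minimum, so the best match gets d-tilde ~ 1.
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)
    # Affinities of Eq. (9): row-normalized soft matches, close to a delta.
    w = np.exp((1.0 - d_tilde) / h)
    A = w / w.sum(axis=1, keepdims=True)
    # Eq. (8): negative log of the mean best-match affinity.
    return float(-np.log(A.max(axis=1).mean()))
```

When every $x_i$ has one clearly nearest $y_j$, the best-match affinities approach 1 and the loss approaches 0; when matches are ambiguous, the loss grows.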
A simpler way to set $K_H$ is to take a Gaussian kernel with a fixed bandwidth $H = \sigma^2 I$, s.t. $K_H(x_i, y_j) \propto \exp\!\left(-\|x_i - y_j\|^2 / 2\sigma^2\right)$. This choice implies that minimizing Eq. (8) is equivalent to minimizing the asymmetric Chamfer Distance [22] between $X$ and $Y$, defined as:

$$\mathrm{CD}(X, Y) = \frac{1}{N}\sum_{i}\min_{j}\|x_i - y_j\|^2 \qquad (10)$$
CD has been previously used mainly for shape retrieval [23, 24], where the points are 3D coordinates. For each point $x_i$ in set $X$, CD finds the nearest point in the set $Y$ and minimizes the sum of these distances. A downside of this choice for the kernel is that it does not satisfy the requirement in Eq. (7).
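For comparison, the Chamfer alternative is even simpler to sketch (squared-L2 distances assumed):

```python
import numpy as np

def chamfer(X, Y):
    """Asymmetric Chamfer distance of Eq. (10): for each x_i, the squared
    L2 distance to its nearest neighbour in Y, averaged over X. Several
    x_i may map to the same y_j, which is exactly the failure mode
    discussed in the text."""
    d = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    return float(d.min(axis=1).mean())
```

A tiny example makes the asymmetry and the many-to-one matching explicit: with X = {(0,0), (1,0)} and Y = {(0,0)}, both points of X match the single point of Y, so CD(X, Y) = 0.5 while CD(Y, X) = 0.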
[Figure 3: affinity values under (a) the Contextual loss [19] and (b) the Chamfer distance [22]]
To provide intuition on the difference between the two choices of affinity functions, we present in Figure 3 an illustration in 2D. Recall that both the Contextual loss and the Chamfer distance find for each point in $X$ a single match in $Y$; however, these matches are computed differently. CD selects the closest point, hence multiple points in $X$ may be matched to the same few points in $Y$. Differently, $L_{CX}$ computes normalized affinities that consider the distances between each point $x_i$ and all the points $y_j$. Therefore, it results in more diverse matches between the two sets of points and provides a better measure of similarity between the two distributions. An additional example is presented in the supplementary.
Training with $L_{CX}$ guides the network to match between the two point sets, and as a result the underlying distributions become closer. In contrast, training with CD does not draw the two sets together. Indeed, we found empirically that training with CD does not converge, hence we excluded it from our empirical reports.
The Contextual loss has been proposed in [19] for measuring similarity between non-aligned images. It has therefore been used for applications such as style transfer, where the generated image and the target style image are very different. In the current study we assert that the Contextual loss can be viewed as a statistical loss between the distributions of features. We further assert that using such a loss during training would lead to generating images with realistic characteristics. This would make it a good candidate also for tasks where the training image pairs are aligned.
To support these claims we present in this section two experiments, and in the next section two real applications. The first experiment shows that minimizing the Contextual loss during training indeed implies also minimization of the KL-divergence. The second experiment evaluates the relation between the Contextual loss and human perception of image quality.
Since we are proposing to use the Contextual loss as an approximation to the KL-divergence, we next show empirically that minimizing it during training also minimizes the KL-divergence.
To do this we chose a simplified super-resolution setup, based on SRResNet [9], and trained it with the Contextual loss as the objective. The details of the setup are provided in the supplementary. During training we compute the Contextual loss, the KL-divergence, as well as five other common dissimilarity measures. To compute the KL-divergence, EMD, and $\chi^2$, we need to approximate the density of each image. As discussed in Section 2, it is not clear how the multivariate solution to KDE can be smoothly used here; therefore, instead, we generate a random projection of all patches onto 2D and fit them using KDE with a Gaussian kernel in 2D [25]. This was repeated for 100 random projections and the scores were averaged over all projections (examples of projections are shown in the supplementary).
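This projection-based estimate can be sketched roughly as follows; for brevity the sketch uses 2D histograms rather than Gaussian KDE, and the number of projections and bins are illustrative choices:

```python
import numpy as np

def projected_kl(F1, F2, n_proj=10, bins=16, seed=0):
    """Estimate a KL divergence between two high-dimensional feature sets
    by averaging histogram-based KL over random 2D projections."""
    rng = np.random.default_rng(seed)
    d = F1.shape[1]
    kls = []
    for _ in range(n_proj):
        P = rng.normal(size=(d, 2))            # random 2D projection
        a, b = F1 @ P, F2 @ P
        lo = np.minimum(a.min(axis=0), b.min(axis=0))
        hi = np.maximum(a.max(axis=0), b.max(axis=0))
        extent = [(lo[0], hi[0]), (lo[1], hi[1])]
        p, _, _ = np.histogram2d(a[:, 0], a[:, 1], bins=bins, range=extent)
        q, _, _ = np.histogram2d(b[:, 0], b[:, 1], bins=bins, range=extent)
        p = (p + 1e-10) / (p + 1e-10).sum()    # smooth and normalize
        q = (q + 1e-10) / (q + 1e-10).sum()
        kls.append(np.sum(p * np.log(p / q)))
    return float(np.mean(kls))
```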
Figure 4 presents the values of all seven measures during the iterations of the training. It can be seen that all of them are minimized during the iterations. The KL-divergence minimization curve is the one most similar to that of the Contextual loss, suggesting that the Contextual loss forms a reasonable approximation.
Our ultimate goal is to train networks that generate images with high perceptual quality. The underlying hypothesis behind our approach is that training with an objective that maintains natural statistics will lead to this goal. To assess this hypothesis we repeated the evaluation procedure proposed in [26] for measuring the correlation between human judgment of similarity and loss functions based on deep features.

In [26] it was suggested to compute the similarity between two images by comparing their corresponding deep embeddings. For each image they obtained a deep embedding via a pre-trained network, normalized the activations, and then computed the L2 distance. This was then averaged across the spatial dimensions and across all layers. We adopted the same procedure while replacing the L2 distance with the Contextual loss approximation to KL.
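The baseline distance of [26] can be sketched as follows, with stand-in activation arrays in place of a real pre-trained network (the layer shapes below are illustrative):

```python
import numpy as np

def unit_normalize(feat, eps=1e-10):
    """Channel-wise unit normalization of activations shaped (H, W, C)."""
    return feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + eps)

def perceptual_l2(feats_a, feats_b):
    """Baseline of [26]: squared L2 between unit-normalized activations,
    averaged over the spatial dimensions and over all layers.
    `feats_a` / `feats_b` are lists of (H, W, C) arrays, one per layer."""
    dists = []
    for fa, fb in zip(feats_a, feats_b):
        na, nb = unit_normalize(fa), unit_normalize(fb)
        dists.append(np.mean(np.sum((na - nb) ** 2, axis=-1)))
    return float(np.mean(dists))
```

The modification studied here replaces the per-layer L2 term with the Contextual loss computed on the same normalized activations.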
Our findings, as reported in Figure 5 (the complete table is provided in the supplementary material), show the benefits of our proposed approach. The Contextual loss between deep features is more closely correlated with human judgment than the L2 or Chamfer Distance between the same features. All of these perceptual measures are preferable over low-level similarity measures such as SSIM [27].
In this section we present two applications: single-image super-resolution, and high-resolution surface normal estimation. We chose the first to highlight the advantage of using our approach in concert with GAN. The second was selected since its output is not an image, but rather a surface of normals. This shows the generic nature of our approach to other domains apart from image generation, where GANs are not being used.
To assess the contribution of our suggested framework for image restoration we experiment on single-image super-resolution. To place our efforts in context we start by briefly reviewing some of the recent works on super-resolution. A more comprehensive overview of the current trends can be found in [28].
Recent solutions based on CNNs can be categorized into two groups. The first group relies on the L1 or L2 losses [9, 29, 30], which lead to high PSNR and SSIM [27] at the price of low perceptual quality, as was recently shown in [31, 26]. The second group of works aims at high perceptual quality. This is done by adopting perceptual loss functions [32], sometimes in combination with GAN [9], or by adding the Gram loss [33], which nicely captures textures [34].

Our main goal is to generate natural-looking images, with natural internal statistics. At the same time, we do not want the structural similarity to be overly low (the trade-off between the two is nicely analyzed in [31, 26]). Therefore, we propose an objective that considers both, with higher importance to perceptual quality. Specifically, we integrate three loss terms: (i) The Contextual loss – to make sure that the internal statistics of the generated image are similar to those of the ground-truth high-resolution image. (ii) The L2 loss, computed at low resolution – to drive the generated image to share the spatial structure of the target image. (iii) Finally, following [9] we add an adversarial term, which helps in pushing the generated image to look "real".
Given a low-resolution image $s$ and a target high-resolution image $y$, our objective for training the network $G$ is:

$$L(G) = \lambda_{CX}\, L_{CX}(G(s), y) + \lambda_{L2}\, \|G(s)^{LF} - y^{LF}\|_2^2 + \lambda_{GAN}\, L_{GAN}(G(s)) \qquad (11)$$
where $\lambda_{CX}$, $\lambda_{L2}$, and $\lambda_{GAN}$ are fixed weights, kept constant in all our experiments. The images $G(s)^{LF}$ and $y^{LF}$ are the low frequencies of the generated and target images, obtained by convolution with a Gaussian kernel. For the Contextual loss feature extraction we used a mid-layer of VGG19 [35].

We adopt the SRGAN architecture [9] (using the implementation in https://github.com/tensorlayer/SRGAN) and replace only the objective. We train it on just 800 images from the DIV2K dataset [36], for 1500 epochs. Our network is initialized by first training using only the L2 loss for 100 epochs.

Empirical evaluation was performed on the BSD100 dataset [37]. As suggested in [31] we compute both structural similarity (SSIM [27]) to the ground-truth and perceptual quality (NRQM [38]). The "ideal" algorithm will have both scores high.
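The low-frequency term of the objective can be sketched in isolation as follows, for a single grayscale image; the blur width sigma is a placeholder, since the paper's value was lost in extraction:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian blur of a 2D image: keeps only low frequencies."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns (separability of the Gaussian kernel).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def low_freq_l2(generated, target, sigma=3.0):
    """Structural term of Eq. (11): L2 distance between the low-frequency
    bands of the generated and target images."""
    g, t = gaussian_blur(generated, sigma), gaussian_blur(target, sigma)
    return float(np.mean((g - t) ** 2))
```

Because the blur suppresses high frequencies, this term penalizes deviations in spatial layout while leaving fine texture to the Contextual and adversarial terms.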
Table 1 compares our method with three recent solutions whose goal is high perceptual quality. It can be seen that our approach outperforms the state-of-the-art on both evaluation measures. This is especially satisfactory as we needed only 800 images for training, while previous methods had to train on tens or even hundreds of thousands of images. Note that the values of the perceptual measure NRQM are not normalized and small changes are actually quite significant. For example, the gap between [9] and [34] is only 0.014, yet visual inspection shows a significant difference. The gap between our results and [34] is 0.08, i.e., almost 6 times bigger, and visually it is significant.
| Method | SSIM [27] (distortion) | NRQM [38] (perceptual) | # Training images |
|---|---|---|---|
| Johnson [32] | 0.631 | 7.800 | 10K |
| SRGAN [9] | 0.640 | 8.705 | 300K |
| EnhanceNet [34] | 0.624 | 8.719 | 200K |
| Ours full | 0.643 | 8.800 | 800 |
| Ours w/o GAN | 0.67 | 8.53 | 800 |
| Ours w/o L_CX | 0.510 | 8.411 | 800 |
| SRGAN-MSE* | 0.643 | 8.4 | 800 |

*our reimplementation
[Figure 6: qualitative comparison of Bicubic, EnhanceNet [34], SRGAN [9], ours, and the high-resolution (HR) ground truth]
Figure 6 further presents a few qualitative results that highlight the gap between our approach and previous ones. Both SRGAN [9] and EnhanceNet [34] rely on adversarial training (GAN) in order to achieve photo-realistic results. This tends to over-generate high-frequency details, which makes the image look sharp; however, these high-frequency components often do not match those of the target image. The Contextual loss, when used in concert with GAN, reduces these artifacts and results in natural-looking image patches.
An interesting observation is that we achieve high perceptual quality while using, for the Contextual loss, features from a mid-layer of VGG19. This is in contrast to the reports in [9], where SRGAN required a high layer of VGG19 for the perceptual loss (and failed when using a low layer). Similarly, EnhanceNet required a mixture of low and high layers.
[Figure 7: texture image, ground-truth normal map, CRN, Ours (with L1), and Ours (w/o L1)]
The framework we propose is by no means limited to networks that generate images. It is a generic approach that could be useful for training networks for other tasks, where the natural statistics of the target should be exhibited in the network’s output. To support this, we present a solution to the problem of surface normal estimation – an essential problem in computer vision, widely used for scene understanding and new-view synthesis.
The task we pose is to estimate the underlying normal map from a single monocular color image. Although this problem is ill-posed, recent CNN-based approaches achieve satisfactory results [40, 41, 42, 43] on the NYU-v2 dataset [44]. Thanks to its size, quality, and variety, NYU-v2 is intensively used as a test-bed for predicting depth, normals, segmentation, etc. However, due to the acquisition protocol, its data does not include the fine details of the scene and misses the underlying high-frequency information; hence, it does not provide a detailed representation of natural surfaces.
Therefore, we built a new dataset of images of surfaces and their corresponding normal maps, where fine details play a major role in defining the surface structure. Examples are shown in Figure 7. Our dataset is based on 182 different textures and their respective normal maps that were collected from the Internet (www.poliigon.com and www.textures.com), originally offered for usage in realistic interior home design, gaming, arts, etc. For each texture we obtained a high-resolution color image and a corresponding normal-map of a surface such that its underlying plane normal points towards the camera (see Figure 7). Such image-normals pairs lack the effect of lighting, which plays an essential role in normal estimation. Hence, we used Blender (www.blender.org), a 3D renderer, to simulate each texture under different point-light locations, resulting in multiple image-normals pairs per texture. The textures were split into training and test sets, such that the test set includes all the rendered pairs of each included texture.
The collection offers a variety of materials including wood, stone, fabric, steel, sand, etc., with multiple patterns, colors and roughness levels, that capture the appearance of real-life surfaces. While some textures are synthetic, they look realistic and exhibit imperfections of real surfaces, which are translated into the fine-details of both the color image as well as its normal map.
We propose using an objective based on a combination of three loss terms: (i) The Contextual loss – to make sure that the internal statistics of the generated normal map match those of the target normal map. (ii) The L2 loss, computed at low resolution, and (iii) the L1 loss. Both drive the generated normal map to share the spatial layout of the target map. Our overall objective is:

$$L(G) = \lambda_{CX}\, L_{CX}(G(s), y) + \lambda_{L2}\, \|G(s)^{LF} - y^{LF}\|_2^2 + \lambda_{L1}\, \|G(s) - y\|_1 \qquad (12)$$
where $\lambda_{CX}$, $\lambda_{L2}$, and $\lambda_{L1}$ are fixed weights. The normal-maps $G(s)^{LF}$ and $y^{LF}$ are low frequencies, obtained by convolution with a Gaussian kernel. We tested with both $\lambda_{L1} > 0$ and $\lambda_{L1} = 0$, the latter of which removes the third term.
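Putting the three terms of Eq. (12) together can be sketched as below, with stub callables standing in for the first two losses; the lambda weights are placeholders, since the paper's values did not survive extraction:

```python
import numpy as np

def l1_term(a, b):
    """Plain L1 (mean absolute) distance between two arrays."""
    return float(np.mean(np.abs(a - b)))

def objective(gen, tgt, cx_loss, lowfreq_l2, lam_cx=1.0, lam_l2=0.1, lam_l1=0.1):
    """Sketch of Eq. (12). `cx_loss` and `lowfreq_l2` are callables for
    the Contextual and low-frequency L2 terms described in the text.
    Setting lam_l1 = 0 removes the third term, giving the 'w/o L1'
    variant discussed in the experiments."""
    total = lam_cx * cx_loss(gen, tgt) + lam_l2 * lowfreq_l2(gen, tgt)
    if lam_l1:
        total += lam_l1 * l1_term(gen, tgt)
    return total
```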
We chose as architecture the Cascaded Refinement Network (CRN) [39], originally suggested for label-to-image translation, which was shown to yield great results in a variety of other tasks [19]. For the Contextual loss we took as features patches of the normal map (extracted with stride 2) and three layers of VGG19. In our implementation we reduced memory consumption by randomly sampling features from all three layers.

Table 2 compares our results with previous solutions. We compare to the recently proposed PixelNet [40], which presented state-of-the-art results on NYU-v2. Since PixelNet was trained on NYU-v2, which lacks fine details, we also tried fine-tuning it on our dataset. In addition, we present results obtained with CRN [39]. While the original CRN was trained with both a pixel loss and the perceptual loss [32], this combination provided poor results on normal estimation. Hence, we excluded the perceptual loss and report results with CRN trained with only the L1 loss, or with our objective of Eq. (12). It can be seen that CRN trained with our objective (with L1) leads to the best quantitative scores.
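Patch-based feature extraction with random subsampling can be sketched as follows; the patch size, stride, and sample budget are illustrative, as the paper's exact values were not preserved:

```python
import numpy as np

def patch_features(normal_map, patch=5, stride=2, n_samples=1000, seed=0):
    """Extract dense vectorized patches from a (H, W, 3) normal map with
    a given stride, then randomly subsample them to bound the memory used
    by the pairwise distances of the Contextual loss."""
    H, W, C = normal_map.shape
    feats = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            feats.append(normal_map[i:i + patch, j:j + patch].ravel())
    feats = np.asarray(feats)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(feats), size=min(n_samples, len(feats)), replace=False)
    return feats[idx]
```

Subsampling matters because the Contextual loss compares all pairs of features, so its memory cost grows quadratically with the number of extracted patches.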
| Method | Mean (°) | Median (°) | RMSE (°) | (%) | (%) | (%) |
|---|---|---|---|---|---|---|
| PixelNet [40] | 25.96 | 23.76 | 30.01 | 22.54 | 50.61 | 65.09 |
| PixelNet [40] + FineTune | 14.27 | 12.44 | 16.64 | 51.20 | 85.13 | 91.81 |
| CRN [39] | 8.73 | 6.57 | 11.74 | 74.03 | 90.96 | 95.28 |
| Ours (without L1) | 9.67 | 7.26 | 12.94 | 70.39 | 89.84 | 94.73 |
| Ours (with L1) | 8.59 | 6.50 | 11.54 | 74.61 | 91.12 | 95.28 |

The first three columns report angular errors (lower is better); the three rightmost columns report the percentage of pixels whose angular error falls below increasing thresholds (higher is better).
[Figure 8: qualitative comparison of reconstructed normal maps: CRN, Ours (with L1), Ours (w/o L1), and ground truth (GT)]
We would like to draw your attention to the inferior scores obtained when removing the L1 term from our objective (i.e., setting $\lambda_{L1} = 0$ in Eq. (12)). Interestingly, this contradicts what one sees when examining the results visually. Looking at the reconstructed surfaces reveals that they look more natural and more similar to the ground-truth without L1. A few examples illustrating this are provided in Figures 7 and 8. This phenomenon is actually not surprising at all. In fact, it is aligned with the recent trends in super-resolution, discussed in Section 4.1, where perceptual evaluation measures are becoming a common assessment tool. Unfortunately, such perceptual measures are not the common evaluation criteria for assessing reconstruction of surface normals.
Finally, we would like to emphasize, that our approach generalizes well to other textures outside our dataset. This can be seen from the results in Figure 1 where two texture images, found online, were fed to our trained network. The reconstructed surface normals are highly detailed and look natural.
In this paper we proposed using loss functions based on a statistical comparison between the output and target for training generator networks. It was shown via multiple experiments that such an approach can produce high-quality and state-of-the-art results. While we suggest adopting the Contextual loss to measure the difference between distributions, other loss functions of a similar nature could (and should, may we add) be explored. We plan to delve into this in our future work.
Acknowledgements: This work was supported in part by the Israel Science Foundation under Grant 1089/16 and by the Ollendorf Foundation.
Journal of Machine Learning Research (2010)
Image style transfer using convolutional neural networks. In: CVPR (2016)