Convolutional Neural Networks (CNNs) trained for visual classification were shown to learn powerful and meaningful feature spaces, encoding rich information ranging from low-level features to high-level semantic content. Such features have been widely utilized by numerous methods for clustering, perceptual loss, and various image manipulation tasks [35, 12, 21, 2].
The process of working in feature space typically involves the following stages: an image is fed to a pre-trained classification network; its feature responses from different layers are extracted, and optionally manipulated according to the application at hand. The manipulated features are then inverted back to an image by solving a reconstruction optimization problem. However, the problem of inverting deep features into a realistic image is challenging – there is no one-to-one mapping between the deep features and an image, especially when the features are taken from deep layers. This has been addressed so far mostly by imposing regularization priors on the reconstructed image, which often leads to blurry unrealistic reconstructions and limits the type of features that can be used.
In this paper, to overcome the aforementioned limitations, we take the task of feature inversion to the realm of Generative Adversarial Networks (GANs). GANs have made a dramatic leap in modeling the distribution of natural images and are now capable of generating impressive realistic image samples. However, most existing GAN-based models that use deep features condition the generation only on objects’ class information [29, 27, 40, 7]. In contrast, we propose a novel generative model that utilizes the continuum of semantic information encapsulated in deep features; this ranges from low level information contained in fine features to high level, semantic information contained in deeper features. By doing so, we bridge the gap between optimization based methods for feature inversion and generative adversarial learning.
Inspired by classical image pyramid representations, we construct our model as a Semantic Generation Pyramid – a hierarchical GAN-based framework which can leverage information from different semantic levels of the features. More specifically, given a set of features extracted from a reference image, our model can generate diverse image samples, each with matching features at each semantic level of the classification model. This allows us to generate images with a gradual, controllable semantic similarity to a reference image (see Fig. 1 and Fig. 2).
The hierarchical nature of our model provides a versatile and flexible framework that can be used for a variety of semantically-aware image generation and manipulation tasks. Similar to classic image pyramid representations, this is done by manipulating features at different semantic levels, and controlling the pyramid levels in which features are fed to our model. We demonstrate this approach in a number of applications, including semantically-controlled inpainting, semantic compositing of objects from different images, and generating realistic images from grayscale, line-drawings or paintings. All these tasks are performed with the same unified framework, without any further optimization or fine tuning.
2 Related Work
Inverting deep features.
Inverting deep features back to images has been studied mostly in the context of interpretability and understanding of visual recognition networks. Simonyan et al.  formulated feature inversion as an optimization problem whose objective is to minimize the distance between the features of the reconstructed image (computed by the pre-trained network) and a given feature map. They minimize this objective by back-propagation – a slow process that is highly sensitive to initialization. An important observation arising from this process concerns how reconstructable an image is from various depths of CNN layers: the first several layers are almost fully invertible, but the ability to reconstruct the input image declines quickly with the depth of the network.
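To make the optimization-based formulation concrete, here is a minimal NumPy sketch. It uses a toy linear map as a stand-in for the network's feature extractor (a real CNN would be differentiated by autograd), and it also illustrates the non-uniqueness discussed above: two different initializations yield two different "images" with the same features.

```python
import numpy as np

# Toy stand-in for the feature extractor phi: a fixed linear map from a
# 64-"pixel" image to 16 features. Underdetermined, so inversion is ambiguous.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))
x_true = rng.standard_normal(64)     # reference image
f_target = W @ x_true                # feature map to invert

def invert(x0, steps=3000, lr=1e-3):
    """Gradient descent on ||phi(x) - f_target||^2, starting from x0."""
    x = x0.copy()
    for _ in range(steps):
        x -= lr * 2 * W.T @ (W @ x - f_target)  # gradient of the squared error
    return x

# Two initializations -> two reconstructions with (nearly) identical features.
x_a = invert(np.zeros(64))
x_b = invert(rng.standard_normal(64))
```

Both reconstructions match the target features, yet differ from each other in the null space of the map, which is exactly why deep-feature inversion needs a prior or a generative model to pick a realistic solution.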
Other optimization-based methods attempt reconstruction from deeper features by imposing different regularization priors on the reconstructed image [24, 28, 39]. However, these methods are able to reconstruct only a single, average image. Because there is no one-to-one mapping between deep features and an image, the reconstructed image is often blurry and unrealistic.
Dosovitskiy and Brox  propose to train a CNN to invert various image descriptors, among them deep features. This approach also imposes regularization, albeit implicitly: the fact that an image is generated by a CNN forms a strong natural-image prior [34, 11]. However, such regularization also tends to produce blurry, unrealistic images when inverting the deeper features.
To overcome this limitation, the same authors proposed a generative model for feature inversion . Although they used a GAN, their model is still deterministic, i.e., it generates only a single possible image from an input set of features. In this case the discriminator serves as a learned regularizer that pushes the generator toward realistic reconstructions. The resulting images have natural-looking local structure, but their global structure is distorted and unrealistic, so the final image cannot be considered realistic (see examples in the supplementary material). The authors mention a desire to add diversity and randomization to their process and report an attempt to inject noise for this purpose. This attempt failed because "nothing in the loss function forces the generator to output multiple different reconstructions". We address this problem using a diversity loss.
Deep features for image manipulation.
Inverting deep features has crossed over from interpretability and network understanding to image manipulation. The general approach is to apply some manipulation to semantic features, and then invert back to pixels to project the manipulation onto the output image. Such manipulations include texture synthesis , style transfer , feature interpolation (demonstrated for applying aging to faces) , and, recently, image retargeting by applying Seam Carving  to semantic features . In all of these works the output image is reconstructed by some variant of the iterative optimization process of , which is slow and sensitive to initialization. Solutions to speed this up have been proposed, such as training a CNN to imitate the mapping of the optimization process .
Generative Adversarial Networks.
Our work builds on recent advances in Generative Adversarial Networks (GANs) . There has been huge progress in the quality of image generation using GANs [30, 29, 27, 40, 7]. Our GAN is based on the Self-Attention GAN  with slight modifications. Unlike classical GANs, we perform image-to-image mapping using a conditional GAN, similarly to [19, 36]. There has been impressive recent work on interpreting and controlling the outcomes of GANs, either by interpreting neurons  or by steering the latent space . Our approach differs in its use of semantic features that originate in supervised classification networks.  makes use of classification feature maps to improve the quality of classic generation tasks: a set of GANs, first trained separately to generate features of different levels, is then stacked and combined. Our goal, and the way we use the semantic features, are different. Further analysis  has shown limitations on what GANs cannot generate. We introduce the application of re-painting, which regenerates selected parts of an image and thereby enables keeping the desired parts as-is. For some practical uses, this overcomes the limitations presented in  (such as the inability to generate humans).
Classical hierarchical image representations.
We draw inspiration for our method from the classical image-processing approach of image pyramids, especially Laplacian pyramids [8, 1]. This approach decomposes an image into distinct frequency bands, thus allowing frequency-aware manipulation for stitching and fusion of images; reconstruction is fast and trivial. Our method is a semantic analogue of this approach: we aim to perform semantic manipulation and get an immediate projection back to the image pixels.
3 Method

Our goal is to design a generative image model that can fully leverage the feature space learned by a pre-trained classification network. More specifically, we opt to achieve the following objectives:
Leveraging features from different semantic levels. Given an input image, the deep features extracted from different layers have a hierarchical structure – features extracted from finer layers of the model contain low-level image information, while deeper features encode higher-level, semantic information . We would like to benefit from the full continuum of these features.
Flexibility and user controllability. We want to support various manipulation tasks at test time via editing in the deep feature space – for example, combining features from different images, or from different levels. The model therefore has to provide such user controllability and adapt to various manipulations of the features.
Diversity. We would like our model to learn the space of possible images that match a given set of input features, rather than producing a single image sample.
We next describe how we achieve these objectives via a unified GAN-based architecture and a dedicated training scheme.
3.1 Architecture

Our generator works in full conjunction with a pre-trained classification model, which we assume is given and fixed. In practice, we use a VGG-16 model  trained on the Places365 dataset . More specifically, given an input image x, we feed it into the classification model and extract a set of feature maps by taking the activation maps of different layers of the model. That is, F = {f_1, ..., f_N}, where f_i denotes the activations of the i-th layer of the classification model. These features are then fused into our generator as follows.
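The extraction step can be sketched as follows. Since the paper's code is not given, this is a schematic NumPy stand-in for VGG-16: each "stage" pools spatially and mixes channels through a fixed random projection with ReLU, and the activation map after every stage is recorded, giving the feature set F = {f_1, ..., f_N}. (With a real framework this is one forward pass with hooks on the pooling layers.)

```python
import numpy as np

def toy_backbone(image, n_stages=4, seed=0):
    """Record per-stage activation maps of a toy classifier stand-in."""
    rng = np.random.default_rng(seed)
    feats, x = {}, image                    # image: (H, W, C)
    for i in range(1, n_stages + 1):
        h, w, c = x.shape
        # 2x2 average pooling halves the spatial resolution
        x = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
        mix = rng.standard_normal((c, 2 * c))   # fixed random "conv" weights
        x = np.maximum(x @ mix, 0.0)            # channel mixing + ReLU
        feats[f"conv{i}"] = x                   # f_i: one level of the pyramid
    return feats

F = toy_backbone(np.ones((64, 64, 3)))
```

Each deeper level in `F` has coarser spatial resolution and more channels, mirroring the fine-to-semantic continuum the generator consumes.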
Our generator’s architecture is loosely based on the class-conditioned GAN . However, we modify it to have a mirror structure w.r.t. the classification model, as illustrated in Fig. 3. More specifically, each residual block in our generator corresponds to a single stage in the classification model (a stage consists of 2-3 conv layers + pooling). This structure forms a semantic generation pyramid, which at the coarsest level takes a random noise vector, z, as input. At each of the upper levels, our model can optionally receive features, f_i, extracted from the corresponding level of the classification model. The flow of the features from the classification model to our generator is controlled at each level by an input mask m_i. The mask can either pass the entire feature map (all ones), mask out the entire feature map (all zeros), or pass regions of it.
To conclude, the input to the network is: (1) a set of deep features {f_i}, computed by feeding an input image, x, into the classification model and extracting the activation maps from different layers; (2) a noise vector, z, which allows diversity and learning a distribution rather than a one-to-one mapping; (3) a set of masks {m_i}, one for each input feature f_i; these masks allow us to control, manipulate and leverage features from different semantic levels. Thus, the generator can be formulated as G(z, {f_i}, {m_i}).
Fig. 3(b) depicts how the feature maps are fused into our generator. The goal is to combine information both from the current classification-model layer and from the previous generator blocks that originate in the noise vector. At each level, the feature map f_i is first multiplied by its input mask m_i. The masked feature map then undergoes a convolution layer, and the result is summed with the output of the corresponding generator block. When the entire feature map is masked, nothing is added to the output of the previous generator block. The mask itself is concatenated as another channel, making the subsequent layers aware of the distinction between masked areas and genuinely empty areas.
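A minimal sketch of the fusion step at one pyramid level, under the assumptions that the convolution is a 1x1 random projection and that shapes are channels-last; names and shapes are illustrative, not the paper's code:

```python
import numpy as np

def fuse_level(gen_prev, feat, mask, seed=0):
    """Gate classifier features by a mask, project, add, and append the mask."""
    rng = np.random.default_rng(seed)
    c = feat.shape[-1]
    conv = rng.standard_normal((c, gen_prev.shape[-1])) / np.sqrt(c)  # 1x1 "conv"
    gated = feat * mask[..., None]        # zero out masked-out regions
    fused = gen_prev + gated @ conv       # sum with the previous generator block
    # concatenate the mask as an extra channel so later layers can distinguish
    # masked-out areas from genuinely empty ones
    return np.concatenate([fused, mask[..., None]], axis=-1)

g = np.zeros((8, 8, 16))                  # previous generator block output
f = np.ones((8, 8, 32))                   # classifier feature map at this level
m = np.zeros((8, 8)); m[:4] = 1.0         # pass only the top half
out = fuse_level(g, f, m)
```

Where the mask is zero, nothing is added to the generator stream; where it is one, the projected classifier features flow in, matching the behavior described above.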
3.2 Training procedure
Our goal is to have a single unified model that can generate images from any level of our semantic pyramid. In other words, we want to be able to generate diverse, high quality image samples from any subset of input features . We achieve this through the following training scheme.
At each training iteration, a batch of input images is sampled from the dataset and fed to the classification model to compute their features. In our default training step, we randomly select a pyramid level and feed the generator only the features at that level, masking out the features at all other levels.
However, we also want the ability to generate from a mixture of semantic levels, keeping some areas of the image while modifying others. We therefore also train with spatially varying masks: at some iterations (chosen with a hyper-parameter probability), we incorporate them. Fig. 4 depicts the masking procedure in such cases. First, a random crop of the image is sampled. Then, for one randomly selected layer, the mask is fully turned on. For all other layers closer to the input image, the mask is turned on everywhere except the sampled crop. This sort of training is oriented towards tasks that edit different spatial areas of the image differently.
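The mask-sampling procedure above can be sketched as follows. This is an assumed reading of the description: the randomly chosen level is fully on, finer levels get a hole at the crop, and levels deeper than the chosen one are fully off (the random sampling of the level and crop is left to the caller). Coordinate conventions are illustrative.

```python
import numpy as np

def sample_masks(shapes, level, crop):
    """shapes: per-level spatial (H, W), fine -> coarse; crop in level-0 coords."""
    y0, x0, y1, x1 = crop
    masks = []
    for i, (h, w) in enumerate(shapes):
        if i > level:
            masks.append(np.zeros((h, w)))            # deeper levels: not used
        elif i == level:
            masks.append(np.ones((h, w)))             # chosen level: fully on
        else:
            m = np.ones((h, w))
            s = 2 ** i                                # rescale crop to this level
            m[y0 // s:y1 // s, x0 // s:x1 // s] = 0.0 # hole at the sampled crop
            masks.append(m)
    return masks

ms = sample_masks([(32, 32), (16, 16), (8, 8)], level=2, crop=(8, 8, 16, 16))
```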
3.3 Losses

We train all the levels in our pyramid architecture simultaneously. Our training loss consists of three terms:

L = L_adv + α·L_rec + β·L_div.     (1)
The first term is an adversarial loss: our generator is trained against a class-conditioned discriminator D, similar to . We adopt the LSGAN  variant of this optimization problem. Formally,

L_adv(D) = E_x[(D(x) − 1)²] + E_{x,z,m}[D(G(z, F(x), m))²],
L_adv(G) = E_{x,z,m}[(D(G(z, F(x), m)) − 1)²],

where z is sampled from a normal distribution over noise instances and m from the distribution of masks described above.
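The LSGAN objectives reduce to simple least-squares penalties on the discriminator's scores; a plain-NumPy sketch (operating directly on scores, with the expectations replaced by batch means):

```python
import numpy as np

def d_loss_lsgan(d_real, d_fake):
    """Discriminator regresses real samples toward 1 and fakes toward 0."""
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def g_loss_lsgan(d_fake):
    """Generator pushes the discriminator's scores on its fakes toward 1."""
    return np.mean((d_fake - 1.0) ** 2)

real_scores = np.array([0.9, 1.1, 1.0])
fake_scores = np.array([0.1, -0.1, 0.0])
```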
The second term is a semantic reconstruction loss, which encourages the output image to preserve the feature information used to generate it, similar to a perceptual loss [21, 41]. More specifically, when feeding a generated image back to the classification model, we want the resulting features to be as close as possible to the features extracted from the original image. To allow the model to generate diverse image samples from high-level features, we apply this loss only to the features at levels that are used for generation (i.e., not masked out). Formally,
L_rec = Σ_{l ∈ S} || P(φ_l(G(z, F(x), m))) − P(φ_l(x)) ||,

where φ_l denotes the l-th layer of the classification model, S is the set of (unmasked) levels used for generation, and P is a max-pooling operator. Both the original feature maps and the reconstructed ones are normalized together, so that the comparison is agnostic to global color scaling. Furthermore, to allow more geometric diversity and avoid imposing a complete pixel-to-pixel match, we first apply max-pooling to both the original and reconstructed feature maps over a grid of pooling windows. We thus effectively compare only the strongest activation in each window, allowing slight shifts in location (which translate to bigger shifts in image pixels the deeper the feature map is).
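A sketch of this term for a single unmasked level. The window size and the exact joint-normalization scheme are assumptions (the original values are not specified here); the point is the order of operations: pool both maps, normalize them together, then compare.

```python
import numpy as np

def pooled_feature_loss(f_orig, f_gen, win=2):
    """Max-pool both feature maps, normalize jointly, and take a mean L1 gap."""
    def maxpool(f):                          # f: (H, W, C), channels-last
        h, w, c = f.shape
        return f.reshape(h // win, win, w // win, win, c).max(axis=(1, 3))
    a, b = maxpool(f_orig), maxpool(f_gen)
    # joint normalization makes the comparison agnostic to global scaling
    scale = np.sqrt((a ** 2).mean() + (b ** 2).mean()) + 1e-8
    return np.abs(a / scale - b / scale).mean()

f = np.random.default_rng(0).standard_normal((8, 8, 4))
```

Because only the strongest activation per window is compared, small spatial shifts within a window leave the loss unchanged.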
Finally, L_div is a diversity loss, based on . Specifically, each batch is divided into pairs of instances that share the same input image and masking (but have different noise vectors). A regularization term encourages the distance between two such generated results to grow with the distance between their noise vectors.
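In the spirit of the mode-seeking regularizer this is based on, the term can be sketched as a ratio of image distance to noise distance, returned negated so that minimizing the total loss maximizes diversity; the exact distance metrics are assumptions.

```python
import numpy as np

def diversity_loss(img1, img2, z1, z2, eps=1e-8):
    """Penalize pairs whose outputs are close relative to their noise gap."""
    d_img = np.abs(img1 - img2).mean()   # distance between the two samples
    d_z = np.abs(z1 - z2).mean()         # distance between their noise vectors
    return -d_img / (d_z + eps)          # more negative = more diverse = better

a, b = np.zeros((4, 4)), np.ones((4, 4))
```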
3.4 Implementation details
We use VGG-16 . The inputs to the generator are the features at the end of each stage (after the pooling layer). We also use the fully connected layers FC7 and FC8; to enable matching between the SA-GAN generator and the VGG classifier, we did not use FC6. We trained our model on the Places365 dataset , using the loss in Eq. 1. The probability for training with spatially varying masks was set to a fixed hyper-parameter. We used the TensorFlow platform with the TFGAN tool, and trained the model for approximately two days on a Dragonfish TPU. We employed some methods from , such as class embeddings and the truncation trick. Optimization used the Adam optimizer , with the same learning rate for both the generator and the discriminator, a batch size of 1024, and a 128-dimensional latent space, as in .
4 Experiments and Applications
We tested our model on various natural images taken from Places365  and downloaded from the Web. Fig. 1 and Fig. 2 show several qualitative results of our generated images from increasing semantic levels. That is, for each pyramid level, we feed our generator with features extracted only from that level. In Fig. 2, we show, for each example and for each semantic level, two different random image samples, both generated from the same features but with different noise instances.
As can be seen, the pyramid level from which the image is generated determines to which extent it diverges from the reference image – the fidelity to the reference image decreases with the pyramid level, whereas the diversity between different image samples increases. Nevertheless, for all generation levels, the semantic content of the original image is preserved.
Observing our generated images more closely reveals details about the distribution of images matching the feature maps at each level. For example, it is apparent that the CONV4 layer is agnostic to mild lighting and color modifications but preserves geometric structures and textures. The fully connected layers are mostly agnostic to global geometric structure but preserve textures and local structures (e.g., bottom row of Fig. 2): the global shape of the road, the position of the island, and the architecture of the castles have completely changed, yet local attributes such as the presence of small windows and towers remained (albeit in different locations). This aligns with observations made by .
4.1 Quantitative Evaluation
We evaluated the quality of our generated images using two measures: Fréchet Inception Distance (FID)  and a “Real/Fake” user study.
For FID, we used 6000 test images randomly sampled from Places365 . We extracted deep features by feeding these images to the classification model, and then generated random image samples from each semantic level separately, i.e., by inputting to our model only the features extracted from that level.
Table 1 reports FID scores measured for our generation results from each semantic level. As expected, the finer the features’ level, the lower the FID score. For example, when generating images from CONV1 features, the distribution of generated images almost perfectly aligns with that of the real images. As the features’ level increases, our generated images deviate more from the original images, which is reflected by a consistent increase in the FID scores.
Paired test: a generated image is presented against its corresponding reference image (i.e., the image whose features were used for generation). The raters were asked to select the fake one.
Unpaired test: a generated image is presented against some real, unrelated image. The raters were asked to determine which is fake.
In each trial, images were presented for 1 second. Each of these tests was performed by 100 raters, using 75 images randomly sampled from Places365 . To prevent immediate discrimination between real and fake images, we did not include images with people in this test (humans are generally not synthesized well by GANs , nor by our model when fed with features from high semantic levels).
Table 2 reports the confusion rate (percentage of fooled raters), measured separately for images generated from each semantic level; perfect confusion is 50%. This means, for example, that the images generated from CONV1 are almost indistinguishable from real ones. Similarly, about 17%-18% of the images generated from FC8 looked more genuine than the real images shown alongside them.
4.2 Semantic Pyramid Image Manipulation
The pyramid structure of our model provides a flexible framework that can be used for various semantically-aware image generation and manipulation tasks. Similar to classic image pyramid representations [8, 1], this can be done by manipulating features at different semantic levels, and controlling the pyramid levels in which features are fed to our model. We next describe how this approach can be applied for a number of applications. Note that we use the same model, which was trained once and used at inference mode for all applications.
Re-painting.
We introduce a new application, which we name re-painting, where an image region is re-generated with controllable semantic similarity to the original image content. In contrast to traditional in-painting, in which there is no information in the generated region (e.g., [4, 37, 33]), we utilize the information available in the deep features of that region. In other words, re-painting allows us to re-sample the image content in a specific region based on its original content.
Fig. 6 shows a number of re-painting results. As can be seen, our model successfully replaces the desired regions with diverse generated samples while preserving the content around them. This enables practical image manipulations such as generating the same hiker in various environments (Fig. 6, second column from left), or replacing a house with various other houses while keeping the same environment (rightmost column). These results demonstrate the ability of our network to fuse information from different levels at different image regions: since our training procedure combines generating from different semantic levels (different layers of the classifier) within the same image, our generator can produce plausible images even when some spatial part of the image is generated from a more semantic feature map.
Fig. 4 depicts how re-painting is done at both training and inference time. The feature map matching the semantic level we want to re-paint from is fed to the generator in full. We then mask out the region to be re-painted from all the feature maps that are closer to the input image (the less semantic ones).
Semantic image composition.
The technique introduced for re-painting can be extended to the challenge of semantic image composition: implanting an object or an image crop inside another image, such that the implanted object can change in order to fit its surroundings while retaining its semantic interpretation. Fig. 7 shows such examples. Note how the church changes its structure and color according to its surroundings. These are semantic changes, as opposed to mere matching of textures and lighting; in the top example the church is not just texture-matched, it is transformed into a temple that is more likely to be found in such surroundings. Composition is done similarly to re-painting; the only difference is that, at the chosen most-semantic layer, we naively paste the object onto the image, and then mask out the matching region from all the feature maps that are closer to the input image.
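The compositing step above can be sketched as follows; coordinates and shapes are illustrative, and the deep-level paste is shown directly in feature space for simplicity:

```python
import numpy as np

def composite_features(bg_feat, obj_feat, box):
    """Naively paste object features into the background's deep feature map,
    and build the corresponding hole mask for all finer (less semantic) levels,
    so the generator is free to resolve the pasted region semantically."""
    y0, x0 = box
    h, w, _ = obj_feat.shape
    pasted = bg_feat.copy()
    pasted[y0:y0 + h, x0:x0 + w] = obj_feat   # naive paste at the deep level
    finer_mask = np.ones(bg_feat.shape[:2])
    finer_mask[y0:y0 + h, x0:x0 + w] = 0.0    # mask out the region below
    return pasted, finer_mask

pasted, fm = composite_features(np.zeros((8, 8, 4)), np.ones((2, 2, 4)), (3, 3))
```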
Generation from unnatural reference image.
Fig. 5 demonstrates the effect of using a reference image that does not belong to the training distribution, i.e., is not a natural RGB image. Since the generator was trained to output images from the dataset distribution, we obtain an image-to-image translation. We demonstrate converting paintings to realistic photos, line drawings to images, and colorizing grayscale images; for each case we generate a diverse set of possible results. Differently from [19, 36], there is no exact pixel correspondence between the reference image and the output: inverting from CONV5 features leaves some degree of spatial freedom and allows modifying the structure. For example, the rightmost colorization of the city replaced the tower with two towers.
Re-labeling.
We demonstrate a rather simple application of our semantic pyramid: we use CONV5 features from an input image but manually change the class label fed to the generator. Fig. 8 shows the effect of such manipulation. Our GAN, based on , is class-conditioned; the conditioning in the generator is enforced through conditional batch-norm layers, as introduced in . This means that the difference between regular inversion and re-labeling in the generator comes down to normalizing activations to a different mean and variance.
5 Discussion and Conclusion
This work proposes a method to bridge the gap between semantic discriminative models and generative modeling. We demonstrated how our semantic generation pyramid can be used as a unified and versatile framework for various image generation and manipulation tasks. Our framework also allows exploring the sub-space of images matching specific semantic feature maps. We believe that realistically projecting semantic modifications back to pixels is key for future work that involves image manipulation or editing through the semantic domain. We hope this work can guide and trigger further progress in utilizing semantic information in generative models.
-  (1984) Pyramid methods in image processing. RCA Engineer 29 (6), pp. 33–41. Cited by: §2, §4.2.
-  (2019) Image resizing by reconstruction from deep features. External Links: Cited by: §1, §2.
-  (2007) Seam carving for content-aware image resizing. ACM Trans. Graph. 26 (3), pp. 10. Cited by: §2.
-  (2009-08) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics (Proc. SIGGRAPH) 28 (3). Cited by: §4.2.
-  (2019) GAN dissection: visualizing and understanding generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), Cited by: §2.
-  (2019) Seeing what a GAN cannot generate. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §2, §4.1.
-  (2018) Large scale GAN training for high fidelity natural image synthesis. CoRR abs/1809.11096. External Links: Cited by: §1, §2, §3.4.
-  (1983) The laplacian pyramid as a compact image code. IEEE TRANSACTIONS ON COMMUNICATIONS 31, pp. 532–540. Cited by: §2, §4.2.
-  (2015) Inverting convolutional networks with convolutional networks. CoRR abs/1506.02753. External Links: Cited by: §2.
-  (2016) Generating images with perceptual similarity metrics based on deep networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, USA, pp. 658–666. External Links: Cited by: §2.
-  (2019-06) "Double-DIP": unsupervised image decomposition via coupled deep-image-priors. Cited by: §2.
-  (2015) A neural algorithm of artistic style. External Links: Cited by: §1, §2.
-  (2015) Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.), pp. 262–270. External Links: Cited by: §2.
-  (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), pp. 2672–2680. External Links: Cited by: §2.
-  (2015) Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. Cited by: §3.1.
-  (2017) GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, USA, pp. 6629–6640. External Links: Cited by: §4.1.
-  (2018) Deep convolutional networks do not classify based on global object shape. PLOS. Cited by: §4.
-  (2017) Stacked generative adversarial networks. In CVPR, Cited by: §2.
-  (2017) Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2, §4.1, §4.2.
-  (2019) On the ”steerability” of generative adversarial networks. arXiv preprint arXiv:1907.07171. Cited by: §2.
-  (2016) Perceptual losses for real-time style transfer and super-resolution. Lecture Notes in Computer Science, pp. 694–711. External Links: Cited by: §1, §2, §3.3.
-  (2014) Adam: A method for stochastic optimization. CoRR abs/1412.6980. Cited by: §3.4.
-  (2017-06) ImageNet classification with deep convolutional neural networks. Communications of the ACM 60 (6), pp. 84–90. External Links: Cited by: §1.
-  (2015-06) Understanding deep image representations by inverting them. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). External Links: Cited by: §2.
-  (2019) Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §3.3.
-  (2017) Least squares generative adversarial networks. In Computer Vision (ICCV), IEEE International Conference on, External Links: Cited by: §3.3.
-  (2018) Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, External Links: Cited by: §1, §2.
-  (2016-06) Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks. In , Cited by: §2.
-  (2017) Conditional image synthesis with auxiliary classifier gans. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 2642–2651. External Links: Cited by: §1, §2.
-  (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. External Links: Cited by: §2.
-  (2013) Deep inside convolutional networks: visualising image classification models and saliency maps.. CoRR abs/1312.6034. External Links: Cited by: §2, §2.
-  (2015) Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, Cited by: §3.1, §3.4.
-  (2019-10) Boundless: generative adversarial networks for image extension. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §4.2.
-  (2017) Deep image prior. arXiv:1711.10925. Cited by: §2.
-  (2017-07) Deep feature interpolation for image content changes. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). External Links: Cited by: §1, §2.
-  (2018) High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §2, §4.2.
-  (2017) High-resolution image inpainting using multi-scale neural patch synthesis. External Links: Cited by: §4.2.
-  (2015) Understanding neural networks through deep visualization. In Deep Learning Workshop, International Conference on Machine Learning (ICML), Cited by: item 1.
-  (2015) Understanding neural networks through deep visualization.. CoRR abs/1506.06579. External Links: Cited by: §2.
-  (2019-09–15 Jun) Self-attention generative adversarial networks. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 7354–7363. External Links: Cited by: §1, §2, §3.1, §3.1, §3.3, §3.4, §4.2.
-  (2018) The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, Cited by: §1, §3.3.
-  (2016) Colorful image colorization. In ECCV, Cited by: §4.1.
-  (2017) Places: a 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §3.1, §3.4, §4.1, §4.1, §4.