Top-down Flow Transformer Networks

12/06/2017 ∙ by Zhiwei Jia, et al.

We study the deformation fields of feature maps across convolutional network layers under explicit top-down spatial transformations. We propose the top-down flow transformer (TFT), focusing on three transformations: translation, rotation, and scaling. We learn flow transformation generators that account for the hidden-layer deformations while maintaining overall consistency across layers. The learned generators are shown to capture the underlying feature transformation processes, independent of the particular training images. We observe favorable experimental results compared to existing methods that tie transformations to fixed datasets. A comprehensive study on various datasets including MNIST, shapes, and natural images, with both intra- and inter-dataset evaluation (trained on MNIST and validated on a number of datasets), demonstrates the advantages of our proposed TFT framework, which can be adopted in a variety of computer vision applications.




Over the past few years, deep neural networks (Krizhevsky, Sutskever, and Hinton, 2012; He et al., 2016) have led to tremendous performance improvements on large-scale image classification (Russakovsky et al., 2015a) and other computer vision applications (Girshick et al., 2014; Goodfellow et al., 2014; Long, Shelhamer, and Darrell, 2015; Xie and Tu, 2015; Dosovitskiy et al., 2015). While convolutional neural networks (CNNs) have shown great promise in solving many challenging vision problems, fundamental questions remain about the transparency of representations in current CNN architectures. Although the explicit role of the top-down process is a critical issue in perception and cognition, it has received less attention in the current CNN literature.

Currently, both training and testing of CNNs are performed in a data-driven manner by passing convolved features from the lower layers to the top layers. However, visual perception systems are known to engage both bottom-up and top-down processes (Ridley Stroop, 1992; Hill and Johnston, 2007). A top-down process allows explicit generation and inference of transformations and (high-level) configuration changes of images that are otherwise not convenient in a bottom-up process. For example, suppose we wish to train a CNN classifier to detect the translation of an object in an image. A data-driven way to train this CNN would require generating thousands of samples by moving the object around in the image. A top-down model, if available, can instead detect translation directly from the two parameters of the translation. Computational models realizing bottom-up and top-down visual inference have been proposed previously (Marr, 2010; Kersten, Mamassian, and Yuille, 2004; Tu et al., 2003); they are, however, not readily integrated into end-to-end deep learning frameworks. Recurrent neural networks (RNNs) (Elman, 1991; Hochreiter and Schmidhuber, 1997) propagate feedback recursively between the output and input layers, but they lack explicit top-down generation.

Motivated by the recent development of CNNs (Krizhevsky, Sutskever, and Hinton, 2012) that learn effective hierarchical representations, by the general pattern theory (Grenander, 1993; Yuille, Hallinan, and Cohen, 1992; Zhu, Mumford, and others, 2007) that provides a rigorous mathematical formulation for top-down generation, and by findings from cognitive perception (Gregory, 1980; Dodwell, 1983; Gibson, 2002), we seek to build a top-down generator, under controllable parameters, that operates directly on the feature maps of the internal CNN layers to model and account for the underlying transformations.

In this paper, we pay particular attention to CNN features under top-down transformations. There often exist clear flow fields computed between the extracted features of the original image and those of the transformed image (after rotation, scaling, and translation) (Gallagher, Tang, and Tu, 2015). This pattern of consistent but nontrivial feature map deformations throughout the convolutional layers is the key phenomenon studied and leveraged here. Our goal is to discover and model the operations in CNNs that give rise to the non-linear behavior of the resulting flow fields. Given a source image and a transformed image under translation, rotation, and scaling, the internal CNN feature maps across multiple layers can be directly computed; feature transformers, modeled using an aggregated convolution strategy, are then learned to map the feature maps of the source image to those of the target image across all intermediate CNN layers.

The training process is supervised, since we generate transformed images using different parameters for translation, rotation, and scaling. Because the explicit correspondences at the feature-map level require no manual annotation and transforming the source images can be readily accomplished using the explicit transformation parameters, obtaining the training data is effortless. The learned top-down feature transformer (TFT) nevertheless generalizes well, transforming images not seen during training, a benefit of having a top-down generator that is not tied to specific images. TFT is therefore distinct from existing work (Dosovitskiy, Springenberg, and Brox, 2015; Reed et al., 2015; Gardner et al., 2015), where transformations are learned with strong coupling to the training images and are hard to generalize to novel ones. For example, TFT learns to perform three kinds of flow transformations (rotation, translation, and scaling) with arbitrary transformation parameters. Moreover, it generalizes well to various datasets of different input dimensions, ranging from small patterns to natural images. We also demonstrate TFT on the artistic style transfer task.

Figure 1: A schematic illustration of our top-down feature transformer framework (TFT).

In the experiments, we train the proposed top-down feature transformer (TFT) on the MNIST dataset. We demonstrate that our TFT learns intrinsic flow transformations that are not tied to a specific dataset by “inverting” transformed CNN features of images taken from several non-MNIST datasets. Comparison with the competing transformer (Reed et al., 2015) shows the immediate benefit of our approach. We further train TFT on images synthesized from PASCAL VOC 2012 (Everingham et al.) and utilize it to fine-tune VGG-16 (Simonyan and Zisserman, 2014) via network internal data augmentation to improve ImageNet (Russakovsky et al., 2015b) classification performance. Lastly, we adapt our method to image style transfer to show that it can be extended beyond spatial transformations.

Significance and Related Work

We first discuss the significance of our proposed top-down feature transformer (TFT).

Why a top-down generator? Top-down information can play a fundamental role in unraveling, understanding, and enriching the great representational power of deep convolutional neural networks, bringing more transparency and interpretability. The feature flow maps learned by TFT on novel images show promise for this direction of top-down learning.

Why transform features across CNN layers? We are intrigued by how CNN features change internally with respect to spatial transformations. By capturing the generic feature flows under spatial transformations, our proposed TFT aids in understanding the representation learned by a CNN in order to improve its robustness and to enrich its modeling and computing capabilities. This is not immediately available in the existing frameworks (Dosovitskiy, Springenberg, and Brox, 2015; Reed et al., 2015), where the models are heavily coupled with the specific training data.

Why not Spatial Transformer Networks (STN) (Jaderberg et al., 2015)? In (Jaderberg et al., 2015), a spatial transformer was developed to explicitly account for the spatial manipulation of the data. This differentiable module can be inserted into existing CNNs, giving neural networks the ability to actively transform the image. However, the main goal of (Jaderberg et al., 2015) is to learn a differentiable transformation field through backpropagation, using a localization network to obtain the transformation parameters (rotation, scaling, translation, and shearing) for classification. In short, STN learns to perform spatial transformation, without controllable/user-specified transformation parameters, to match the output features, whereas TFT studies the generator (with user-specified transformation parameters) that models the underlying changes to the feature maps resulting from spatial transformations. In the experiments (section 5.1), we show a direct comparison to STN for modeling spatial transformations with controllable parameters and find that STN is not designed for, and not easily extendable to, TFT’s tasks. Furthermore, we extend TFT to the style transfer problem to show the flexibility of our method.

Why not perform spatial transformation directly? The consistency and integrity of an image cannot always be maintained after a direct transformation (e.g., a spatial one) in the input image space. For example, rotating an image or zooming out a certain part creates ghost regions/holes that need to be filled. Performing learning-based feature transformation alleviates the artifacts created by image-space transformation, as shown in Figure 5. In addition, TFT is not limited to spatial transformation; it can be applied to many other tasks where feature transformation is needed, such as online data augmentation, transfer learning, and image transformation. In Figure 6, we show results of TFT applied to image style transfer. Moreover, we focus on the intrinsic feature change under top-down transformation, which is an important step toward understanding the transparency and unraveling the interpretability of CNNs.
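
To make the “ghost region” point concrete, here is a small sketch (ours, not from the paper) that rotates an all-ones image by nearest-neighbour inverse mapping and counts the pixels left unfilled; any rotation that is not a multiple of 90° leaves holes near the corners.

```python
import numpy as np

def rotate_nn(img, theta):
    """Nearest-neighbour rotation about the image centre; output pixels
    whose source location falls outside the input are left as holes (NaN)."""
    h, w = img.shape
    out = np.full((h, w), np.nan)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    c, s = np.cos(theta), np.sin(theta)
    for y in range(h):
        for x in range(w):
            # inverse-map the output pixel back into input coordinates
            sy = c * (y - cy) + s * (x - cx) + cy
            sx = -s * (y - cy) + c * (x - cx) + cx
            iy, ix = int(round(sy)), int(round(sx))
            if 0 <= iy < h and 0 <= ix < w:
                out[y, x] = img[iy, ix]
    return out

img = np.ones((8, 8))
rotated = rotate_nn(img, np.pi / 4)   # 45° rotation
holes = int(np.isnan(rotated).sum())  # ghost pixels that need filling
```

A feature-space transformer sidesteps this hole-filling problem entirely, which is the motivation stated above.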

Next, we discuss related work.

1. Deep image analogy. The deep visual analogy-making work (Reed et al., 2015) shows impressive results in learning to transform and compose images. However, (Reed et al., 2015; Gardner et al., 2015) build heavily on an encoder-decoder strategy that is strongly tied to the training images; they learn to output results in the image space without modeling the internal feature maps. Applying the learned transformer of (Reed et al., 2015) to novel images therefore leads to unsatisfactory results, as shown in the experiments section. Instead, our approach studies flow transformations on CNN features and generalizes well.

2. Learning image transformations. Learning to transform images has been quite an active research area recently. Existing methods that aim at building transformation generators (Jaderberg et al., 2015; Lin and Lucey, 2016; Gregor et al., 2015; Kulkarni et al., 2015; Wu et al., 2017) train CNNs/RNNs to perform transformations on the output feature or image space, which differs from our goal of studying the intrinsic feature transformations under a top-down process within the CNN. The key difference between TFT and STN (Jaderberg et al., 2015) has been discussed previously.

3. Feature transformation with SIFT-flow (Gallagher, Tang, and Tu, 2015). An earlier attempt (Gallagher, Tang, and Tu, 2015) studied feature-layer deformation under explicit transformations based on flows computed by the SIFT-flow method (Liu, Yuen, and Torralba, 2011). Although that work shares a similar big-picture idea with ours, it is preliminary and builds transformations solely on the SIFT-flow estimation (Liu, Yuen, and Torralba, 2011). It is therefore limited in several aspects: (1) it can only morph the features but cannot change their values; (2) it may fail if SIFT-flow does not provide a reliable estimation; (3) it is hard to generalize to arbitrary translation, rotation, and scaling. It also relies heavily on carefully chosen parameters that do not work well in general situations. Our approach has better learning capability than that of (Gallagher, Tang, and Tu, 2015).

4. Generative models. We also discuss the existing literature on generative modeling. A family of mathematically well-defined generators is given in (Grenander, 1993) as the general pattern theory, though its algorithmic implementation still needs a great deal of further development. Methods developed prior to the deep learning era (Blake and Yuille, 1993; Cootes, Edwards, and Taylor, 2001; Wu et al., 2010; Zhu, Mumford, and others, 2007) are inspiring but have limited modeling capability. Deep belief nets (DBN) (Hinton, Osindero, and Teh, 2006) and generative adversarial networks (GAN) (Goodfellow et al., 2014) do not study an explicit top-down generator for image transformation. Other generators that perform feed-forward mapping (Dosovitskiy, Springenberg, and Brox, 2015; Wu et al., 2016; Zhang et al., 2017) take transformations as input parameters. The process in (Dosovitskiy, Springenberg, and Brox, 2015) maps directly from the input parameter space to the output image space; it cannot be applied to generate novel categories and does not study the intrinsic transformation.

5. Flow estimation. Existing flow-estimation methods (Liu, Yuen, and Torralba, 2011; Dosovitskiy et al., 2015) estimate flow between images; they do not act as generators for feature transformations. Our framework examines transformations in CNN features that can be applied to generate novel images from various datasets and to perform data augmentation in a network-internal fashion.

Top-down Feature Transformer


The network comprises three layers: an aggregated feature transformation layer, an affine layer performing spatial transformation, and another aggregated feature transformation layer (which does not share weights with the first). Consider the feature map F of a convolutional layer with C channels from a CNN that is pretrained for a discriminative task (e.g., image classification) by feeding an image I. The aggregated feature transformation layer is given as:

T(F) = λ·F + Σ_{i=1}^{n} α_i·t_i(F),

where λ weights the residual connection, the α_i's are parameters governing the weight of each branch, and n is an integer referred to as the number of transformation functions for aggregation. Each transformation function t_i is defined as a convolutional network in which each layer applies a convolution followed by the rectified linear (ReLU) activation.
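
A minimal sketch of the aggregation (our notation; the branch networks t_i, which are small conv+ReLU nets in the paper, are stubbed here with simple element-wise functions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def aggregated_transform(F, branches, alphas):
    """Weighted sum of n transformation functions applied to feature map F.
    branches: list of callables standing in for the conv+ReLU branches;
    alphas: the per-branch aggregation weights."""
    out = np.zeros_like(F)
    for t_i, a_i in zip(branches, alphas):
        out += a_i * t_i(F)
    return out

# Toy stand-ins: with these two branches and weights (+1, -1), the
# aggregation reconstructs F exactly, since relu(F) - relu(-F) == F.
branches = [lambda F: relu(F), lambda F: relu(-F)]
alphas = [1.0, -1.0]
F = np.array([[1.0, -2.0], [3.0, -4.0]])
out = aggregated_transform(F, branches, alphas)
```

The positive/negative pairing of weights mirrors the configuration described below for modeling emerging and disappearing patterns.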

The transformed feature maps are then fed into an affine layer that applies spatial transformations, including translation, rotation, and scaling, to each channel individually (with bilinear interpolation). The spatial transformations are modeled by the product of three transformation matrices R(θ), S(s), and T(t), where θ, s, and t are the top-down controls of rotation, scaling, and translation, respectively. The results are further transformed by another aggregated residual layer to generate the output CNN features. Figure 1 illustrates the architecture. We build a TFT for each convolutional layer in the pre-trained CNN. For clarity, we denote the collection of the overall transformation parameters as P.
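
As an illustrative sketch (ours; the paper gives no code), the product of the three controlled transforms can be composed in homogeneous coordinates:

```python
import numpy as np

def rotation(theta):
    # 3x3 homogeneous rotation matrix for angle theta (radians)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def scaling(s):
    # uniform scaling by factor s
    return np.array([[s, 0.0, 0.0],
                     [0.0, s, 0.0],
                     [0.0, 0.0, 1.0]])

def translation(tx, ty):
    # shift by (tx, ty)
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def affine(theta, s, tx, ty):
    # product of the three top-down controlled transforms R(theta)S(s)T(t)
    return rotation(theta) @ scaling(s) @ translation(tx, ty)
```

With θ = 0, s = 1, and zero translation, the product is the identity; nonzero controls compose into a single matrix that the affine layer applies to the sampling grid of each channel.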

In the case of modeling spatial transformation, we manually set λ, fix half of the α_i to a positive value, and set the other half to its negative. This configuration enables our method to model both the emerging patterns and the disappearing ones in CNN feature maps when spatial transformations are applied to the input images. We use different α_i for the task of image style transfer, as discussed below.

Model Training

We train our proposed networks in a supervised manner by minimizing the average Euclidean distance between the generated feature maps and the ground-truth feature maps for any image and any transformation parametrized by P in the training set. The ground-truth feature maps can be collected automatically from CNN features of input images under the corresponding spatial transformations. Specifically, for an image I and its transformed version I_P under the transformation parametrized by P, the loss is given as ‖F̂ − F(I_P)‖², where F̂ is the feature maps generated by our TFT and F(I_P) is the ground-truth feature maps from the input image I_P.
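
The objective can be sketched as a mean squared error between feature maps (illustrative only; the real loss operates on CNN feature tensors):

```python
import numpy as np

def tft_loss(generated, target):
    """Average squared Euclidean distance between the feature maps
    generated by the transformer and the ground-truth feature maps
    extracted from the transformed input image."""
    return np.mean((generated - target) ** 2)
```

Because the targets come from running the CNN on explicitly transformed images, this supervision requires no manual labeling.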

Generating New Images

Upon obtaining the transformed feature maps for some transformation parametrized by P, we can generate images by “inverting” them in CNNs, similar to the process in (Gatys, Ecker, and Bethge, 2015), i.e., via back-propagation. We use transformed features from a set of layers altogether to generate images, with a TFT trained for each layer. One resulting benefit is better quality of the generated images.

In practice, given a set of CNN layers S, where l ∈ S indexes a convolutional layer, we generate images by minimizing the combined representation loss Σ_{l∈S} w_l·‖Φ_l(x) − F̂_l‖², where the w_l and the regularization coefficient are hyperparameters and Φ_l(x) represents the feature maps from the l-th CNN layer of the input image x. Similar to (Mahendran and Vedaldi, 2015), we also add a regularization term R(x).
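
As a minimal sketch of this inversion-by-gradient-descent idea (with toy linear maps standing in for the CNN layers; all names and values are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear maps standing in for the feature extractors of two CNN layers.
Ws = [rng.standard_normal((8, 4)) for _ in range(2)]
weights = [1.0, 0.5]   # per-layer loss weights (hyperparameters)
gamma = 1e-4           # regularization coefficient

x_true = rng.standard_normal(4)
targets = [W @ x_true for W in Ws]   # "transformed" feature maps to invert

# Recover an image (here, a 4-vector) by gradient descent on the
# combined representation loss plus an L2 regularizer.
x = np.zeros(4)
lr = 0.005
for _ in range(30000):
    grad = 2.0 * gamma * x
    for W, t, w in zip(Ws, targets, weights):
        grad += 2.0 * w * W.T @ (W @ x - t)   # gradient of w * ||Wx - t||^2
    x -= lr * grad
residual = float(np.linalg.norm(Ws[0] @ x - targets[0]))
```

In the actual method the gradients flow through the pre-trained CNN via back-propagation rather than through closed-form linear maps.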

Network Internal Data Augmentation

As suggested in (Gallagher, Tang, and Tu, 2015), the learned generators can be applied to perform data augmentation inside CNNs. Instead of feeding newly generated images into CNN training, we directly perform internal data augmentation in an online manner by applying the learned TFT to transform the CNN features. In our experiments, we fine-tune a CNN via a TFT trained upon the same CNN; by this means, the proposed TFT, which learns to capture the CNN feature flow under spatial transformations, can be used to improve the CNN’s invariance to such transformations. We examine this in the experiments section.
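
A schematic sketch of such an online internal augmentation step (all modules are stand-in callables, not the actual networks):

```python
import random

def forward_with_internal_aug(x, lower_layers, tft, upper_layers,
                              sample_params, p_aug=0.5):
    """Fine-tuning forward pass with online internal augmentation.
    With probability p_aug, the intermediate feature maps are passed
    through the learned TFT under randomly sampled top-down parameters."""
    feats = lower_layers(x)
    if random.random() < p_aug:
        feats = tft(feats, sample_params())
    return upper_layers(feats)

# Toy stand-ins for the real modules, just to show the data flow.
augmented = forward_with_internal_aug(
    3.0,
    lower_layers=lambda x: x + 1.0,   # features before the TFT layer
    tft=lambda f, p: f * p,           # learned TFT (stub)
    upper_layers=lambda f: f * 10.0,  # rest of the network
    sample_params=lambda: 2.0,        # sampled top-down parameters (stub)
    p_aug=1.0)                        # always augment in this demo
```

No extra images ever touch the input pipeline; the augmentation happens entirely on intermediate features.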

Flow Field Calculation

A feature map with just one black pixel and all other pixels white, referred to as a unit feature map, is fed into the feature transformation model. The location of the black pixel is the starting point of a flow, and the center of mass of the output feature map is taken as the end point of the flow. Evenly spaced starting points are selected to calculate flows across the feature map space, yielding a flow map of a given transformation under a feature transformation model. Our TFT is shown to learn clear flow fields while achieving non-linear transformations across CNN channels (Figure 3).
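
This procedure can be sketched as follows (a single active pixel stands in for the paper's black-on-white unit map, and the transformation model is a stub):

```python
import numpy as np

def center_of_mass(m):
    """Weighted centroid of a non-negative 2D map."""
    ys, xs = np.nonzero(m)
    w = m[ys, xs]
    return (np.sum(ys * w) / np.sum(w), np.sum(xs * w) / np.sum(w))

def flow_at(start, transform, size=9):
    """Flow end point for one starting pixel: feed a 'unit' feature map
    through the transformation model and locate its center of mass."""
    unit = np.zeros((size, size))
    unit[start] = 1.0
    out = transform(unit)
    return center_of_mass(out)

# Stand-in transformation model: shift every activation one pixel right.
shift_right = lambda m: np.roll(m, 1, axis=1)
end = flow_at((4, 4), shift_right)   # flow from (4, 4) to its end point
```

Repeating this over a grid of evenly spaced start pixels yields the full flow map for one transformation.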


Image Style Transfer

We adapt our proposed TFT to the task of image style transfer, where different styles become the new top-down control. Specifically, we always feed an identity operation to the affine transformation layer, as style transfer does not involve global spatial operations; we instead set the α_i, the weight associated with each feature transformation function, to values generated from the style variable (one-hot encoded) via a two-layer fully-connected regression network. We find that simply setting the residual weight λ to 0 (discarding the residual connection) helps make training more stable, likely due to the nature of the image style transfer task. This experiment aims to show that TFT is easily extendable beyond spatial transformations; we choose the total number of styles to be 10.
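
A sketch of mapping a one-hot style code to the branch weights through a two-layer fully-connected network (the 10 styles come from the text; the hidden size, branch count, and initialization are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_styles, hidden, n_branches = 10, 16, 8

# Two-layer fully-connected regression from one-hot style to branch weights.
W1 = rng.standard_normal((hidden, n_styles)) * 0.1
W2 = rng.standard_normal((n_branches, hidden)) * 0.1

def style_to_alphas(style_id):
    s = np.zeros(n_styles)
    s[style_id] = 1.0               # one-hot style code
    h = np.maximum(W1 @ s, 0.0)     # hidden layer with ReLU
    return W2 @ h                   # per-branch weights alpha_i

alphas = style_to_alphas(3)
```

In training, these two weight matrices would be learned jointly with the transformation branches; mixing styles amounts to feeding a soft (non-one-hot) style vector.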

Top-down Generator Design

Our proposed TFT combines CNN-based feature transformations with explicit spatial transformations across all feature channels. As a result, TFT is particularly good at capturing the underlying flow transformation of the CNN feature maps. Our ablation studies indicate that removing any of the modules results in poor performance on this task. Our method generalizes well to several datasets while maintaining clear feature flow fields. The top-down approach with controllable parameters makes our model very data-efficient compared to existing methods, as demonstrated below in the experiments section.

The top-down approach remains effective and efficient in modeling the underlying feature transformations even when we extend our method beyond spatial transformations, e.g., to the task of image style transfer. Specifically, in our design, image styles as the top-down information directly control the weights associated with each of the feature transformation functions, which serve as a basis of the image style space. It turns out that only a few image data are adequate for our method to learn and generate relatively good transformed images.


We perform three experiments. The first two focus on modeling spatial transformations; the last one focuses on image style transfer.

In the first experiment, we train our top-down feature transformer (TFT) on the MNIST dataset and evaluate the learned generator on images from the notMNIST dataset. Under all three studied transformations, namely rotation, translation, and scaling, we generate new images from transformed feature maps produced by TFT. We compare our results to those of (Reed et al., 2015) (DVAM) on the MNIST and notMNIST datasets and achieve better results both graphically and numerically. Our framework is able to perform out-of-bound transformations, i.e., transformations with arbitrary parameters outside the range used in training. Our model generalizes well to new datasets, as the learned flow transformations are generic, and it is much more data-efficient. Moreover, as TFT is not tied to a fixed size of input CNN features, it can perform feature transformations for input images of arbitrary size, while (Reed et al., 2015) cannot. We also show that STN (Jaderberg et al., 2015) cannot be adapted to explicitly take user-specified parameters to generate images.

In the second and third experiments, we utilize TFT to perform network internal data augmentation and image style transfer, respectively.

Original Image, Rotation (30°), Rotation (60°), Rotation (90°), Scaling (×0.9), Scaling (×1.1), Translation, Combination
Figure 2: Comparison of transformations on the MNIST and notMNIST datasets. Rotations of 60° and 90° are beyond the training settings (from −30° to 30°). Images generated by DVAM on notMNIST are vague and even lose the pattern when transformation parameters are out-of-bound. Images generated by our model show a clear pattern of transformation.

Training and Evaluation on MNIST

We resize each 28×28 image in the MNIST dataset to 44×44 by zero-padding the original images, so that each image has enough space for translation and scaling. For data in the training set, we apply a combination of all three studied transformations to the input images and record the CNN feature maps across different convolutional layers. Specifically, for translation, we perform two-dimensional shifting of the input images, with each axis ranging from −7 to +7 pixels, i.e., 225 combinations. For rotation, we use 13 different angles, rotating the input images by 5°, 10°, 15°, 20°, 25°, and 30° both clockwise and counterclockwise, as well as 0°. For scaling, we choose three factors: 0.9, 1.0, and 1.1 (1.0 indicates no scaling). These four parameters (horizontal shift, vertical shift, rotation angle, and scaling factor) are the top-down transformation parameters in the training set. We form 3-tuples (F(I), P, F(I_P)), where F(I) is the feature maps of the original image and F(I_P) is the feature maps of the corresponding image after the spatial transformation parameterized by P. Our training set is a collection of these 3-tuples. In practice, we apply combinations of the three transformations with randomly generated parameters from the ranges specified above.
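
The sampling of training parameters described above can be sketched as (ranges taken from the text; the function name is ours):

```python
import random

# 13 rotation angles at 5-degree steps: 0 plus +/-5 ... +/-30 degrees.
ANGLES = [0] + [d for a in range(5, 35, 5) for d in (a, -a)]

def sample_params():
    """Sample one top-down parameter tuple P = (tx, ty, theta, s)
    from the ranges used to build the MNIST training set."""
    tx = random.randint(-7, 7)              # horizontal shift, in pixels
    ty = random.randint(-7, 7)              # vertical shift, in pixels
    theta = random.choice(ANGLES)           # rotation angle, in degrees
    s = random.choice([0.9, 1.0, 1.1])      # scaling factor
    return tx, ty, theta, s
```

Each training tuple pairs a sampled P with the feature maps of the original and transformed images.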

In our experiments, we use a traditional CNN (with 3 convolutional layers, each consisting of conv + ReLU + max-pooling, and 2 fully connected layers) pre-trained on MNIST. We train three TFTs for the three convolutional layers, respectively, and use all of these conv layers to generate new images. For each TFT, we set the depth of each transformation function to 2, the number of filters of the intermediate layer to 32, the filter size to 5×5, and the number of branches n to 8.

The learning process of our TFT is supervised, and the objective function is simply the Euclidean distance between the generated feature maps and the target ones, as described in section 3.2. We train our networks for 200k steps with a regularization coefficient of 0.0001, using the ADAM optimizer (Kingma and Ba, 2014) with a learning rate of 0.0001 and a batch size of 128.

We also train the network proposed in (Reed et al., 2015), namely DVAM, in the same settings using the same training set. We compare the images generated from MNIST to those of DVAM in Figure 2. When evaluated on MNIST, our approach outperforms theirs in terms of both feature flow representation and out-of-bound transformations. To further demonstrate our approach’s data efficiency and capability of capturing the underlying feature transformation, we train another DVAM, denoted DVAM+, in the same settings except that (1) we double the rotation parameters in the training data (the interval length becomes 2.5 degrees instead of five degrees on MNIST) and (2) we triple the number of training epochs. The out-of-bound transformation of DVAM+ on MNIST (rotation by 60 and 90 degrees) is improved (mSPE, i.e., mean squared pixel error, of 0.04438) but still much worse than that of TFT (mSPE = 0.000671). The cross-dataset performance of DVAM+ for rotation in general (30, 60, and 90 degrees) even drops, as reported in Table 1.

Moreover, we train an STN with the same CNN as above, inserting a spatial transformer (the same localization net as in ST-CNN-multi in the original paper) before each convolutional layer. On MNIST, we evaluate the feature maps transformed by the STN by manually setting the parameters of each of the STs. It fails to model the transformations well; for example, its mSPE is 0.1052 for rotation, much worse than TFT’s 0.000671. STN is indeed not designed for, and not easily applicable to, TFT’s tasks.

Figure 3: Feature flows of TFT and DVAM (Reed et al., 2015). Flows generated by our model show a clear pattern while those generated by DVAM do not.

Learned Feature Flow

One critical advantage of our top-down feature transformer is its clear representation of the learned feature flow. Using the method described in section 3.5, we compute the feature flow fields resulting from our proposed TFT learned from the CNN pre-trained on MNIST. As a comparison, we do the same for DVAM (Reed et al., 2015), whose encoder-decoder structure has similar CNN dimensions. We perform this flow experiment on MNIST images with rotation and scaling. The learned flow transformations in our model have clearer rotation and scaling deformation patterns than those of (Reed et al., 2015), as shown in Figure 3.

Out-of-bound Transformations

Another advantage of our model is its robustness to out-of-bound parameters. We train our model on MNIST with rotation parameters ranging from −30° to 30°, yet we test the rotation transformation with 60° and 90°. For comparison, we also perform this experiment with (Reed et al., 2015). Our model succeeds on the out-of-bound transformations while the model in (Reed et al., 2015) fails, as illustrated in the 3rd and 4th columns of Figure 2.

Input; Rotation (30°, 60°, 90°); Scaling (×0.9, ×1.1); Translation (varied); Compositional (varied)
Figure 4: Images generated by TFT from the Kimia, MPEG-7, and COIL datasets. Images from different datasets are transformed using top-down transformers learned from the MNIST dataset. This is an inter-dataset evaluation demonstrating that our top-down model is generic and not tied to the training data.

Evaluation on non-MNIST Dataset

Model translation rotation scaling combination
DVAM 0.091315 0.077126 0.056829 0.062878
DVAM+ - 0.08430 - -
Ours 0.001492 0.005311 0.004973 0.008571
Table 1: Mean squared pixel prediction error under different affine transformations for DVAM and for our model on the notMNIST dataset.
Original, Rotation (30°), Rotation (−30°), Scaling (×1.3), Scaling (×0.75), Translation (up 30)
FlowPCA (Gallagher, Tang, and Tu, 2015)
Figure 5: Comparison of transformations on a natural image.

We further test our model and the analogy network (DVAM) of (Reed et al., 2015), both trained on the same MNIST dataset, to investigate their inter-dataset performance. In this experiment, we use our trained TFT based on the feature maps of the same traditional CNN discussed in section 5.1. To achieve visual and numerical effects similar to those on the MNIST dataset, we normalize the notMNIST dataset using the max norm. The results demonstrate that our networks have learned flow transformations of the CNN features that explicitly utilize top-down information and generalize well to new types of data. Numerical results are in Table 1 and visual reconstructions of transformed features are in Figure 2.

Our model can easily be applied to feature maps of images of different sizes, while (Reed et al., 2015) fails to do so. We apply the same learned TFT to CNN features of images from the Kimia-99 (Sharvit et al., 1998; Belongie, Malik, and Puzicha, 2002), MPEG-7 Shape (Latecki, Lakamper, and Eckhardt, 2000), and COIL-20 (Nene, Nayar, and Murase, 1996) datasets. The generated images are illustrated in Figure 4.

Evaluation on Natural Images

We also apply our learned TFT to natural images. Since the network pre-trained on MNIST accepts only single-channel images, we perform the flow transformation on colored images channel by channel. We compare our results with those of (Gallagher, Tang, and Tu, 2015) in Figure 5. We use a mask on the input CNN features when applying our method.
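
The channel-by-channel application can be sketched as (helper names are ours; the single-channel transform stands in for the MNIST-trained model):

```python
import numpy as np

def transform_color(image, single_channel_transform):
    """Apply a transform trained on single-channel inputs to an H x W x C
    color image one channel at a time, then restack the channels."""
    return np.stack([single_channel_transform(image[..., c])
                     for c in range(image.shape[-1])], axis=-1)
```

For example, with a stub transform `lambda ch: ch * 2.0`, every channel of an RGB array is processed independently and the output keeps the original H x W x C shape.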

Network Internal Data Augmentation

Method top-1 top-5
VGG-16 (Simonyan and Zisserman, 2014) 24.8 7.5
VGG-16 fine-tuned via direct data aug. 24.9 7.9
VGG-16 fine-tuned via TFT 24.4 7.3
Table 2: ImageNet Validation Set Error (in %).

As previously mentioned, the learned TFT can be used to perform data augmentation inside a CNN. In this experiment, we train our proposed TFT on CNN features from the third pooling layer of the pre-trained VGG-16 (Simonyan and Zisserman, 2014). For this TFT, we set the depth of each transformation function to 2, the number of filters of the intermediate layer to 64, the filter size to 5×5, and the number of branches to 16. We train the TFT using images synthesized from PASCAL VOC 2012 (Everingham et al.). Specifically, we select 874 segmented images, each with a single object approximately positioned in the center; we replace the background with the average color of images from the ImageNet dataset (Russakovsky et al., 2015b) and generate rotation, translation, and scaling transformations, similar to the settings in the previous experiments with MNIST. After training the model, we insert the TFT into the corresponding layer of the pre-trained VGG-16 and perform fine-tuning. We sample the top-down spatial control parameters uniformly from the training ranges and thereby achieve online data augmentation inside the network. Our fine-tuning takes around 3 epochs for the network internal data augmentation. The results show marginal improvement, as reported in Table 2.

Note that we use only a small amount of data to train TFT, and the original VGG-16 model already utilizes other direct data augmentation. For comparison, we perform fine-tuning using direct data augmentation (rotation, scaling, and translation). As this creates unfilled holes and artifacts, it results in a slight increase in the top-1 and top-5 validation errors. Using TFT provides improvement because TFT captures the intrinsic feature transformation (with fewer artifacts) subject to the top-down transformation.

Image Style Transfer

Style 1 Style 2
Input Transfer: 1 Transfer: 2 Transfer: 1+2
Figure 6: Image style transfer experiments. 1+2 means a combined style of style 1 and 2.

We also adapt TFT as described in section 3.6 to the task of image style transfer. We select a few style images and 200 images from MS-COCO (Lin et al., 2014) as content images, and we generate transformed images as ‘ground truth’ by the method in (Li et al., 2017). We build TFT upon relu_3_1 of VGG-19 and leverage the pre-trained Decoder3 of (Li et al., 2017) to generate images from the transformed feature maps. We set the depth of each transformation function to 5 and the number of filters of the intermediate layers to 512. An illustration is shown in Figure 6 for two individual styles and a combination of them.


We have developed the top-down flow transformer (TFT), which learns a top-down generator by studying the internal feature transformations across CNN layers. The learned transformer is evaluated both within and across datasets, demonstrating a clear advantage over models whose transformations are tied to particular training data. TFT points to a promising direction in the study of a CNN's internal representations and top-down processes.


  • Belongie, Malik, and Puzicha (2002) Belongie, S.; Malik, J.; and Puzicha, J. 2002. Shape matching and object recognition using shape contexts. TPAMI 24(4):509–522.
  • Blake and Yuille (1993) Blake, A., and Yuille, A. 1993. Active vision.
  • Cootes, Edwards, and Taylor (2001) Cootes, T. F.; Edwards, G. J.; and Taylor, C. J. 2001. Active appearance models. IEEE Transactions on pattern analysis and machine intelligence 23(6):681–685.
  • Dodwell (1983) Dodwell, P. C. 1983. The lie transformation group model of visual perception. Perception & Psychophysics 34(1):1–16.
  • Dosovitskiy et al. (2015) Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas, C.; Golkov, V.; van der Smagt, P.; Cremers, D.; and Brox, T. 2015. Flownet: Learning optical flow with convolutional networks. In ICCV.
  • Dosovitskiy, Springenberg, and Brox (2015) Dosovitskiy, A.; Springenberg, J. T.; and Brox, T. 2015. Learning to generate chairs with convolutional neural networks. In CVPR.
  • Elman (1991) Elman, J. L. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning 7:195–225.
  • (8) Everingham, M.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results.
  • Gallagher, Tang, and Tu (2015) Gallagher, P. W.; Tang, S.; and Tu, Z. 2015. What happened to my dog in that network: Unraveling top-down generators in convolutional neural networks. arXiv preprint arXiv:1511.07125.
  • Gardner et al. (2015) Gardner, J. R.; Upchurch, P.; Kusner, M. J.; Li, Y.; Weinberger, K. Q.; Bala, K.; and Hopcroft, J. E. 2015. Deep manifold traversal: Changing labels with convolutional features. In ECCV.
  • Gatys, Ecker, and Bethge (2015) Gatys, L. A.; Ecker, A. S.; and Bethge, M. 2015. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576.
  • Gibson (2002) Gibson, J. J. 2002. A theory of direct visual perception. Vision and Mind: selected readings in the philosophy of perception 77–90.
  • Girshick et al. (2014) Girshick, R.; Donahue, J.; Darrell, T.; and Malik, J. 2014. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In CVPR.
  • Goodfellow et al. (2014) Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. In NIPS.
  • Gregor et al. (2015) Gregor, K.; Danihelka, I.; Graves, A.; Rezende, D.; and Wierstra, D. 2015. Draw: A recurrent neural network for image generation. In ICML.
  • Gregory (1980) Gregory, R. L. 1980. The intelligent eye. Weidenfeld and Nicolson.
  • Grenander (1993) Grenander, U. 1993. General pattern theory-A mathematical study of regular structures. Clarendon Press.
  • He et al. (2016) He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In CVPR.
  • Hill and Johnston (2007) Hill, H., and Johnston, A. 2007. The hollow-face illusion: Object-specific knowledge, general assumptions or properties of the stimulus? 36:199–223.
  • Hinton, Osindero, and Teh (2006) Hinton, G. E.; Osindero, S.; and Teh, Y. W. 2006. A fast learning algorithm for deep belief nets. Neural computation 18:1527–1554.
  • Hochreiter and Schmidhuber (1997) Hochreiter, S., and Schmidhuber, J. 1997. Long short-term memory. Neural Computation.
  • Jaderberg et al. (2015) Jaderberg, M.; Simonyan, K.; Zisserman, A.; et al. 2015. Spatial transformer networks. In NIPS.
  • Kersten, Mamassian, and Yuille (2004) Kersten, D.; Mamassian, P.; and Yuille, A. 2004. Object perception as bayesian inference. Annual Review of Psychology 55(1):271–304.
  • Kingma and Ba (2014) Kingma, D., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Krizhevsky, Sutskever, and Hinton (2012) Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In NIPS.
  • Kulkarni et al. (2015) Kulkarni, T. D.; Whitney, W. F.; Kohli, P.; and Tenenbaum, J. 2015. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, 2539–2547.
  • Latecki, Lakamper, and Eckhardt (2000) Latecki, L. J.; Lakamper, R.; and Eckhardt, T. 2000. Shape descriptors for non-rigid shapes with a single closed contour. In CVPR, volume 1, 424–429.
  • Li et al. (2017) Li, Y.; Fang, C.; Yang, J.; Wang, Z.; Lu, X.; and Yang, M. 2017. Universal style transfer via feature transforms. CoRR abs/1705.08086.
  • Lin and Lucey (2016) Lin, C.-H., and Lucey, S. 2016. Inverse compositional spatial transformer networks. arXiv preprint arXiv:1612.03897.
  • Lin et al. (2014) Lin, T.; Maire, M.; Belongie, S. J.; Bourdev, L. D.; Girshick, R. B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: common objects in context. CoRR abs/1405.0312.
  • Liu, Yuen, and Torralba (2011) Liu, C.; Yuen, J.; and Torralba, A. 2011. Sift flow: Dense correspondence across scenes and its applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(5):978–994.
  • Long, Shelhamer, and Darrell (2015) Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. CVPR.
  • Mahendran and Vedaldi (2015) Mahendran, A., and Vedaldi, A. 2015. Understanding deep image representations by inverting them. In CVPR.
  • Marr (2010) Marr, D. 2010. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. MIT Press.
  • Nene, Nayar, and Murase (1996) Nene, S. A.; Nayar, S. K.; and Murase, H. 1996. Columbia object image library (COIL-20). Technical report.
  • Reed et al. (2015) Reed, S. E.; Zhang, Y.; Zhang, Y.; and Lee, H. 2015. Deep visual analogy-making. In NIPS.
  • Ridley Stroop (1992) Ridley Stroop, J. 1992. Studies of interference in serial verbal reactions. 121:15–23.
  • Russakovsky et al. (2015a) Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; and Fei-Fei, L. 2015a. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211–252.
  • Russakovsky et al. (2015b) Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; Berg, A. C.; and Fei-Fei, L. 2015b. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211–252.
  • Sharvit et al. (1998) Sharvit, D.; Chan, J.; Tek, H.; and Kimia, B. B. 1998. Symmetry-based indexing of image databases. In Content-Based Access of Image and Video Libraries, 1998. Proceedings. IEEE Workshop on, 56–62. IEEE.
  • Simonyan and Zisserman (2014) Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.
  • Tu et al. (2003) Tu, Z.; Chen, X.; Yuille, A. L.; and Zhu, S. C. 2003. Image parsing: unifying segmentation, detection, and recognition. In ICCV.
  • Wu et al. (2010) Wu, Y. N.; Si, Z.; Gong, H.; and Zhu, S.-C. 2010. Learning active basis model for object detection and recognition. International journal of computer vision 90(2):198–235.
  • Wu et al. (2016) Wu, J.; Xue, T.; Lim, J. J.; Tian, Y.; Tenenbaum, J. B.; Torralba, A.; and Freeman, W. T. 2016. Single image 3d interpreter network. In ECCV.
  • Wu et al. (2017) Wu, W.; Kan, M.; Liu, X.; Yang, Y.; Shan, S.; and Chen, X. 2017. Recursive spatial transformer (rest) for alignment-free face recognition. In ICCV.
  • Xie and Tu (2015) Xie, S., and Tu, Z. 2015. Holistically-nested edge detection. In ICCV.
  • Yuille, Hallinan, and Cohen (1992) Yuille, A. L.; Hallinan, P. W.; and Cohen, D. S. 1992. Feature extraction from faces using deformable templates. International journal of computer vision 8(2):99–111.
  • Zhang et al. (2017) Zhang, Q.; Cao, R.; Wu, Y. N.; and Zhu, S.-C. 2017. Growing interpretable part graphs on convnets via multi-shot learning. In AAAI.
  • Zhu, Mumford, and others (2007) Zhu, S.-C.; Mumford, D.; et al. 2007. A stochastic grammar of images. Foundations and Trends® in Computer Graphics and Vision 2(4):259–362.