1 Introduction
In machine learning, solving a classification task typically consists of finding a function $f: \mathcal{X} \to \mathcal{Y}$ from a rather large data space $\mathcal{X}$ to a much smaller space of labels $\mathcal{Y}$. Such a function will therefore necessarily be invariant to a lot of transformations of its input data. It is now clear that being able to characterize such transformations can greatly help the learning procedure, one of the most striking examples being perhaps the use of convolutional neural networks (CNNs) for image classification (Krizhevsky et al., 2012), with built-in translation invariance via convolutions and subsequent pooling operations. But as a convolutional layer is essentially a fully connected layer with a constraint tying some of its weights together (LeCun et al., 1995), one could expect other invariances to be encoded in its weights after training. Indeed, from an empirical perspective, CNNs have been observed to naturally learn more invariant features with depth (Goodfellow et al., 2009; Lenc & Vedaldi, 2015), and from a theoretical perspective, it has been proven that under some conditions satisfied by the weights of a convolutional layer, this layer could be rewritten as performing a convolution over a bigger group of transformations than translations alone (Mallat, 2016).
It is exciting to note that there has recently been a lot of interest in theoretically extending such successful invariant computational structures to general groups of transformations, notably with group invariant scattering operators (Mallat, 2012), deep symmetry networks (Gens & Domingos, 2014), group invariant signal signatures (Anselmi et al., 2015), group invariant kernels (Mroueh et al., 2015) and group equivariant convolutional networks (Cohen & Welling, 2016). However, practical applications of these have mostly remained limited to linear and affine transformations. Indeed, it is a challenge in itself to parametrize more complicated, nonlinear label-preserving transformations, especially as they need to depend on the dataset. In this work, we seek to answer this fundamental question:
What invariances in the data has a CNN learned during its training on a classification task and how can we extract and parameterize them?
The following is a brief summary of our method: given a CNN already trained on a labeled dataset, we train a generative adversarial network (GAN) (Goodfellow et al., 2014) to produce filters of a given layer of this CNN, such that the filters' convolution output is indistinguishable from the one obtained with the real CNN. We combine this with an InfoGAN (Chen et al., 2016) discriminator to prevent the generator from always producing the same filters. As a result, the generator provides us with a smooth, data-dependent, non-trivial parametrization of the set of filters of this CNN, characterizing complicated transformations that are irrelevant for this classification task. Finally, we describe how to visualize what these smooth transformations of the filters correspond to in the image space.
2 Background
2.1 Generative Adversarial Networks
A Generative Adversarial Network (Goodfellow et al., 2014) consists of two players, the generator $G$ and the discriminator $D$, playing a minimax game in which $G$ tries to produce samples indistinguishable from some given true distribution $P_{\mathrm{data}}$, and $D$ tries to distinguish between real and generated samples. $G$ typically maps a random noise $z$ to a generated sample $G(z)$, transforming the noise distribution $P_z$ into a distribution $P_G$ supposed to match $P_{\mathrm{data}}$. The objective function of this minimax game is given by
$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim P_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z}\left[\log\left(1 - D(G(z))\right)\right].$$
The noise space, input of the generator, is also called its latent space.
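The value function above can be made concrete with a small numerical sketch. The functions `D` and `G` below are hypothetical closed-form stand-ins (not trained networks, and not from the paper); the point is only to show how the two expectation terms of $V(D, G)$ are estimated from samples.

```python
import numpy as np

def D(x):
    # hypothetical discriminator: logistic score, higher for samples near 1.0
    return 1.0 / (1.0 + np.exp(-(1.0 - np.abs(x - 1.0))))

def G(z):
    # hypothetical generator: a simple affine map of the noise
    return 0.5 * z + 1.0

def gan_value(x_real, z):
    # V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], estimated by sample means
    real_term = np.mean(np.log(D(x_real)))
    fake_term = np.mean(np.log(1.0 - D(G(z))))
    return real_term + fake_term

rng = np.random.default_rng(0)
x_real = rng.normal(1.0, 0.1, size=1000)  # samples from the "true" distribution
z = rng.normal(0.0, 1.0, size=1000)       # samples from the noise distribution
v = gan_value(x_real, z)
```

In a real GAN, $D$ ascends this value while $G$ descends it; both terms are always negative since $D$ outputs probabilities in $(0, 1)$.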
2.2 Information Maximizing Generative Adversarial Nets
In an InfoGAN (Chen et al., 2016), the generator takes as input not only the noise $z$ but also another variable $c$, called the latent code. The aim is to make the generated samples $G(z, c)$ depend on $c$ in a structured way, for instance by choosing independent $c_i$'s, modelling $P(c_1, \dots, c_L)$ as $\prod_i P(c_i)$. In order to avoid a trivial correspondence between $c$ and $G(z, c)$, the InfoGAN procedure maximizes the mutual information $I(c; G(z, c))$.
The mutual information
$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)$$
between two random variables $X$ and $Y$, defined with the entropy $H(X) = -\mathbb{E}_{x \sim P_X}[\log P_X(x)]$, is symmetric, measures the amount of information that is known about the value of one random variable when the value of the other one is known, and is equal to zero when they are independent. Hence, maximizing the mutual information prevents $G(z, c)$ from being independent of $c$. In practice, $I(c; G(z, c))$ is indirectly maximized using a variational lower bound
$$L_I(G, Q) = \mathbb{E}_{c \sim P(c),\, x \sim G(z, c)}\left[\log Q(c \mid x)\right] + H(c) \le I(c; G(z, c)),$$
where $Q(c \mid x)$ approximates $P(c \mid x)$. The minimax game becomes
$$\min_{G, Q} \max_D \; V(D, G) - \lambda L_I(G, Q),$$
where $\lambda$ is a hyperparameter controlling the mutual information regularization.
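The log-likelihood term in the variational bound has a simple form for continuous codes. The sketch below is illustrative only (the functions `G` and `Q_mean` are made-up toy maps, not trained networks): when $Q(c \mid x)$ is modelled as a fixed-variance Gaussian, $\mathbb{E}[\log Q(c \mid x)]$ reduces, up to additive constants, to a negative squared reconstruction error between the true code and the predicted code.

```python
import numpy as np

def G(z, c):
    # hypothetical generator: mixes noise and latent code
    return z + 2.0 * c

def Q_mean(x):
    # hypothetical code predictor: tries to invert G's use of c
    return 0.5 * x

def loglik_term(c, z, sigma=1.0):
    # E[log N(c; Q_mean(G(z, c)), sigma^2)] up to the constant -0.5*log(2*pi*sigma^2):
    # a negative mean squared reconstruction error on the code
    c_hat = Q_mean(G(z, c))
    return -np.mean((c - c_hat) ** 2) / (2.0 * sigma ** 2)

rng = np.random.default_rng(1)
c = rng.normal(size=1000)            # sampled codes
z = rng.normal(size=1000) * 0.1      # small noise: reconstruction is easy
term = loglik_term(c, z)             # close to 0 when Q recovers c well
```

Maximizing this term over $G$ and $Q$ is what forces the generated samples to retain information about $c$.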
3 Extracting Invariances by Learning Filters
Let $f$ be an already trained CNN, whose $l$-th layer representation of an image $x$ will be denoted by $f_l(x)$, with $f_0(x) = x$. As our goal is to learn what kind of filters such a CNN would learn at layer $l$, it could be tempting to simply train a GAN to match the distribution of filters of this CNN's $l$-th layer. However, this set of filters is far too small a dataset to train a GAN on, which would cause the generator to massively overfit instead of extracting the smooth, hidden structure lying behind our discrete set of filters. To cope with this problem, instead of feeding the discriminator $D$ alternately with filters produced by $G$ and real filters from $f$, we propose to feed $D$ with the activations that these filters produce when the data passes through the CNN, i.e. alternately with $f_l(x)$ or $\tilde{f}_l(x, G(z))$, corresponding respectively to real and fake samples. Here, $x$ is an image sampled from the data, $z$ is sampled from the latent space of $G$, and $\tilde{f}_l(x, G(z))$ is the activation obtained by passing $x$ through each layer of $f$ up to layer $l$, but with the filters of the $l$-th layer replaced by $G(z)$.
In short, in each step, the generator $G$ produces a set of filters for the $l$-th layer of the CNN. Next, different samples of data are passed through one CNN using its real filters and through the same CNN, but with its $l$-th layer filters replaced by the fake filters produced by $G$. The discriminator $D$ will then try to guess whether the activation it is fed was produced using real or generated filters at the $l$-th layer, while the generator will try to produce filters making the subsequent activations indistinguishable from those obtained with real filters.
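The real/fake activation construction can be sketched as follows. This is a toy stand-in, not the paper's architecture: dense matrix multiplies with assumed shapes replace the actual convolutions, and `fake_filters` stands in for the generator output $G(z)$.

```python
import numpy as np

def conv_layer(x, filters):
    # stand-in for a convolutional layer: channel-mixing matmul + ReLU
    return np.maximum(x @ filters, 0.0)

def forward_to_l(x, layers, filters_l):
    # pass x through layers 1..l-1, then apply layer l with the given filters
    for w in layers[:-1]:
        x = conv_layer(x, w)
    return conv_layer(x, filters_l)

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(3)]  # toy "CNN" weights
x = rng.normal(size=(4, 8))                           # batch of 4 inputs

real_act = forward_to_l(x, layers, layers[-1])        # activations with real filters
fake_filters = rng.normal(size=(8, 8))                # hypothetical G(z) output
fake_act = forward_to_l(x, layers, fake_filters)      # activations with fake filters
# The discriminator is fed real_act and fake_act, never the filters directly.
```

Note that the early layers are shared between the two forward passes; only the $l$-th layer's weights differ.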
However, even though this formulation allows us to train the GAN on a dataset of reasonable size, saving us from an otherwise unavoidable overfitting, $G$ could a priori still always produce the same set of filters to fool $D$, ideally simply reproducing the real filters of $f$. To overcome this problem, we augment our model with an InfoGAN discriminator whose goal is to predict which noise $z$ was used to produce the filters $G(z)$. This prevents $G$ from always producing the same filters, by preventing $z$ and $G(z)$ from being independent random variables. Note that, just like the GAN discriminator, the InfoGAN discriminator does not act directly on the output of $G$, i.e. the filters, but on the activation output that these filters produce.
In this setting, the noise $z$ of the generator plays the role of the latent code of the InfoGAN. As in the original InfoGAN paper (Chen et al., 2016), we train a neural network $Q$ to predict the latent code $z$, which shares all its layers but the last one with the discriminator $D$. Finally, by modelling the latent codes as independent Gaussian random variables, the log-likelihood term in the variational bound is actually given by an $L_2$ reconstruction error. The joint training of these three neural networks is described in Algorithm 1 and illustrated in Figure 1.
The gradient-based updates can use any standard gradient-based learning rule. We used RMSprop in our experiments.
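For reference, the RMSprop update rule mentioned above can be sketched as follows (standard form; the hyperparameter values are illustrative defaults, not those used in the experiments).

```python
import numpy as np

def rmsprop_step(param, grad, cache, lr=1e-2, decay=0.9, eps=1e-8):
    # keep a running average of squared gradients ...
    cache = decay * cache + (1.0 - decay) * grad ** 2
    # ... and scale each step by its root, normalizing the gradient magnitude
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache

# usage: minimize the toy objective f(w) = w^2 starting from w = 5
w, cache = 5.0, 0.0
for _ in range(5000):
    grad = 2.0 * w                     # gradient of f at w
    w, cache = rmsprop_step(w, grad, cache)
# w ends up oscillating in a small neighborhood of the minimum at 0
```

The per-coordinate normalization is what makes the effective step size roughly `lr` regardless of the raw gradient scale.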
Algorithm 1: Minibatch stochastic gradient descent training of $D$, $G$ and $Q$.
4 Visualizing the learned transformations
Using our method, we can parameterize the filters of a trained CNN and thus characterize its learned invariances. But in order to assess what has actually been learned, we need a way to visualize these invariances once the GAN has been trained. More specifically, given a data sample $x$, we would like to know which transformations of $x$ the CNN regards as invariant. We do this in the following manner:
We take some latent noise vector $z_0$ and obtain its generated filters $G(z_0)$. Using those filters, we pass the data sample $x$ through the network to obtain $\tilde{f}_l(x, G(z_0))$, which we call the activation profile of $x$ given $z_0$. Next we select two dimensions $i$ and $j$ of the latent space at random and construct a grid of noise vectors $z_{(p, q)}$ by moving around $z_0$ along the dimensions $i$ and $j$ in a small neighborhood.
For each $z_{(p, q)}$, we use gradient descent to start from $x$ and find the data point $x_{(p, q)}$ that gives the same activation profile for the filters generated using $z_{(p, q)}$ as the data point $x$ gave for the filters generated using $z_0$. Formally, for each $z_{(p, q)}$ we want to find
$$x_{(p, q)} = \arg\min_{x'} \left\| \tilde{f}_l(x', G(z_{(p, q)})) - \tilde{f}_l(x, G(z_0)) \right\|_2^2 + R(x'),$$
where $R$ is a regularizer corresponding to a natural image prior. Specifically, we use the loss function proposed in (Mahendran & Vedaldi, 2015). By using gradient descent and starting from the original data point $x$, we make sure that the solution we find is likely to lie in the neighborhood of $x$, i.e. can be obtained by applying a small transformation to $x$.
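The activation-matching optimization can be sketched on a toy problem. Everything here is a stand-in: `a` is a small differentiable map playing the role of the activation profile, a simple $\ell_2$ penalty replaces the natural-image prior, and the filter matrices are random. It is not the actual CNN inversion, only the structure of the objective and its gradient descent.

```python
import numpy as np

def a(x, w):
    # toy differentiable "activation profile" of input x under filters w
    return np.tanh(x @ w)

def invert(x0, w_new, target, lam=1e-3, lr=0.02, steps=2000):
    # minimize ||a(x', w_new) - target||^2 + lam * ||x'||^2, starting from x0
    x = x0.copy()
    for _ in range(steps):
        act = a(x, w_new)
        # chain rule through tanh for the matching term
        g_match = 2.0 * ((act - target) * (1.0 - act ** 2)) @ w_new.T
        g_reg = 2.0 * lam * x          # gradient of the simple prior R(x) = ||x||^2
        x -= lr * (g_match + g_reg)
    return x

rng = np.random.default_rng(0)
w_old = rng.normal(size=(6, 6)) * 0.3   # "filters" G(z_0), toy
w_new = rng.normal(size=(6, 6)) * 0.3   # "filters" G(z_pq), toy
x0 = rng.normal(size=(1, 6)) * 0.5      # original data point

target = a(x0, w_old)                   # activation profile under the old filters
x_new = invert(x0, w_new, target)       # point matching that profile under the new filters
```

Starting the descent at `x0` is what biases the solution toward a small transformation of the original point, mirroring the argument in the text.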
As a result, from our grid of noise vectors $z_{(p, q)}$, we obtain a grid of data points $x_{(p, q)}$. This grid in data space represents a manifold traversal in a small neighborhood on the manifold of learned invariances. If our method is successful, we expect to see sensible continuous transformations of the original data point along the axes of this grid.
5 Experimental Results
We apply our method of extracting invariances to a convolutional neural network trained on the MNIST dataset. In particular, we train a standard architecture featuring 5 convolutional layers with ReLU nonlinearities, max-pooling and batch normalization for 10 epochs on the 10-class classification task.
Once converged, we use our GAN approach to learn the filters of the 4th convolutional layer in the CNN. Since this is one of the last layers in the network, we expect the invariances that we extract to be very high-level and highly nonlinear.
5.1 Visualizing the learned invariances
The results can be seen in Figure 3, and a sample of the learned filters themselves can be seen in Figure 2. Our expectations are clearly met, as the resulting outputs are indeed an ensemble of highly nonlinear and high-level transformations of the data. More visualizations can be found in the Appendix.
We further hypothesize that if we apply the same method to the filters of one of the first layers in the network, the transformations that we learn will be much more low-level and more pixel-local. To test this, we use our method on the same CNN's second convolutional layer. The results can be seen in Figure 4. As expected, the transformations are much more low-level, such as simple brightness changes or changes to the stroke width.
5.2 Assessing the quality of the generator
In order to assess the quality of the generator, we need to be sure that: (i) filters produced by the generator would yield a good accuracy on the original classification task of our CNN, and (ii) the generator can produce a variety of different filters for different latent noises.
For the first part, we randomly drew 10 noise vectors $z_1, \dots, z_{10}$ and computed the corresponding sets of filters $G(z_1), \dots, G(z_{10})$. Each data sample $x$, after going through $f_{l-1}$, is then passed through each of these 10 versions of the $l$-th layer and averaged over them, so that the signal fed to the next layer becomes
$$\frac{1}{10} \sum_{i=1}^{10} \tilde{f}_l(x, G(z_i)),$$
all the subsequent layers being retrained. This averaging can be seen as an average pooling w.r.t. the transformations defined by the generator, which, if the transformations we learned were indeed irrelevant for the classification task, should not induce any loss in accuracy. Our expectations are confirmed, as the test accuracy obtained by following the above procedure is 0.982, against a test accuracy of 0.971 for the real CNN.
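The averaging over generated filter sets can be sketched as follows, again with dense toy stand-ins for the convolutional layers (assumed shapes, random weights in place of trained ones).

```python
import numpy as np

def layer(x, w):
    # ReLU channel-mixing stand-in for the l-th convolutional layer
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
x_prev = rng.normal(size=(4, 8))                            # f_{l-1}(x), batch of 4
gen_filters = [rng.normal(size=(8, 8)) for _ in range(10)]  # toy G(z_1), ..., G(z_10)

# average-pool the layer-l output over the 10 generated filter sets;
# this pooled signal is what the (retrained) subsequent layers receive
pooled = np.mean([layer(x_prev, w) for w in gen_filters], axis=0)
```

Viewed this way, the construction is an average pooling over the family of transformations parametrized by the generator rather than over spatial positions.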
As for the second part, Figure 5 shows a Multi-Dimensional Scaling (MDS) of both the original set of filters of $f$ and of the filters $G(z)$ generated for randomly sampled noise vectors $z$. We observe that different noise vectors produce a variety of different filters, which confirms that the generator has not overfitted on the set of real filters. Further, since the generator has learned to produce a variety of filters for each real filter, all while retaining the classification accuracy, this means that we have truly captured the invariances of the data with regard to the CNN's classification task.
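An MDS embedding of this kind can be sketched with classical (Torgerson) scaling on flattened filter tensors. The data below is random and purely illustrative; the hand-rolled routine is one standard way to compute such an embedding, not necessarily the one used for Figure 5.

```python
import numpy as np

def classical_mds(X, dim=2):
    # pairwise squared Euclidean distances between rows of X
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    n = X.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ sq @ J                    # double centering
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(0)
real = rng.normal(size=(16, 32))             # flattened real filters (toy)
fake = rng.normal(size=(64, 32)) * 1.1       # flattened G(z) samples (toy)
coords = classical_mds(np.vstack([real, fake]))  # 2-D points to scatter-plot
```

Plotting `coords` with two colors (real vs. generated) gives the kind of picture described for Figure 5: generated filters spreading around rather than collapsing onto the real ones.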
6 Conclusion and future work
Introducing an invariance to transformations that are irrelevant for a given task is known to constitute a good statistical bias, translation invariance in CNNs being a prime example. Although a lot of work has already been done on implementing known invariances into a computational structure, practical applications have mostly been limited to very simple linear or affine transformations. Indeed, characterizing more complicated transformations is a challenge in itself.
In this work, we provided a tool for extracting the transformations to which a CNN has learned to be invariant, in such a way that these transformations can be both visualized in the image space and potentially reused in other computational structures, since they are parametrized by a generator. The generator has been shown to extract a smooth hidden structure lying behind the discrete set of possible filters. To the best of our knowledge, this is the first method to extract the symmetries learned by a CNN in an explicit, parametrized manner.
Applications of this work are likely to include transfer learning and data augmentation. Future work could apply this method to colored images. As suggested by the last subsection, the parametrization of such irrelevant transformations of the set of filters could also potentially define a new type of powerful pooling operation.
References
 Anselmi et al. (2015) Fabio Anselmi, Joel Z Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations in hierarchical architectures. Theoret. Comput. Sci., dx.doi.org/10.1016/j.tcs.2015.06.048, 2015.
 Chen et al. (2016) Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
 Cohen & Welling (2016) Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990–2999, 2016.
 Gens & Domingos (2014) Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in neural information processing systems, pp. 2537–2545, 2014.
 Goodfellow et al. (2009) Ian Goodfellow, Honglak Lee, Quoc V Le, Andrew Saxe, and Andrew Y Ng. Measuring invariances in deep networks. In Advances in neural information processing systems, pp. 646–654, 2009.
 Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
 Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
 LeCun et al. (1995) Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.

 Lenc & Vedaldi (2015) Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 991–999, 2015.
 Mahendran & Vedaldi (2015) Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188–5196, 2015.
 Mallat (2012) Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012.
 Mallat (2016) Stéphane Mallat. Understanding deep convolutional networks. Phil. Trans. R. Soc. A, 374(2065):20150203, 2016.
 Mroueh et al. (2015) Youssef Mroueh, Stephen Voinea, and Tomaso A Poggio. Learning with group invariant features: A kernel perspective. In Advances in Neural Information Processing Systems, pp. 1558–1566, 2015.