Parametrizing filters of a CNN with a GAN

10/31/2017 ∙ by Yannic Kilcher, et al. ∙ ETH Zurich

It is commonly agreed that the use of relevant invariances as a good statistical bias is important in machine learning. However, most approaches that explicitly incorporate invariances into a model architecture only make use of very simple transformations, such as translations and rotations. Hence, there is a need for methods to model and extract richer transformations that capture much higher-level invariances. To that end, we introduce a tool that parametrizes the set of filters of a trained convolutional neural network with the latent space of a generative adversarial network. We then show that the method can capture highly non-linear invariances of the data by visualizing their effect in the data space.


1 Introduction

In machine learning, solving a classification task typically consists of finding a function from a rather large data space to a much smaller space of labels. Such a function will therefore necessarily be invariant to a lot of transformations of its input data. It is now clear that being able to characterize such transformations can greatly help the learning procedure, one of the most striking examples being perhaps the use of convolutional neural networks (CNNs) for image classification (Krizhevsky et al., 2012), with built-in translation invariance via convolutions and subsequent pooling operations. But as a convolutional layer is essentially a fully connected layer with a constraint tying some of its weights together (LeCun et al., 1995), one could expect other invariances to be encoded in its weights after training. Indeed, from an empirical perspective, CNNs have been observed to naturally learn more invariant features with depth (Goodfellow et al., 2009; Lenc & Vedaldi, 2015), and from a theoretical perspective, it has been proven that, under some conditions satisfied by the weights of a convolutional layer, this layer could be re-indexed as performing a convolution over a bigger group of transformations than translations alone (Mallat, 2016).

It is exciting to note that there has recently been a lot of interest in theoretically extending such successful invariant computational structures to general groups of transformations, notably with group invariant scattering operators (Mallat, 2012), deep symmetry networks (Gens & Domingos, 2014), group invariant signal signatures (Anselmi et al., 2015), group invariant kernels (Mroueh et al., 2015) and group equivariant convolutional networks (Cohen & Welling, 2016). However, practical applications of these have mostly remained limited to linear and affine transformations. Indeed, it is a challenge in itself to parametrize more complicated, non-linear transformations preserving labels, especially as they need to depend on the dataset. In this work, we seek to answer this fundamental question:

What invariances in the data has a CNN learned during its training on a classification task and how can we extract and parameterize them?

The following is a brief summary of our method: considering a CNN already trained on a labeled dataset, we train a generative adversarial network (GAN) (Goodfellow et al., 2014) to produce filters of a given layer of this CNN, such that the filters' convolution output is indistinguishable from the one obtained with the real CNN. We combine this with an InfoGAN (Chen et al., 2016) discriminator to prevent the generator from always producing the same filters. As a result, the generator provides us with a smooth, data-dependent, non-trivial parametrization of the set of filters of this CNN, characterizing complicated transformations that are irrelevant for this classification task. Finally, we describe how to visualize what these smooth transformations of the filters correspond to in the image space.

2 Background

2.1 Generative Adversarial Networks

A Generative Adversarial Network (Goodfellow et al., 2014) consists of two players, the generator $G$ and the discriminator $D$, playing a minimax game in which $G$ tries to produce samples indistinguishable from some given true distribution $p_{\text{data}}$, and $D$ tries to distinguish between real and generated samples. $G$ typically maps a random noise $z$ to a generated sample $G(z)$, transforming the noise distribution $p_z$ into a distribution supposed to match $p_{\text{data}}$. The objective function of this minimax game with maximum likelihood is given by
$$\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right].$$

The noise space, input of the generator, is also called its latent space.
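For concreteness, the following minimal PyTorch sketch shows how this minimax game is commonly set up in practice, using the widespread non-saturating form of the generator loss; the toy MLP architectures, dimensions and optimizer settings are illustrative assumptions of ours and are not taken from the paper.

```python
# Minimal sketch of the GAN minimax game (Goodfellow et al., 2014) in PyTorch.
# The toy MLPs, dimensions and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

noise_dim, data_dim = 16, 64

G = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.RMSprop(G.parameters(), lr=1e-4)
opt_d = torch.optim.RMSprop(D.parameters(), lr=1e-4)

def gan_step(x_real):
    """One alternating update of D (maximizing) and G (minimizing)."""
    batch = x_real.size(0)
    z = torch.randn(batch, noise_dim)
    x_fake = G(z)

    # Discriminator: push D(x_real) -> 1 and D(G(z)) -> 0.
    d_loss = bce(D(x_real), torch.ones(batch, 1)) + \
             bce(D(x_fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: push D(G(z)) -> 1 (non-saturating form of the minimax loss).
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with a shifted Gaussian standing in for the true distribution:
x = torch.randn(32, data_dim) + 2.0
print(gan_step(x))
```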

2.2 Information Maximizing Generative Adversarial Nets

In an InfoGAN (Chen et al., 2016), the generator takes as input not only the noise $z$ but also another variable $c$, called the latent code. The aim is to make the generated samples $G(z, c)$ depend on $c$ in a structured way, for instance by choosing independent $c_i$'s and modelling $P(c_1, \dots, c_L)$ as $\prod_i P(c_i)$. In order to avoid a trivial correspondence between $c$ and $G(z, c)$, the InfoGAN procedure maximizes the mutual information $I(c; G(z, c))$.

The mutual information $I(X; Y)$ between two random variables $X$ and $Y$, defined with an entropy $H$ as
$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X),$$
is symmetric, measures the amount of information that is known about the value of one random variable when the value of the other one is known, and is equal to zero when they are independent. Hence, maximizing the mutual information prevents $G(z, c)$ from being independent of $c$.
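As a quick numerical sanity check of this definition, the toy computation below (our own illustration, not from the paper) evaluates the equivalent form $I(X;Y) = H(X) + H(Y) - H(X,Y)$ for a fully dependent and a fully independent pair of binary variables, yielding one bit and zero bits respectively.

```python
# Tiny numeric illustration (our own toy example): mutual information of two
# binary variables, I(X;Y) = H(X) + H(Y) - H(X,Y), computed from a joint table.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Fully dependent: Y = X with P(X=0) = P(X=1) = 0.5  ->  I = 1 bit.
joint_dep = np.array([[0.5, 0.0], [0.0, 0.5]])
# Independent: every cell 0.25  ->  I = 0 bits.
joint_ind = np.full((2, 2), 0.25)

for joint in (joint_dep, joint_ind):
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    print(entropy(px) + entropy(py) - entropy(joint.ravel()))
```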

In practice, $I(c; G(z, c))$ is indirectly maximized using a variational lower bound
$$L_I(G, Q) = \mathbb{E}_{c \sim P(c),\, x \sim G(z, c)}\left[\log Q(c \mid x)\right] + H(c) \;\le\; I(c; G(z, c)),$$
where $Q(c \mid x)$ approximates $P(c \mid x)$. The minimax game becomes
$$\min_{G, Q} \max_D \; V(D, G) - \lambda L_I(G, Q),$$
where $\lambda$ is a hyperparameter controlling the mutual information regularization.
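The sketch below illustrates, in PyTorch and with shapes of our own choosing, how this variational term can be computed: an auxiliary network $Q$ predicts the latent code from a generated sample, and when the codes are modelled as fixed-variance Gaussians the $\log Q(c \mid x)$ term reduces, up to constants, to a negative squared error, which is also the simplification we use in Section 3.

```python
# Hedged sketch of the InfoGAN mutual-information term: Q predicts the latent
# code c from a generated sample, and with fixed-variance Gaussian codes the
# variational bound L_I reduces (up to constants) to a negative L2
# reconstruction error. Networks and shapes are illustrative assumptions.
import torch
import torch.nn as nn

code_dim, noise_dim, data_dim = 4, 16, 64

G = nn.Sequential(nn.Linear(noise_dim + code_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))
Q = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, code_dim))      # approximates P(c | x)

def info_loss(batch_size, lam=1.0):
    """-lambda * L_I(G, Q) up to constants; added to the G/Q minimization."""
    z = torch.randn(batch_size, noise_dim)
    c = torch.randn(batch_size, code_dim)        # independent Gaussian codes
    x_fake = G(torch.cat([z, c], dim=1))
    c_hat = Q(x_fake)
    # log Q(c|x) for a fixed-variance Gaussian is a negative squared error.
    return lam * ((c_hat - c) ** 2).sum(dim=1).mean()

print(info_loss(8))
```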

3 Extracting Invariances by Learning Filters

Let $F$ be an already trained CNN, whose $l$-th layer representation of an image $x$ will be denoted by $F_l(x)$. As our goal is to learn what kind of filters such a CNN would learn at layer $l$, it could be tempting to simply train a GAN to match the distribution of filters of this CNN's $l$-th layer. However, this set of filters is far too small a dataset to train a GAN on, which would cause the generator to massively overfit instead of extracting the smooth, hidden structure lying behind our discrete set of filters. To cope with this problem, instead of feeding the discriminator $D$ alternately with filters produced by the generator $G$ and real filters from $F$, we propose to feed $D$ with the activations that these filters produce as the data passes through the CNN, i.e. alternately with $F_l(x)$ or $\tilde{F}_l(x, G(z))$, corresponding respectively to real and fake samples. Here, $x$ is an image sampled from the data, $z$ is sampled from the latent space of $G$, and $\tilde{F}_l(x, G(z))$ is the activation obtained by passing $x$ through each layer of $F$ but with the filters of the $l$-th layer replaced by $G(z)$.

In short, in each step, the generator produces a set of filters for the $l$-th layer of the CNN. Next, different samples of data are passed through one CNN using its real filters and through the same CNN, but with its $l$-th-layer filters replaced by the fake filters produced by $G$. The discriminator then tries to guess whether the activation it is fed was produced using real or generated filters at the $l$-th layer, while the generator tries to produce filters making the subsequent activations indistinguishable from those obtained with the real filters.

However, even though this formulation allows us to train the GAN on a dataset of reasonable size, saving us from an otherwise unavoidable overfitting, $G$ could a priori still always produce the same set of filters to fool $D$, ideally simply reproducing the real filters of $F$. To overcome this problem, we augment our model with an InfoGAN discriminator $Q$ whose goal is to predict which noise $z$ was used to produce $G(z)$. This prevents $G$ from always producing the same filters, by preventing $z$ and $G(z)$ from being independent random variables. Note that, just like the GAN discriminator, the InfoGAN discriminator does not act directly on the output of $G$ (the filters), but on the activations that these filters produce.

In this setting, the noise $z$ of the generator plays the role of the latent code of the InfoGAN. As in the original InfoGAN paper (Chen et al., 2016), we train a neural network $Q$ to predict the latent code $z$, sharing all its layers but the last one with the discriminator $D$. Finally, by modelling the latent codes as independent Gaussian random variables, the $\log Q(c \mid x)$ term in the variational bound, being a log-likelihood, is actually given by an $\ell_2$-reconstruction error. The joint training of these three neural networks is described in Algorithm 1 and illustrated in Figure 1.
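To make the central mechanism concrete, here is a minimal sketch of the key trick: the generator emits a full weight tensor for the replaced convolutional layer, and the data is pushed through that layer with the generated filters in place of the real ones, producing the activations that $D$ and $Q$ observe. The layer shapes, the MLP generator and the stand-in lower-layer output are illustrative assumptions, not the architecture used in our experiments.

```python
# Sketch of the core mechanism: a generator outputs the weight tensor of one
# convolutional layer, and data is pushed through the (otherwise fixed) CNN
# with those generated filters in place of the real ones. All shapes below
# are stand-ins, not the 5-layer MNIST CNN used in the experiments.
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim = 64
c_in, c_out, k = 16, 32, 3            # shape of the replaced layer's filters

class FilterGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                                 nn.Linear(256, c_out * c_in * k * k))
    def forward(self, z):
        return self.net(z).view(c_out, c_in, k, k)   # one set of filters

def activations_with_filters(h, weight):
    """Activation of the replaced layer when its filters are `weight`.
    `h` is the output of the previous (fixed) layers of the CNN."""
    return F.relu(F.conv2d(h, weight, padding=1))

G = FilterGenerator()
h = torch.randn(8, c_in, 14, 14)                  # stand-in lower-layer output
real_weight = torch.randn(c_out, c_in, k, k)      # the CNN's trained filters

real_act = activations_with_filters(h, real_weight)                   # "real" sample for D
fake_act = activations_with_filters(h, G(torch.randn(1, noise_dim)))  # "fake" sample for D
print(real_act.shape, fake_act.shape)
```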

Figure 1: Illustration of how the different neural networks interact with each other. CNN layers are depicted in light gray. The flow of data is shown in green, while the generation of the filters by the generative model is shown in red. The discriminator part of the GAN is shown in blue. Note that the discriminator does not have direct access to the generated filters, but can only observe the data after it has passed through them. The CNN $F$ is fixed, while $G$, $D$ and $Q$ are trained jointly.
1: for number of training iterations do
2:      Sample a minibatch of $m$ noise samples $\{z^{(1)}, \dots, z^{(m)}\}$ from the noise prior $p(z)$.
3:      Generate the filters $G(z^{(1)}), \dots, G(z^{(m)})$.
4:      Sample a minibatch of $m$ examples $\{x^{(1)}, \dots, x^{(m)}\}$ from the data distribution.
5:      Pass the data through the CNN with the real and generated filters, to obtain the $F_l(x^{(i)})$'s and $\tilde{F}_l(x^{(i)}, G(z^{(i)}))$'s respectively.
6:      Feed these to $D$ and $Q$, letting $D$ guess whether it was fed $F_l(x^{(i)})$ or $\tilde{F}_l(x^{(i)}, G(z^{(i)}))$, and letting $Q$ recover the $z^{(i)}$.
7:      Update the discriminator by ascending its stochastic gradient.
8:      Update the generator by descending its stochastic gradient.
9:      Update the InfoGAN discriminator by descending its stochastic gradient.
10: end for

The gradient-based updates can use any standard gradient-based learning rule. We used RMSprop in our experiments.

Algorithm 1: Minibatch stochastic gradient descent training of $G$, $D$ and $Q$.
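For readers who prefer code, the following condensed PyTorch sketch mirrors the structure of Algorithm 1: in each iteration the generator produces one set of filters, real and generated activations are computed, and $D$, $G$ and $Q$ are updated in turn with RMSprop. All sizes, the toy data and the exact loss forms (non-saturating GAN loss, $\ell_2$ reconstruction for the mutual-information term) are illustrative choices consistent with the text rather than a verbatim reproduction of our implementation.

```python
# Condensed, illustrative sketch of Algorithm 1: joint training of the filter
# generator G, the GAN discriminator D and the InfoGAN head Q on activations
# of the replaced layer. Sizes, toy data and loss forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim, c_in, c_out, k = 32, 8, 16, 3

# G maps latent noise to a full weight tensor for the replaced conv layer.
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, c_out * c_in * k * k))
# D and Q share a trunk operating on activations, as described in the text.
trunk = nn.Sequential(nn.Conv2d(c_out, 32, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
d_head = nn.Linear(32, 1)           # real vs. generated filters
q_head = nn.Linear(32, noise_dim)   # tries to recover z from the activations

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.RMSprop(list(trunk.parameters()) + list(d_head.parameters()), lr=1e-4)
opt_g = torch.optim.RMSprop(G.parameters(), lr=1e-4)
opt_q = torch.optim.RMSprop(q_head.parameters(), lr=1e-4)

real_filters = torch.randn(c_out, c_in, k, k)   # stand-in for the CNN's trained filters
lam = 1.0                                       # mutual-information weight

def layer_act(h, w):
    """Replaced layer: convolve the lower-layer output h with filters w."""
    return F.relu(F.conv2d(h, w, padding=1))

for step in range(3):                           # a few toy iterations
    h = torch.randn(16, c_in, 14, 14)           # stand-in for the fixed lower layers
    z = torch.randn(1, noise_dim)

    # Discriminator update (ascending): real activations -> 1, generated -> 0.
    fake_act = layer_act(h, G(z).view(c_out, c_in, k, k))
    d_loss = bce(d_head(trunk(layer_act(h, real_filters))), torch.ones(16, 1)) + \
             bce(d_head(trunk(fake_act.detach())), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (descending): fool D while keeping z recoverable by Q.
    feat = trunk(layer_act(h, G(z).view(c_out, c_in, k, k)))
    g_loss = bce(d_head(feat), torch.ones(16, 1)) + \
             lam * ((q_head(feat) - z) ** 2).sum(dim=1).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # InfoGAN head update (descending the same L2 reconstruction error).
    feat = trunk(layer_act(h, G(z).view(c_out, c_in, k, k)).detach())
    q_loss = lam * ((q_head(feat) - z) ** 2).sum(dim=1).mean()
    opt_q.zero_grad(); q_loss.backward(); opt_q.step()
```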

4 Visualizing the learned transformations

Using our method, we can parametrize the filters of a trained CNN and thus characterize its learned invariances. But in order to assess what has actually been learned, we need a way to visualize these invariances once the GAN has been trained. More specifically, given a data sample $x$, we would like to know which transformations of $x$ the CNN regards as invariant. We do this in the following manner:

We take some latent noise vector $z_0$ and obtain its generated filters $G(z_0)$. Using those filters, we pass the data sample $x$ through the network to obtain $\tilde{F}_l(x, G(z_0))$, which we call the activation profile of $x$ given $z_0$.

Next, we select two dimensions $i$ and $j$ of $z_0$ at random and construct a grid of noise vectors $z_k$ by moving around $z_0$ along the dimensions $i$ and $j$ in a small neighborhood.

For each $z_k$, we use gradient descent, starting from $x$, to find the data point $x_k$ that gives the same activation profile for the filters generated using $z_k$ as the data point $x$ gave for the filters generated using $z_0$. Formally, for each $z_k$ we want to find
$$x_k = \arg\min_{x'} \; \left\| \tilde{F}_l(x', G(z_k)) - \tilde{F}_l(x, G(z_0)) \right\|_2^2 + R(x'),$$
where $R$ is a regularizer corresponding to a natural image prior. Specifically, we use the loss function proposed in (Mahendran & Vedaldi, 2015).

By using gradient descent and starting from the original data point $x$, we make sure that the solution we find is likely to lie in a neighborhood of $x$, i.e. that it can be obtained by applying a small transformation to $x$.

As a result, from our grid of $z$-vectors, we obtain a grid of $x$-points. This grid in data space represents a traversal of a small neighborhood on the manifold of learned invariances. If our method is successful, we expect to see sensible continuous transformations of the original data point along the axes of this grid.
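A possible implementation of this inversion step is sketched below; a simple total-variation term stands in for the natural-image prior of Mahendran & Vedaldi (2015), and the single-layer stand-in for the CNN, as well as all shapes and optimization settings, are illustrative assumptions.

```python
# Sketch of the visualization step: starting from the original sample x, use
# gradient descent to find x_k whose activations under the filters G(z_k)
# match the activation profile of x under the filters G(z_0). The simple
# total-variation regularizer is a stand-in for the natural-image prior.
import torch
import torch.nn.functional as F

def layer_act(x, w):
    """Stand-in for the CNN up to and including the replaced layer."""
    return F.relu(F.conv2d(x, w, padding=1))

def total_variation(x):
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def invert(x, w0, wk, steps=200, lr=0.05, reg=1e-2):
    """Find x_k close to x with layer_act(x_k, wk) ≈ layer_act(x, w0)."""
    target = layer_act(x, w0).detach()        # activation profile of x under z_0
    xk = x.clone().requires_grad_(True)       # start the search at x itself
    opt = torch.optim.Adam([xk], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(layer_act(xk, wk), target) + reg * total_variation(xk)
        opt.zero_grad(); loss.backward(); opt.step()
    return xk.detach()

# Toy usage with random filters for two nearby noise vectors z_0 and z_k:
x  = torch.rand(1, 1, 28, 28)
w0 = torch.randn(16, 1, 3, 3)
wk = w0 + 0.1 * torch.randn_like(w0)
x_k = invert(x, w0, wk)
print((x_k - x).abs().mean())
```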

5 Experimental Results

We apply our method of extracting invariances to a convolutional neural network trained on the MNIST dataset. In particular, we train a standard architecture featuring 5 convolutional layers with ReLU nonlinearities, max-pooling and batch normalization for 10 epochs on the 10-class classification task.

Once converged, we use our GAN approach to learn the filters of the 4th convolutional layer in the CNN. Since this is one of the last layers in the network, we expect the invariances that we extract to be very high-level and highly nonlinear.

5.1 Visualizing the learned invariances

The results can be seen in Figure 3, and a sample of the learned filters themselves can be seen in Figure 2. Our expectations are clearly met, as the resulting outputs are indeed an ensemble of highly nonlinear and high-level transformations of the data. Further visualizations can be found in the Appendix.

We further hypothesize that if we apply the same method to the filters of one of the first layers in the network, the transformations that we learn will be much more low-level and more pixel-local. To test this, we use our method on the same CNN’s second convolutional layer. The results can be seen in Figure 4. As expected, the transformations are much more low-level, such as simple brightness changes or changes to the stroke width.

Figure 2: Learned filters of the CNN's 4th layer. We summed one third of the original channels together in order to visualize the learned filters.
Figure 3: Invariance transformations extracted from the CNN’s 4th layer. The middle sample of each grid represents the original data sample, while the rest of the grid are found by matching the original sample’s activation profile.
Figure 4: Invariance transformations extracted from the CNN’s 2nd layer. The middle sample of each grid represents the original data sample, while the rest of the grid are found by matching the original sample’s activation profile.

5.2 Assessing the quality of the generator

In order to assess the quality of the generator, we need to be sure that: (i) filters produced by the generator would yield a good accuracy on the original classification task of our CNN, and (ii) the generator can produce a variety of different filters for different latent noises.

For the first part, we randomly drew 10 noise vectors $z_1, \dots, z_{10}$ and computed the corresponding sets of filters $G(z_1), \dots, G(z_{10})$. Each data sample $x$, after going through the lower layers of $F$, is then passed through each of these 10 versions of the $l$-th layer and averaged over them, so that the signal fed to the next layer becomes
$$\frac{1}{10} \sum_{k=1}^{10} \tilde{F}_l(x, G(z_k)),$$
with all subsequent layers being re-trained. This averaging can be seen as an average pooling with respect to the transformations defined by the generator, which, if the transformations we learned were indeed irrelevant for the classification task, should not induce any loss in accuracy. Our expectations are confirmed, as the test accuracy obtained by following the above procedure is 0.982, against a test accuracy of 0.971 for the real CNN.
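The averaging itself is straightforward; the short sketch below (with illustrative shapes of our own choosing) shows the operation that replaces the $l$-th layer in this experiment, after which the subsequent layers are re-trained.

```python
# Sketch of the filter-averaging check: pass the same input through the
# replaced layer with each of the 10 generated filter sets and average the
# resulting activations before feeding them to the next layer. Shapes are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def averaged_activation(h, filter_sets):
    """h: output of the fixed lower layers; filter_sets: list of weight tensors."""
    acts = [F.relu(F.conv2d(h, w, padding=1)) for w in filter_sets]
    return torch.stack(acts, dim=0).mean(dim=0)

h = torch.randn(4, 16, 14, 14)
filter_sets = [torch.randn(32, 16, 3, 3) for _ in range(10)]   # e.g. 10 draws G(z_k)
print(averaged_activation(h, filter_sets).shape)               # (4, 32, 14, 14)
```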

As for the second part, Figure 5 shows a multi-dimensional scaling (MDS) of both the original set of filters of $F$ and of filters generated by $G$ for randomly sampled noise vectors $z$. We observe that different noise vectors produce a variety of different filters, which confirms that the generator has not overfitted on the set of real filters. Further, since the generator has learned to produce a variety of filters for each real filter, all the while retaining the classification accuracy, we have indeed captured the invariances of the data with respect to the CNN's classification task.

Figure 5: Multi-dimensional scaling of the filters produced by the GAN. Individual colors represent different samples for the same filter of the true CNN. The large cluster sizes show that the GAN produces a wide variety of different filters for each corresponding real filter.

6 Conclusion and future work

Introducing an invariance to transformations that are irrelevant for a given task is known to constitute a good statistical bias, translations in CNNs being a prime example. Although a lot of work has already been done on implementing known invariances into a computational structure, practical applications mostly involve very simple linear or affine transformations. Indeed, characterizing more complicated transformations is a challenge in itself.

In this work, we provided a tool for extracting the transformations with respect to which a CNN has been trained to be invariant, in such a way that these transformations can be both visualized in the image space and potentially re-used in other computational structures, since they are parametrized by a generator. The generator has been shown to extract a smooth hidden structure lying behind the discrete set of possible filters. To the best of our knowledge, this is the first time a method has been proposed to extract the symmetries learned by a CNN in an explicit, parametrized manner.

Applications of this work are likely to include transfer learning and data augmentation. Future work could apply this method to colored images. As suggested by the last subsection, the parametrization of such irrelevant transformations of the set of filters could also define another type of powerful pooling operation.

References

  • Anselmi et al. (2015) Fabio Anselmi, Joel Z Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations in hierarchical architectures. Theoret. Comput. Sci., dx.doi.org/10.1016/j.tcs.2015.06.048, 2015.
  • Chen et al. (2016) Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172–2180, 2016.
  • Cohen & Welling (2016) Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990–2999, 2016.
  • Gens & Domingos (2014) Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in neural information processing systems, pp. 2537–2545, 2014.
  • Goodfellow et al. (2009) Ian Goodfellow, Honglak Lee, Quoc V Le, Andrew Saxe, and Andrew Y Ng. Measuring invariances in deep networks. In Advances in neural information processing systems, pp. 646–654, 2009.
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
  • LeCun et al. (1995) Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
  • Lenc & Vedaldi (2015) Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 991–999, 2015.
  • Mahendran & Vedaldi (2015) Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188–5196, 2015.
  • Mallat (2012) Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012.
  • Mallat (2016) Stéphane Mallat. Understanding deep convolutional networks. Phil. Trans. R. Soc. A, 374(2065):20150203, 2016.
  • Mroueh et al. (2015) Youssef Mroueh, Stephen Voinea, and Tomaso A Poggio. Learning with group invariant features: A kernel perspective. In Advances in Neural Information Processing Systems, pp. 1558–1566, 2015.

Appendix A More Invariance Visualizations

Figure 6: Invariance transformations extracted from the CNN’s 4th layer. The middle sample of each grid represents the original data sample, while the rest of the grid are found by matching the original sample’s activation profile.
Figure 7: Invariance transformations extracted from the CNN’s 4th layer. The middle sample of each grid represents the original data sample, while the rest of the grid are found by matching the original sample’s activation profile.