Generic Deep Networks with Wavelet Scattering

12/20/2013 · by Edouard Oyallon, et al.

We introduce a two-layer wavelet scattering network for object classification. This scattering transform computes a spatial wavelet transform on the first layer, and a new joint wavelet transform along spatial, angular and scale variables in the second layer. Numerical experiments demonstrate that this two-layer convolution network, which involves no learning and no max pooling, performs efficiently on complex image datasets such as Caltech, with structural object variability and clutter. It opens the possibility of simplifying deep neural network learning by initializing the first layers with wavelet filters.


1 Introduction

Supervised training of deep convolution networks (LeCun et al., 1998) is highly effective for image classification, as shown by results on ImageNet (Krizhevsky et al., 2012). The first layers of networks trained on ImageNet also perform very well for classifying images in very different databases, which indicates that these layers capture generic image information (Zeiler & Fergus, 2013; Girshick et al., 2013; Donahue et al., 2013). This paper shows that such generic properties can be captured by a scattering transform, which has the capability to build invariants to affine transformations. Scattering transforms compute hierarchical invariants along groups of transformations by cascading wavelet convolutions and modulus non-linearities along the group variables (Mallat, 2012).

Scattering transforms invariant to translations (Bruna & Mallat, 2013) and to rotations and translations (Sifre & Mallat, 2012) have previously been applied to digit recognition and texture discrimination. The main sources of variability in these images are deformations and stationary stochastic variability. This paper applies a scattering transform to the Caltech-101 and Caltech-256 datasets, which include much more complex structural variability of objects and clutter, with good classification results. The scattering transform is adapted to the important sources of variability in these images by applying wavelet transforms along spatial, rotation and scaling variables, with separable convolutions, which is computationally efficient, and by considering YUV color channels independently.

It differs from most deep networks in two respects: no ad-hoc renormalization is added within the network, and no max pooling is performed. All poolings are average poolings, which guarantees the mathematical stability of the representation; applied to wavelet coefficients, this does not degrade the quality of results, even in cluttered environments. This study concentrates on two layers because adding a third layer of wavelet coefficients did not reduce classification errors. Beyond the first two layers, which handle translation, rotation and scaling variability, it seems necessary to learn the filters of the third and subsequent layers to improve classification performance.

2 Scattering along Translations, Rotations and Scales

A two-layer scattering transform is computed by cascading wavelet transforms and modulus non-linearities. The first wavelet transform filters the image $x$ with a low-pass filter and with complex wavelets which are scaled and rotated. The low-pass filter outputs an averaged image $S_0 x = x \star \phi_J$, and the modulus of each complex wavelet coefficient defines the first scattering layer $U_1 x$. A second wavelet transform applied to $U_1 x$ computes an average $S_1 x = U_1 x \star \phi_J$ and the next layer $U_2 x$. A final averaging computes second order scattering coefficients $S_2 x = U_2 x \star \phi_J$, as illustrated in Figure 1. Higher order scattering coefficients are not computed.


Figure 1: A scattering representation is computed by successively computing the modulus of wavelet coefficients, $U_m x = |U_{m-1} x \star \psi_{\lambda_m}|$, followed by an average pooling $S_m x = U_m x \star \phi_J$.
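To make the cascade of Figure 1 concrete, here is a minimal NumPy sketch of the two-layer pipeline. It is a simplification: the second wavelet transform is applied along the spatial variable only (the joint transform along angle and scale is described below), the filters are assumed to be given in the Fourier domain, and names such as scattering_two_layers are illustrative rather than from the paper's implementation.

import numpy as np

def scattering_two_layers(x, filters1, filters2, phi):
    """Cascade of Figure 1: modulus of wavelet coefficients at each
    layer, followed by an average pooling with the low-pass phi.
    filters1, filters2 and phi are Fourier-domain filters with the
    same shape as x; products below are pointwise in Fourier."""
    X = np.fft.fft2(x)
    S0 = np.real(np.fft.ifft2(X * phi))          # order 0: x * phi_J
    S1, S2 = [], []
    for psi1 in filters1:
        U1 = np.abs(np.fft.ifft2(X * psi1))      # U1 = |x * psi_{lambda_1}|
        U1_hat = np.fft.fft2(U1)
        S1.append(np.real(np.fft.ifft2(U1_hat * phi)))    # S1 = U1 * phi_J
        for psi2 in filters2:
            U2 = np.abs(np.fft.ifft2(U1_hat * psi2))      # U2 = |U1 * psi_{lambda_2}|
            S2.append(np.real(np.fft.ifft2(np.fft.fft2(U2) * phi)))
    return S0, S1, S2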

The first wavelet transform is defined from a mother wavelet $\psi$, which is a complex Morlet function (Bruna & Mallat, 2013) well localized in the image plane, and from a Gaussian low-pass filter $\phi$. The wavelet is scaled by $2^j$, where $j$ is an integer or half-integer, and rotated by $r_\theta$ for $\theta = k\pi/K$ with $0 \leq k < K$:

$$\psi_{j,\theta}(u) = 2^{-2j}\, \psi(2^{-j} r_{-\theta}\, u),$$

where $u \in \mathbb{R}^2$. The averaging filter is $\phi_J(u) = 2^{-2J} \phi(2^{-J} u)$.
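The scaled and rotated Morlet filters can be sketched directly from this definition; the envelope width sigma and central frequency xi below are illustrative values, not the exact parameters used in the experiments.

import numpy as np

def morlet(shape, sigma, xi, theta, j):
    """Morlet wavelet psi_{j,theta}(u) = 2**(-2j) psi(2**(-j) r_{-theta} u),
    sampled on a centered grid. sigma (envelope width) and xi (central
    frequency) are illustrative parameters, not the paper's values."""
    M, N = shape
    y, x = np.mgrid[-(M // 2):M - M // 2, -(N // 2):N - N // 2]
    # Coordinates rotated by -theta and dilated by 2**j.
    xr = (np.cos(theta) * x + np.sin(theta) * y) / 2.0 ** j
    yr = (-np.sin(theta) * x + np.cos(theta) * y) / 2.0 ** j
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * xi * xr)
    beta = (envelope * wave).sum() / envelope.sum()  # enforces zero mean
    return 2.0 ** (-2 * j) * envelope * (wave - beta)

# Illustrative filter bank over J octaves and K angles; j may also take
# half-integer values, as in the text.
J, K = 3, 8
bank = [morlet((64, 64), sigma=0.8, xi=3 * np.pi / 4,
               theta=k * np.pi / K, j=j)
        for j in range(J) for k in range(K)]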

This wavelet transform first computes the average $x \star \phi_J$ of $x$, and we compute the modulus of the complex wavelet coefficients:

$$U_1 x(u, j_1, \theta_1) = |x \star \psi_{j_1,\theta_1}(u)|.$$

The spatial variable $u$ is subsampled at intervals proportional to $2^{j_1}$. We write $\lambda_1 = (j_1, \theta_1)$ for the aggregated variable which indexes these first layer coefficients.
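As a sketch, the first-layer coefficients and their subsampling can be computed as follows; the factor-of-two oversampling in the stride is a common implementation choice (used, e.g., in Bruna & Mallat, 2013) assumed here rather than taken from this paper.

import numpy as np

def u1_coefficients(x, psi, j):
    """First-layer coefficients |x * psi_{j,theta}|, spatially subsampled
    at intervals proportional to 2**j (integer j assumed here)."""
    X = np.fft.fft2(x)
    Psi = np.fft.fft2(np.fft.ifftshift(psi))  # move filter center to origin
    u1 = np.abs(np.fft.ifft2(X * Psi))        # modulus non-linearity
    stride = int(max(2 ** (j - 1), 1))        # oversample by 2 to limit aliasing
    return u1[::stride, ::stride]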

The next layer is computed with a second wavelet transform which convolves $U_1 x$ with separable wavelets along the spatial, rotation and scale variables:

$$\Psi_{\lambda_2}(u, \theta, j) = \psi_{j_2,\theta_2}(u)\, \overline{\psi}_{k_2}(\theta)\, \overline{\psi}_{l_2}(j).$$

The index $\lambda_2 = (j_2, \theta_2, k_2, l_2)$ specifies the angle of rotation $\theta_2$ and the scales $j_2$, $k_2$ and $l_2$ of these wavelets. We choose this wavelet family so that it defines a tight frame, and hence an invertible linear operator which preserves the norm. The wavelet family in angle and scale also includes the necessary averaging filters. The next layer of coefficients is defined for each $\lambda_1$ and $\lambda_2$ by

$$U_2 x(u, \lambda_1, \lambda_2) = |U_1 x \star \Psi_{\lambda_2}(u, \lambda_1)|,$$

where the convolution is computed jointly along the spatial, angle and scale variables of $U_1 x$.
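The separable convolution along the spatial and angular variables can be sketched as below, treating $U_1 x$ as a stack of K angle slices; the angular convolution is circular because $\theta$ is periodic. This sketch omits the wavelet along the scale variable (which, as noted in Section 3, the implementation does not use either), and the filter arguments are illustrative.

import numpy as np

def joint_second_layer(u1_stack, psi_spatial, psi_angle):
    """Separable second wavelet transform of U1 x along the spatial and
    angular variables. u1_stack has shape (K, H, W): one slice per
    angle theta_1. psi_angle is a 1-D wavelet of length K."""
    # Spatial convolution, per angle slice (pointwise product in Fourier).
    Psi = np.fft.fft2(np.fft.ifftshift(psi_spatial))
    spatial = np.fft.ifft2(np.fft.fft2(u1_stack, axes=(1, 2)) * Psi,
                           axes=(1, 2))
    # Circular convolution along the angle axis with the 1-D wavelet.
    joint = np.fft.ifft(np.fft.fft(spatial, axis=0) *
                        np.fft.fft(psi_angle)[:, None, None], axis=0)
    return np.abs(joint)  # modulus non-linearity gives the second layer U2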

First order scattering coefficients are given by $S_1 x(u, \lambda_1) = U_1 x(\cdot, \lambda_1) \star \phi_J(u)$, and the second order scattering coefficients $S_2 x(u, \lambda_1, \lambda_2) = U_2 x(\cdot, \lambda_1, \lambda_2) \star \phi_J(u)$ are computed with only a low-pass filtering from the second layer. As opposed to almost all deep networks (Krizhevsky et al., 2012), we found that no non-linear normalization was needed to obtain good classification results with a scattering transform. As usual, at the classification stage, all coefficients are standardized by setting their variance to 1.

A locally translation-invariant scattering representation is obtained by concatenating the scattering coefficients of the different orders $m \leq 2$:

$$Sx = (S_0 x,\, S_1 x,\, S_2 x).$$

Each spatial averaging by $\phi_J$ is subsampled at intervals $2^J$.
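Concatenation into a single feature vector per image, with each averaged map subsampled at the interval $2^J$ (the stride argument below), might look like this minimal helper; names are illustrative.

import numpy as np

def scattering_vector(S0, S1, S2, stride):
    """Concatenate the order 0, 1 and 2 averaged maps into one feature
    vector, subsampling each spatial average at the given interval."""
    maps = [S0] + list(S1) + list(S2)
    return np.concatenate([m[::stride, ::stride].ravel() for m in maps])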

3 Numerical Classification Results

The classification performance of this two-layer wavelet scattering representation is evaluated on the Caltech databases. All images are first renormalized to a fixed square size by a linear rescaling. The scattering transform of each YUV channel is computed separately, and their scattering coefficients are concatenated. The first wavelet transform is computed with Morlet wavelets (Bruna & Mallat, 2013) over several octaves and angles. The second wavelet transform is also computed with Morlet wavelets, over a range of spatial scales and angles. We further use a Morlet wavelet along the angle variable, calculated over several octaves. In this implementation, we did not use a wavelet along the scale variable.

The final scattering coefficients are computed with a spatial average pooling at the scale $2^J$, as opposed to the maxima selection used in most convolution networks. These coefficients are renormalized by a standardization which subtracts their mean and sets their variance to 1. The mean and variance are computed on the training databases. Standardized scattering coefficients are then provided to a linear SVM classifier.
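With scikit-learn, the standardization and linear SVM stage described above can be sketched in a few lines; the regularization constant C is an assumption, since the paper does not specify it.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Standardize each coefficient (mean subtracted, variance set to 1, with
# statistics estimated on the training set only), then a linear SVM.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))  # C is an assumption
# X_train: (n_images, n_coefficients) scattering vectors, y_train: labels.
# clf.fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)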


Dataset        Layers   Caltech-101    Caltech-256
Scattering       1      51.2 ± 0.8     19.3 ± 0.2
ImageNet CNN     1      44.8 ± 0.7     24.6 ± 0.4
Scattering       2      68.8 ± 0.5     34.6 ± 0.2
ImageNet CNN     2      66.2 ± 0.5     39.6 ± 0.3
ImageNet CNN     3      72.3 ± 0.4     46.0 ± 0.3
ImageNet CNN     7      85.5 ± 0.4     72.6 ± 0.2

Table 1: Classification accuracies of convolution networks on Caltech-101 and Caltech-256, using respectively 30 and 60 samples per class, depending upon the number of layers, for a scattering transform and for a network trained on ImageNet (Zeiler & Fergus, 2013).

Nearly state-of-the-art classification results are obtained on Caltech-101 and Caltech-256 with a ConvNet (Zeiler & Fergus, 2013) pretrained on ImageNet. Table 1 shows that with 7 layers it reaches 85.5% accuracy on Caltech-101 and 72.6% accuracy on Caltech-256, using respectively 30 and 60 training images per class. The classification is performed with a linear SVM. In this work, we concentrate on the first two layers. With only two layers, the ConvNet performance drops to 66.2% on Caltech-101 and 39.6% on Caltech-256, and progressively increases with the number of layers. A scattering transform has similar performance to a ConvNet restricted to 1 or 2 layers, as shown by Table 1. This indicates that a major part of the classification improvements brought by these first two layers can be obtained with wavelet convolutions over spatial variables on the first layer, and over joint spatial and rotation variables on the second layer. Color improves our results on Caltech-101 by 1.5%, but it has not yet been tried on Caltech-256, whose results are given for gray-level images. More improvements can potentially be obtained by adjusting the wavelet filtering along scale variables.

Locality-constrained Linear Coding (LLC) (Wang et al., 2010) is an example of a different two-layer architecture, with a first layer computing SIFT feature vectors and a second layer which uses an unsupervised dictionary optimization. This algorithm performs a max pooling followed by an SVM. LLC yields accuracies of up to 73.4% on Caltech-101 and 47.7% on Caltech-256 (Wang et al., 2010). These results are better than those obtained with fixed architectures such as the one presented in this paper, because the second layer performs an unsupervised optimization adapted to the data set.

In this work we observed that average pooling can provide competitive and even better results than max pooling. According to analyses and numerical experiments performed on sparse features (Boureau et al., 2010), max pooling should perform better than average pooling. Caltech images are piecewise regular, so the wavelet coefficients are indeed sparse. It seems, however, that taking the modulus of complex wavelet coefficients improves the results obtained with average pooling relative to max pooling. When using real wavelets and an absolute value non-linearity, average and max pooling achieve similar performance. The max pooling was implemented with and without overlapping windows. Using square windows, max pooling obtained 60.5% accuracy on Caltech-101 without overlapping windows and 62.0% with overlapping windows, well below the average pooling results presented in Table 1. With real Haar wavelets, we observed that max pooling and average pooling perform similarly, with respectively 66.9% and 67.0% accuracy on Caltech-101. These different behaviors with real and complex wavelets are still not well understood.
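For reference, the two pooling operators compared in these experiments can be sketched as follows on a single coefficient map, with non-overlapping w-by-w windows; the window size and the overlapping variant are left out for brevity.

import numpy as np

def average_pool(u, w):
    """Non-overlapping average pooling over w-by-w windows."""
    H, W = u.shape
    u = u[:H // w * w, :W // w * w]  # crop to a multiple of w
    return u.reshape(H // w, w, W // w, w).mean(axis=(1, 3))

def max_pool(u, w):
    """Non-overlapping max pooling over w-by-w windows."""
    H, W = u.shape
    u = u[:H // w * w, :W // w * w]
    return u.reshape(H // w, w, W // w, w).max(axis=(1, 3))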

4 Conclusion

We showed that a two-layer scattering convolution network, which involves no learning, provides accuracies on the Caltech databases similar to those of a two-layer neural network pretrained on ImageNet. This scattering transform linearizes the variability relative to translations and rotations, and provides invariants to translations with an average pooling. It involves no inner renormalization, only a standardization of the output coefficients.

Many further improvements can still be brought to these first scattering layers, in particular by optimizing the scale invariance. These preliminary experiments indicate that wavelet scattering transforms provide a good approach to understanding the first two layers of convolution networks for complex image classification, and an efficient initialization of these two layers.

References