pyscatwave
Fast Scattering Transform with CuPy/PyTorch
We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, obtaining the best results to date with predefined representations while being competitive with deep CNNs. Using a shallow cascade of 1×1 convolutions, which encodes scattering coefficients that correspond to spatial windows of very small sizes, permits us to obtain AlexNet accuracy on the ImageNet ILSVRC2012 benchmark. We demonstrate that this local encoding explicitly learns invariance w.r.t. rotations. Combining scattering networks with a modern ResNet, we achieve a single-crop top-5 error of 11.4% on ImageNet ILSVRC2012, comparable to the ResNet-18 architecture, while utilizing only 10 layers. We also find that hybrid architectures can yield excellent performance in the small-sample regime, exceeding their end-to-end counterparts, through their ability to incorporate geometrical priors. We demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.
Image classification is a high-dimensional problem that requires building lower-dimensional representations that reduce non-informative image variabilities. For example, some of the main sources of variability are geometrical operations such as translations and rotations. An efficient classification pipeline necessarily builds invariants to these variabilities. Deep architectures build representations that lead to state-of-the-art results on image classification tasks [13]. These architectures are designed as very deep cascades of non-linear, end-to-end learned modules [22]. When trained on large-scale datasets they have been shown to produce representations that are transferable to other datasets [42, 15], which indicates they have captured generic properties of a supervised task that consequently do not need to be learned. Indeed, several works find geometrical structure in the filters of the earlier layers [19, 39] of deep CNNs. However, understanding the precise operations performed by those early layers is a complicated [38, 26] and possibly intractable task. In this work we investigate whether it is possible to replace these early layers by simpler cascades of non-learned operators that reduce variability while retaining discriminative information.
Indeed, there can be several advantages to incorporating predefined geometric priors via a hybrid approach that combines predefined and learned representations. First, end-to-end pipelines can be data-hungry and ineffective when the number of samples is low. Second, it can yield more interpretable classification pipelines which are amenable to analysis. Finally, it can reduce the spatial dimensions and the required depth of the learned modules.
A potential candidate for an image representation is the SIFT descriptor [23], which was widely used before 2012 as a feature extractor in classification pipelines [30, 31]. This representation was typically encoded via an unsupervised Fisher Vector (FV) and fed to a linear SVM. However, several works indicate that this is not a generic enough representation to build further modules on top of [21, 2]. Indeed, end-to-end learned features produce substantially better classification accuracy. A major improvement over SIFT can be found in the scattering transform [24, 6, 33], a type of deep convolutional network that retains discriminative information normally discarded by methods like SIFT while introducing geometric invariances and stability. Scattering transforms have been shown to produce representations that lead to top results on complex image datasets when compared to other unsupervised representations (even learned ones) [27]. This makes them an excellent candidate for the initial layers of a deep network. We thus investigate the use of scattering as a generic representation to combine with deep neural networks.
Related to our work, [28] proposed a hybrid representation for large-scale image recognition combining a predefined representation and neural networks (NNs): it uses a Fisher Vector encoding of SIFT and leverages NNs as scalable classifiers. In contrast, we use the scattering transform in combination with convolutional architectures. Our main contributions are as follows. First, we demonstrate that supervised local descriptors, obtained by shallow convolutions with very small spatial window sizes, permit us to obtain AlexNet accuracy on the ImageNet classification task (Subsection 2.3). We show empirically that these encoders build explicit invariance to local rotations (Subsection 3.2). Second, we propose hybrid networks that combine scattering with modern CNNs (Section 4) and show that using scattering and a ResNet of reduced depth, we obtain accuracy similar to ResNet-18 on ImageNet (Subsection 4.1). Finally, we demonstrate in Subsection 4.3 that scattering permits a substantial improvement in accuracy in the setting of limited data. Our highly efficient GPU implementation of the scattering transform is, to our knowledge, orders of magnitude faster than other implementations, and allows training very deep networks applying scattering on the fly. Our scattering implementation (http://github.com/edouardoyallon/pyscatwave) and pretrained hybrid models (http://github.com/edouardoyallon/scalingscattering) are available.
We introduce the scattering transform and motivate its use as a generic input for supervised tasks. A scattering network belongs to the class of CNNs whose filters are fixed as wavelets [27]. The construction of this network has strong mathematical foundations [24], meaning it is well understood, relies on few parameters, and is stable to a large class of geometric transformations. In general, the parameters of this representation do not need to be adapted to the bias of the dataset [27], making its output a suitable generic representation.
We then propose and motivate the use of supervised CNNs built on top of the scattering network. Finally, we propose a supervised encoding of scattering coefficients using 1×1 convolutions, which retains interpretability and locality properties.
In this section, we recall the definition of the scattering transform. Consider a signal x(u), with u the spatial position index, and an integer J, which is the spatial scale of our scattering transform. Let φ_J be a local averaging filter with a spatial window of scale 2^J (here, a Gaussian smoothing function). Applying the local averaging operator, we obtain the zeroth-order scattering coefficient, S^0 x = x ⋆ φ_J. This operation builds an approximate invariant to translations smaller than 2^J, but it also results in a loss of the high frequencies that are necessary to discriminate signals.
A solution to avoid the loss of high-frequency information is provided by the use of wavelets. A wavelet is an integrable function, localized in both the Fourier and space domains, with zero mean. A family of wavelets is obtained by dilating and rotating a complex mother wavelet ψ (here, a Morlet wavelet) such that ψ_{j,θ}(u) = 2^{-2j} ψ(2^{-j} r_{-θ} u), where r_{-θ} is the rotation by -θ, and 2^j is the scale of the wavelet. A given wavelet ψ_{j,θ} thus has its energy concentrated at scale j, in the angular sector θ. Let L be an integer parametrizing a discretization of the angles. A wavelet transform is the convolution of a signal with the family of wavelets introduced above, with an appropriate downsampling:

W x = { x ⋆ ψ_{j,θ}(u) : j ≤ J, θ = πl/L, 0 ≤ l < L }.
Observe that θ and j have been discretized: the wavelet is chosen to be selective in angle and localized in Fourier. With an appropriate discretization [27], W is approximately an isometry on the set of signals with limited bandwidth, which implies that the energy of the signal is preserved. This operator thus belongs to the category of multi-resolution analysis operators, each filter being excited by a specific scale and angle, but with output coefficients that are not invariant to translation. To achieve invariance we cannot simply apply the averaging φ_J to W x, since this gives a trivial invariant, namely zero.
To tackle this issue, we apply a non-linear pointwise complex modulus to W x, followed by the averaging φ_J, which builds a non-trivial invariant. Here, the mother wavelet is analytic, thus |x ⋆ ψ_{j,θ}| is regular [1], which implies that the Fourier energy of |x ⋆ ψ_{j,θ}| is more likely to be contained in a lower frequency domain than that of x ⋆ ψ_{j,θ}. Thus, the averaging φ_J preserves more of its energy. It is then possible to define the first-order scattering coefficients, S^1 x = { |x ⋆ ψ_{j_1,θ_1}| ⋆ φ_J }. Again, the use of the averaging φ_J builds an invariant to translations up to 2^J.
Once more, we apply a second wavelet transform, with the same filters as the first, on each channel |x ⋆ ψ_{j_1,θ_1}|. This permits the recovery of the high frequencies lost due to the averaging applied at the first order, leading to the second-order scattering coefficients S^2 x = { ||x ⋆ ψ_{j_1,θ_1}| ⋆ ψ_{j_2,θ_2}| ⋆ φ_J }. We only compute increasing paths, i.e. j_1 < j_2, because non-increasing paths have been shown to bear no energy [6]. We do not compute higher-order scatterings, because their energy is negligible [6]. We call Sx the final scattering coefficient, corresponding to the concatenation of the order 0, 1 and 2 scattering coefficients, intentionally omitting the path index of each representation. In the case of colored images, we apply a scattering transform independently to each RGB channel of the image, which means Sx has 3(1 + LJ + L²J(J-1)/2) channels, and the original image is downsampled by a factor 2^J [6].
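The cascade above (local averaging, wavelet modulus, averaging again) can be sketched in a few lines of NumPy. This is a toy illustration, not the pyscatwave implementation: the Morlet envelope width and carrier frequency below are ad hoc choices, and all convolutions are circular via the FFT.

```python
import numpy as np

def gaussian_2d(n, sigma):
    # Centered 2-D Gaussian phi, normalized to unit sum (local averaging filter).
    u = np.arange(n) - n // 2
    g = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def morlet_2d(n, sigma, theta, xi):
    # Morlet-like wavelet psi_{j,theta}: an oriented complex exponential under
    # a Gaussian envelope, corrected to have exactly zero mean.
    u = np.arange(n) - n // 2
    x, y = np.meshgrid(u, u, indexing="ij")
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    wave = envelope * np.exp(1j * xi * rotated)
    return wave - envelope * (wave.sum() / envelope.sum())  # zero mean

def conv_fft(img, filt):
    # Circular convolution via the FFT; ifftshift moves the filter center to (0, 0).
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(filt)))

def scattering_order01(img, J=2, L=4):
    # Order 0: S0 = x * phi_J, subsampled by 2^J.
    # Order 1: S1 = |x * psi_{j,theta}| * phi_J, subsampled by 2^J.
    n, sub = img.shape[0], 2 ** J
    phi = gaussian_2d(n, sigma=2.0 ** J)
    s0 = conv_fft(img, phi).real[::sub, ::sub]
    s1 = []
    for j in range(J):
        for l in range(L):
            psi = morlet_2d(n, sigma=0.8 * 2 ** j,
                            theta=np.pi * l / L, xi=3.0 / 2 ** j)
            modulus = np.abs(conv_fft(img, psi))       # wavelet modulus
            s1.append(conv_fft(modulus, phi).real[::sub, ::sub])
    return s0, np.stack(s1)
```

For a 32×32 input with J = 2 and L = 4, this yields one 8×8 order-0 map and J·L = 8 order-1 maps; second-order coefficients would repeat the wavelet-modulus step on each modulus channel (for j_2 > j_1) before the final averaging.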
This representation has been proved to linearize small deformations of images [24], and to be non-expansive and almost complete [10, 5], which makes it an ideal input to a deep network algorithm that can build invariants to this local variability via a first linear operator. We discuss it as an ideal initialization in the next subsection.
We now motivate the use of a supervised architecture on top of a scattering network. Scattering transforms have yielded excellent numerical results [6] on datasets where the variabilities are completely known, such as MNIST or FERET. In these tasks, the problems encountered are linked to sample and geometric variance, and handling these variances essentially solves the problems. However, in classification tasks on more complex image datasets, such variabilities are only partially known, as there are also non-geometrical intra-class variabilities. Although applying the scattering transform on datasets like CIFAR or Caltech leads to nearly state-of-the-art results in comparison to other unsupervised representations, there is a large gap in performance when comparing to supervised representations [27]. CNNs fill in this gap; thus we consider the use of deep neural networks on top of generic scattering representations in order to reduce variabilities more complex than geometric ones.

Recent works [25, 7, 17] have suggested that deep networks could build an approximation of the group of symmetries of a classification task and apply transformations along the orbits of this group, like convolutions. This group of symmetries corresponds to some of the non-informative intra-class variabilities, which must be reduced by a supervised classifier. [25] argues that to each layer corresponds an approximated Lie group of symmetries, and that this approximation is progressive, in the sense that the dimension of these groups increases with depth. For instance, the main linear Lie group of symmetries of an image is the translation group, ℝ². In the case of a wavelet transform obtained by rotation of a mother wavelet, it is possible to recover a new subgroup of symmetries after a modulus non-linearity, the rotation group SO(2), and the group of symmetries at this layer is the roto-translation group ℝ² ⋊ SO(2). If no non-linearity were applied, a convolution along the angles would be equivalent to a spatial convolution. Discovering explicitly the next, non-geometrical groups of symmetries is however a difficult task [17]; nonetheless, the roto-translation group seems to be a good initialization for the first layers. In this work, we investigate this hypothesis and avoid learning those well-known symmetries.
Thus, we consider two types of cascaded deep network on top of scattering. The first, referred to as the Shared Local Encoder (SLE), learns a supervised local encoding of the scattering coefficients. We motivate and describe the SLE in the next subsection as an intermediate representation between unsupervised local pipelines, widely used in computer vision prior to 2012, and modern supervised deep feature learning approaches. The second, referred to as a hybrid CNN, is a cascade of a scattering network and a standard CNN architecture, such as a ResNet [13]. In the sequel we empirically analyse hybrid CNNs, which greatly reduce the spatial dimensions on which convolutions are learned and can reduce sample complexity.

We now discuss the spatial support of different approaches, in order to motivate our local encoder for scattering. In CNNs constructed for large-scale image recognition, the representation at a specific spatial location and depth depends upon large parts of the initial input image and thus mixes global information. For example, at depth 2 of [19], the effective spatial support of the corresponding filter is already 32 pixels (out of 224). The representations derived from CNNs trained on large-scale image recognition are often reused as representations in other computer vision tasks or datasets [40, 42].
On the other hand, prior to 2012, local encoding methods led to state-of-the-art performance on large-scale visual recognition tasks [30]. In these approaches, local neighborhoods of an image were encoded using methods such as SIFT descriptors [23], HOG [9], and wavelet transforms [32]. They were often combined with an unsupervised encoding, such as sparse coding [4] or Fisher Vectors (FVs) [30]. Indeed, many works in classical image processing and classification [18, 4, 30, 28] suggest that local encodings can describe an image efficiently. Additionally, for some algorithms that rely on local neighbourhoods, the use of local descriptors is essential [23]. Observe that a representation based on local non-overlapping spatial neighborhoods is simpler to analyze, as there is no ad hoc mixing of spatial information. Nevertheless, on large-scale classification, this approach was surpassed by fully supervised learned methods [19].

We show that it is possible to apply a similarly local, yet supervised, encoding algorithm to a scattering transform, as suggested in the conclusion of [28]. First, observe that at each spatial position u, the scattering coefficient Sx(u) corresponds to a descriptor of a local neighborhood of spatial size 2^J. As explained in Subsection 2.1
, each of our scattering coefficients is obtained with a stride of 2^J, which means the final representation can be interpreted as a non-overlapping concatenation of descriptors. Then, let F be a cascade of fully connected layers that we apply identically to each Sx(u). Then F is a cascade of CNN operators with spatial support size 1×1, thus we write it as 1×1 convolutions. In the sequel, we do not make any distinction between the 1×1 CNN operators and the operator F acting on Sx(u). We refer to F as a Shared Local Encoder. We note that, similarly to Sx, F(Sx) corresponds to non-overlapping encoded descriptors. To learn a supervised classifier on a large-scale image recognition task, we cascade fully connected layers on top of the SLE.

Combined with a scattering network, the supervised SLE has several advantages. Since the input corresponds to scattering coefficients, whose channels are structured, the first layer of F is structured as well. We further explain and investigate this first layer in Subsection 3.2. Unlike in standard CNNs, there are no linear combinations of spatial neighborhoods of the different feature maps, thus the analysis of this network need only focus on the channel axis. Observe that if F were fed with raw images, for example in gray scale, it could not build any non-trivial operation except separating different level sets of these images.
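The identification of a shared per-position encoder with a 1×1 convolution can be checked directly; the sizes below are illustrative, not those of the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scattering-like input: C channels on an H x W grid of non-overlapping
# descriptors, and a fully connected layer of output width D.
C, H, W, D = 16, 7, 7, 32
S = rng.standard_normal((C, H, W))
W_fc = rng.standard_normal((D, C))

# Applying the same fully connected layer independently at every position u...
per_position = np.empty((D, H, W))
for i in range(H):
    for j in range(W):
        per_position[:, i, j] = W_fc @ S[:, i, j]

# ...is exactly a 1x1 convolution: one matrix acting along the channel axis only.
as_1x1_conv = np.einsum("dc,chw->dhw", W_fc, S)

assert np.allclose(per_position, as_1x1_conv)
```

No spatial neighborhoods are mixed in either form: the output still consists of non-overlapping encoded descriptors.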
In the next section, we investigate empirically this supervised SLE trained on the ILSVRC2012 dataset.
We evaluate the supervised SLE on the ImageNet ILSVRC2012 dataset. This is a large and challenging natural color image dataset consisting of 1.2 million training images and 50,000 validation images, divided into 1000 classes. We then show some unique properties of this network and evaluate its features on a separate task.
Table 1: Single-crop Top 1 and Top 5 accuracy (%) on ImageNet ILSVRC2012.

Method | Top 1 | Top 5
FV + FC [28] | 55.6 | 78.4
FV + SVM [30] | 54.3 | 74.3
AlexNet | 56.9 | 80.1
Scat + SLE | 57.0 | 79.6
We first describe our training pipeline, which is similar to [41]. We trained our network for 90 epochs to minimize the standard cross-entropy loss, using SGD with momentum 0.9 and a batch size of 256, with weight decay. The initial learning rate is decayed by a constant factor at fixed epochs. During the training process, each image is randomly rescaled, cropped, and flipped as in [13]. The final crop size is 224×224. At testing, we rescale the image to a size of 256 and extract a center crop of size 224×224.

We use an architecture which consists of a cascade of a scattering network, an SLE F, followed by fully connected layers. Figure 2 describes our architecture. We select the parameter J = 4 for our scattering network, which means the output representation has size 14×14 spatially and 1251 in the channel dimension. F is implemented as 3 layers of 1×1 convolutions with layer size 1024. There are 2 fully connected layers of output size 1524. For all learned layers we use batch normalization [16] followed by a ReLU [19] non-linearity. We compute the mean and variance of the scattering coefficients on the whole ImageNet training set, and standardize each spatial scattering coefficient with them.

Table 1 reports our numerical accuracies obtained with a single crop at testing, compared with local encoding methods and the AlexNet, which was the state-of-the-art approach in 2012. We obtain 20.4% Top 5 and 43.0% Top 1 errors. The performance is analogous to the AlexNet [19]. In terms of architecture, our hybrid model is analogous and comparable to that of [30, 28], for which SIFT features are extracted followed by FV [31] encoding. Observe that the FV is an unsupervised encoding, whereas ours is supervised. Two approaches are then used: either the spatial localization is handled by a Spatial Pyramid Pooling [20], which is then fed to a linear SVM, or the spatial variables are directly encoded in the FVs and classified with a stack of four fully connected layers. This last method is a major difference with ours, as the resulting descriptor no longer has a spatial indexing; the spatial variables are instead quantized. Furthermore, in both cases the SIFT descriptors are densely extracted, corresponding to far more descriptors than the small grid of scattering coefficients we extract. Indeed, we tackle the non-linear aliasing (due to the fact that the scattering transform is not oversampled) via random cropping during training, which builds an invariant to small translations. In Top 1, [30] and [28] obtain respectively 44.4% and 45.7% error. Our method brings a substantial improvement of 1.4% and 2.7% respectively.
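The spatial and channel dimensions quoted throughout this paper all follow from J and L. A small helper (our own bookkeeping, not part of the released code) makes the counting explicit, assuming scattering order at most 2, L = 8 angles and 3 color channels:

```python
def scattering_output_shape(height, width, J, L=8, colors=3):
    # Channels per color: 1 (order 0) + L*J (order 1)
    # + L^2 * J*(J-1)/2 (order 2, increasing scale pairs j1 < j2).
    per_color = 1 + L * J + L ** 2 * (J * (J - 1) // 2)
    # Each coefficient is subsampled by 2^J in both spatial directions.
    return colors * per_color, height // 2 ** J, width // 2 ** J

# Shapes for the settings used in the experiments:
sle_imagenet = scattering_output_shape(224, 224, J=4)   # SLE pipeline
hybrid_imagenet = scattering_output_shape(224, 224, J=3)  # hybrid ImageNet ResNet
cifar = scattering_output_shape(32, 32, J=2)            # CIFAR-10
```

The J = 3 case reproduces the 651 = 3 × 217 channels mentioned later for the hybrid ImageNet network, and the J = 2 case the 243 channels used on CIFAR-10.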
The BVLC AlexNet (https://github.com/BVLC/caffe/wiki/Models-accuracy-on-ImageNet-2012-val) obtains a 43.1% single-crop Top 1 error, which is nearly equivalent to the 43.0% of our SLE network. The AlexNet has 8 learned layers and, as explained before, large receptive fields. On the contrary, our training pipeline consists of 6 learned layers with a small, constant receptive field, except for the final fully connected layers that build a representation mixing spatial information from different locations. This is a surprising result, as it seems to suggest that context information is only necessary at the very last layers to reach AlexNet accuracy.
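The receptive-field contrast drawn here is easy to make concrete. The helper below computes the input extent seen by one output unit of a cascade of (kernel, stride) layers; the layer lists in the example are generic, not a claimed transcription of AlexNet.

```python
def receptive_field(layers):
    # layers: sequence of (kernel_size, stride) pairs, from input to output.
    # Returns the 1-D extent of input pixels influencing one output unit.
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # each layer widens by (k-1) * current step
        jump *= stride              # strides compound the step between units
    return rf

# Two stacked 3x3 convolutions already see a 5-pixel extent, and strides make
# the growth compound quickly; a cascade of 1x1 layers, as in the SLE,
# leaves the receptive field unchanged.
```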
We briefly study the local SLE, which has a spatial extent of only 2^J, as a generic local image descriptor. We use the Caltech101 benchmark, a dataset of 9,144 images and 102 classes. We followed the standard protocol for evaluation [4] with 10 folds and evaluate per-class accuracy, with 30 training samples per class, using a linear SVM on the SLE descriptors. Applying our raw scattering network gives a baseline accuracy, and the output features of each successive layer of F bring further absolute improvements. The accuracy of the final SLE descriptor is similar to that reported for the final AlexNet layer in [42] and for sparse coding with SIFT [4]. However, in both of those cases spatial variability is removed, either by Spatial Pyramid Pooling [20] or by a cascade of large filters. By contrast, the concatenation of SLE descriptors is completely local.
Finding structure in the kernels of the early layers of a CNN [39, 42] is a complex task, and few empirical analyses exist that shed light on the structure of deeper layers [17]. A scattering transform with scale J can be interpreted as a CNN of fixed depth [27] whose channel indexes correspond to different scattering frequency indexes, which provides structure. This structure is consequently inherited by the first layer F_1 of our SLE F. We analyse F_1 and show that it explicitly builds invariance to local rotations, and that the Fourier basis associated with rotation is a natural basis for this operator. Understanding the nature of the next two layers is a promising direction.
We first establish some mathematical notions linked to the rotation group that we use in our analysis. For the sake of clarity, we do not consider the roto-translation group. For a given input image x, let r_α x be the image rotated by angle α, which corresponds to the linear action of rotation on images. Observe that the scattering representation is covariant with rotation in the following sense: rotating the input by α rotates the spatial variable of the scattering coefficients and shifts their angular indexes θ by α. Besides, in the case of the second-order coefficients, the pair (θ_1, θ_2) is covariant with rotations, but the difference θ_1 - θ_2 is invariant to rotation: it corresponds to a relative rotation.
The unitary representation framework [36] permits building a Fourier transform on a compact group, such as the rotations. It is even possible to build a scattering transform on the roto-translation group [33]. Fourier analysis permits the measurement of the smoothness of an operator and, in the case of a CNN operator, it is a natural basis.

We can now numerically analyse the nature of the operations performed along the angle variables by the first layer F_1 of F. Let us define F_1^0, F_1^1, F_1^2 as the restrictions of F_1 to the order 0, 1, 2 scattering coefficients respectively. Let d be the index of an output feature channel and c the color index. In this case, F_1^0(d, c) is simply the weight associated with the smoothing φ_J; F_1^1 depends only on (d, c, j, θ), and F_1^2 depends on (d, c, j_1, θ_1, j_2, θ_2). We would like to characterize the smoothness of these operators with respect to the variables θ and (θ_1, θ_2), because Sx is covariant to rotations.
To this end, we define F̂_1^1 and F̂_1^2 as the Fourier transforms of these operators along the variables θ and (θ_1, θ_2) respectively. These operators are thus expressed in the tensorial frequency domain, which corresponds to a change of basis. In this experiment, we normalized each filter of F_1 to have unit norm, and each order of the scattering coefficients is normalized as well. Figure 3 shows the distribution of the amplitudes of the Fourier coefficients. We observe that the distribution is shaped like a Laplace distribution, which is an indicator of sparsity.
To illustrate that this is a natural basis, we explicitly sparsify this operator in its frequency basis and verify empirically that the network accuracy is minimally changed. We do this by thresholding the coefficients of the operators in the Fourier domain by ε: we replace the operators F̂_1^1 and F̂_1^2 by their thresholded counterparts. We select an ε that sets most of the coefficients to 0, as indicated on Figure 3. Without retraining, our network's performance degrades only by a small absolute amount on ILSVRC2012 Top 1 and Top 5. We have thus shown that this basis permits a sparse approximation of the first layer F_1. We now show evidence that this operator builds an explicit invariant to local rotations.
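The sparsification step can be mimicked on a synthetic operator. We build a toy first-layer tensor that is smooth along the angle index (as observed for F_1), threshold its angular Fourier coefficients, and check that the operator is barely perturbed; all sizes and the smoothness model below are our own assumptions, not the trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights F1[d, j, l]: output feature d, scale j, angle index l.
D, J, L = 64, 3, 8
phase = rng.uniform(0, 2 * np.pi, (D, J, 1))
angles = 2 * np.pi * np.arange(L) / L
# Smooth along the angle (a single angular harmonic) plus a little noise.
F1 = np.cos(angles + phase) + 0.01 * rng.standard_normal((D, J, L))

# Change of basis: Fourier transform along the angle axis, then threshold.
F1_hat = np.fft.fft(F1, axis=-1)
eps = np.quantile(np.abs(F1_hat), 0.5)              # zeroes about half the coefficients
F1_hat_sparse = np.where(np.abs(F1_hat) > eps, F1_hat, 0)
F1_sparse = np.fft.ifft(F1_hat_sparse, axis=-1).real

# Relative perturbation of the operator after sparsification: small, because
# the energy of an angle-smooth operator sits in a few low frequencies.
rel_err = np.linalg.norm(F1_sparse - F1) / np.linalg.norm(F1)
kept = np.count_nonzero(F1_hat_sparse) / F1_hat.size
```

For an operator that is rough along the angle axis, the same thresholding would destroy it; the small `rel_err` here is exactly the signature of angular smoothness the paper measures.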
To aid our analysis we introduce the following quantities:

E_1(ω) = Σ_{d,c,j} |F̂_1^1(d, c, j, ω)|²,   E_2(ω_1, ω_2) = Σ_{d,c,j_1,j_2} |F̂_1^2(d, c, j_1, ω_1, j_2, ω_2)|²   (1)
They correspond to the energy propagated by F_1 at a given angular frequency, and permit us to quantify the smoothness of our first-layer operator w.r.t. the angular variables. Figure 4 shows the variation of E_1 and E_2 along frequencies. For example, if F_1^1 and F_1^2 were convolutional along θ and (θ_1, θ_2), these quantities would correspond to their respective singular values. One sees that the energy is concentrated in the low-frequency domain, which indicates that F_1 explicitly builds an invariant to local rotations.

We now demonstrate that cascading modern CNN architectures on top of the scattering network can produce high-performance classification systems. We apply hybrid convolutional networks on the ImageNet ILSVRC2012 dataset as well as the CIFAR-10 dataset and show that they can achieve performance comparable to modern end-to-end learned approaches. We then evaluate the hybrid networks in the setting of limited data by utilizing a subset of CIFAR-10 as well as the STL-10 dataset, and show that we can obtain substantial improvements in performance over analogous end-to-end learned CNNs.
Table 2: Single-crop Top 1 and Top 5 accuracy (%) and parameter counts on ImageNet ILSVRC2012.

Method | Top 1 | Top 5 | Params
AlexNet | 56.9 | 80.1 | 61M
VGG16 [12] | 68.5 | 88.7 | 138M
Scat + Resnet10 (ours) | 68.7 | 88.6 | 12.8M
Resnet18 (ours) | 68.9 | 88.8 | 11.7M
Resnet200 [41] | 78.3 | 94.2 | 64.7M
Table 3: Accuracy (%) on CIFAR-10.

Method | Accuracy
Unsupervised representations:
RotoScat + SVM [27] | 82.3
ExemplarCNN [11] | 84.3
DCGAN [29] | 82.8
Scat + FC (ours) | 84.7
Supervised and hybrid:
Scat + Resnet (ours) | 93.1
Highway network [35] | 92.4
All-CNN [34] | 92.8
WRN 16-8 [41] | 95.7
WRN 28-10 [41] | 96.0
We showed in the previous section that an SLE followed by FC layers can produce results comparable with the AlexNet [19] on the ImageNet classification task. Here we consider cascading the scattering transform with a modern CNN architecture, such as a ResNet [41, 13]. We take the ResNet-18 [41] as a reference and construct a similar architecture with only 10 layers on top of the scattering network. We utilize a scattering transform with J = 3, such that the CNN is learned over a spatial dimension of 28×28 and a channel dimension of 651 (3 color channels of 217 each). The ResNet-18 typically has 4 residual stages of 2 blocks each, which gradually decrease the spatial resolution [41]. Since we utilize the scattering as a first stage, we remove two blocks from our model. The network is described in Table 4.
Table 4: Scat + Resnet-10 architecture for ImageNet.

Stage | Output size | Stage details
scattering | 28×28 | 651 channels
conv1 | 28×28 | [256]
conv2 | 28×28 | 2 residual blocks
conv3 | 14×14 | 2 residual blocks
avgpool | 1×1 |
Table 5: Hybrid scattering + WRN architectures for CIFAR-10 and STL-10 (output sizes listed as CIFAR, STL).

Stage | Output size | Stage details
scattering | 8×8, 24×24 | 243 channels
conv1 | 8×8, 24×24 | width 16k, 32k
conv2 | 8×8, 24×24 |
conv3 | 8×8, 12×12 |
avgpool | 1×1, 1×1 |
We use the same optimization and data augmentation procedure described in Section 3.1, but with learning-rate drops at epochs 30, 60, and 80. We find that, when both methods are trained with the same optimization and data augmentation settings, and when the number of parameters is similar (12.8M versus 11.7M), the scattering network combined with a ResNet achieves analogous performance (11.4% Top 5 for our model versus 11.1%), while utilizing fewer layers. The accuracy is reported in Table 2 and compared to other modern CNNs.
This demonstrates both that the scattering network does not lose discriminative power and that it can be used to replace the early layers of standard CNNs. We also note that the learned convolutions operate over a drastically reduced spatial resolution, without resorting to pre-trained early layers, which can lose discriminative information or become too task-specific.
We now consider the popular CIFAR-10 dataset, consisting of 32×32 colored images: 50,000 images for training and 10,000 for testing, divided into 10 classes. We perform two experiments: the first with a cascade of fully connected layers, which allows us to evaluate the scattering transform as an unsupervised representation; in the second, we again use a hybrid CNN architecture with a ResNet built on top of the scattering transform.
For the scattering transform we used J = 2, which means the output of the scattering stage is 8×8 spatially and 243 in the channel dimension. We follow the training procedure prescribed in [41], utilizing SGD with momentum 0.9, a batch size of 128, weight decay, and modest data augmentation of the dataset by random cropping and flipping. The initial learning rate is 0.1, and we reduce it by a factor of 5 at epochs 60, 120 and 160. The models are trained for 200 epochs in total. We used the same optimization and data augmentation pipeline for training and evaluation in both cases. We utilize batch normalization at all layers, which leads to a better conditioning of the optimization [16]. Table 3 reports the accuracy in the unsupervised and supervised settings and compares them to other approaches.
In the unsupervised comparison we consider the task of classification using only unsupervised features. Combining the scattering transform with an NN classifier consisting of 3 hidden layers of fixed width, we show that one can obtain state-of-the-art classification accuracy among unsupervised features. This approach outperforms all methods utilizing learned and non-learned unsupervised features, further demonstrating the discriminative power of the scattering network representation.
In the supervised task we compare to state-of-the-art approaches on CIFAR-10, all based on end-to-end learned CNNs. We use a hybrid architecture similar to the successful wide residual network (WRN) [41]. Specifically, we modify the WRN of 16 layers, which consists of 4 convolutional stages. Denoting k the widening factor, after the scattering output we use a first stage of width 16k. We add intermediate layers to increase the effective depth without increasing the number of parameters too much. Finally, we apply a dropout of 0.2 as specified in [41]. Using a width of 32 we achieve an accuracy of 93.1%. This is superior to several benchmarks but performs worse than the original ResNet [13] and the wide ResNet [41]. We note that training procedures, including data augmentation and optimization settings, have been heavily optimized for networks trained directly on natural images; while we use them largely out of the box, we believe that regularization, normalization, and data augmentation techniques could be designed specifically for scattering networks.
A major application of a hybrid representation is in the setting of limited data. Here the learning algorithm is limited in the variations it can observe or learn from the data, such that introducing a geometric prior can substantially improve performance. We evaluate our algorithm on the limited sample setting using a subset of CIFAR10 and the STL10 dataset.
We take subsets of decreasing size of the CIFAR dataset and train both baseline CNNs and counterparts that utilize the scattering as a first stage. We perform experiments using subsets of 1000, 500, and 100 samples, that are split uniformly amongst the 10 classes.
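Class-balanced subsampling of this kind can be sketched as follows (an illustrative helper; the paper does not publish its splitting code):

```python
import numpy as np

def balanced_subset(labels, n_samples, seed=0):
    # Draw n_samples indices split uniformly amongst the classes,
    # sampling without replacement inside each class.
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    per_class = n_samples // len(classes)
    picked = [rng.choice(np.flatnonzero(labels == c), per_class, replace=False)
              for c in classes]
    return np.concatenate(picked)
```

With 10 classes, requesting 1000, 500, or 100 samples yields exactly 100, 50, or 10 images per class.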
We use as a baseline the Wide ResNet [41] of depth 16 and width 8, which shows near state-of-the-art performance on the full CIFAR-10 task in the supervised setting. This network consists of 4 stages of progressively decreasing spatial resolution, detailed in Table 1 of [41]. We construct a comparable hybrid architecture that removes a single stage and all strides, as the scattering output is already downsampled in spatial resolution. This architecture is described in Table 5. Unlike the baseline, referred to from here on as WRN 16-8, our architecture has 12 layers and equivalent width, while keeping the spatial resolution constant through all stages prior to the final average pooling.
We use the same training settings for our baseline, WRN 16-8, and our hybrid scattering + WRN 12-8. The settings are the same as those described for CIFAR-10 in the previous section, with the only difference being that we apply a multiplier to the learning-rate schedule and to the maximum number of epochs. The multiplier is set to 10, 20, and 100 for the 1000-, 500-, and 100-sample cases respectively. For example, the default schedule of 60, 120, 160 becomes 600, 1200, 1600 in the case of 1000 samples and a multiplier of 10. Finally, in the case of 100 samples we use a batch size of 32 in lieu of 128.
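The schedule stretching is mechanical; a minimal sketch of the rule stated above (our formulation, with the 200-epoch budget taken from the CIFAR-10 section):

```python
def scaled_schedule(max_epochs, drop_epochs, multiplier):
    # Small-sample training: multiply both the learning-rate drop epochs
    # and the total epoch budget by the same factor.
    return max_epochs * multiplier, [e * multiplier for e in drop_epochs]

# 1000-sample case: multiplier 10 turns drops at 60, 120, 160 into 600, 1200, 1600.
total_epochs, drops = scaled_schedule(200, [60, 120, 160], 10)
```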
Table 6 reports the accuracy averaged over 5 different subsets, with the corresponding standard error. In this small sample setting, the hybrid network outperforms the purely CNN-based baseline, particularly when the sample size is small. This is not surprising, as we incorporate a geometric prior into the representation.
Method            100         500         1000
WRN 16-8          34.7 ± 0.8  46.5 ± 1.4  60.0 ± 1.8
Scat + WRN 12-8   38.9 ± 1.2  54.7 ± 0.6  62.0
Method                                   Accuracy
Scat + WRN 19-8                          76.0 ± 0.6
CNN [37]                                 70.1 ± 0.6
Exemplar CNN [11]                        75.4 ± 0.3
Stacked what-where AE [43]               74.33
Hierarchical Matching Pursuit (HMP) [3]  64.5 ± 1
Convolutional K-means Network [8]        60.1 ± 1
The STL10 dataset consists of colored images of size , with only 5000 labeled training images divided equally among 10 classes and 8000 test images. The larger image size and the small number of available samples make this a challenging image classification task. The dataset also provides 100,000 unlabeled images for unsupervised learning. We do not use these images in our experiments, yet we are able to outperform all methods that learn unsupervised representations from them, obtaining very competitive results on STL10.
We apply a hybrid convolutional architecture, similar to the one used in the small-sample CIFAR task, adapted to the input size of . The architecture is described in Table 5. We use the same data augmentation as for the CIFAR datasets. We apply SGD with learning rate 0.1 and a learning rate decay of 0.2 applied at epochs 1500, 2000, 3000, and 4000; training runs for 5000 epochs. At training and evaluation we use the standard 10-fold procedure, in which each fold uses 1000 training images. The result averaged over the 10 folds is reported in Table 7. Unlike other approaches, we do not use the 4000 remaining training images for hyperparameter tuning on each fold, as this is not representative of typical small-sample situations; instead we train with the same settings on each fold. The best reported result in the purely supervised case is a CNN [37, 11] whose hyperparameters were automatically tuned using 4000 validation images, achieving 70.1 accuracy. The other competitive methods on this dataset use the unlabeled data for unsupervised learning before applying supervised methods. To compare with [14], we also train on the full training set of 5000 images, obtaining an accuracy of  on the test set, which is substantially higher than that reported in [14] using unsupervised learning on the full unlabeled and labeled training sets. The competing techniques add several hyperparameters and require an additional engineering process. Applying a hybrid network, on the other hand, is straightforward and very competitive with all existing approaches, without using any unsupervised learning.
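The 10-fold protocol above amounts to training one model per predefined fold of 1000 images and averaging the test accuracies. A minimal sketch, where `train_and_eval` is a hypothetical placeholder for an actual training run:

```python
import statistics

def ten_fold_accuracy(train_and_eval, folds):
    """Train once per fold and return (mean accuracy, standard error).

    folds: the 10 predefined index lists (1000 training images each).
    train_and_eval: callable mapping a fold to a test accuracy (placeholder).
    """
    accs = [train_and_eval(fold) for fold in folds]
    mean = statistics.mean(accs)
    stderr = statistics.stdev(accs) / len(accs) ** 0.5
    return mean, stderr

# Usage with a stub standing in for real training (accuracies are fabricated
# for illustration only):
fake_accs = iter([75.1, 76.3, 75.8, 76.0, 75.5, 76.4, 75.9, 76.1, 75.7, 76.2])
mean, stderr = ten_fold_accuracy(lambda fold: next(fake_accs), [None] * 10)
```

Note that, as in the text, every fold is trained with identical settings; no per-fold validation split is carved out of the remaining 4000 images.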
In addition to showing that hybrid networks perform well in the small-sample regime, these results, along with our unsupervised CIFAR10 results, suggest that completely unsupervised feature learning on natural image data may still not outperform supervised learning methods and predefined representations for downstream discriminative tasks. One possible explanation is that, for natural images, learning variabilities more complex than geometric ones (e.g. the roto-translation group) in an unsupervised way may be very challenging or possibly ill-posed.
This work demonstrates a competitive approach to large-scale visual recognition based on scattering networks, in particular on ILSVRC2012. On unsupervised CIFAR10 and in small data regimes on CIFAR10 and STL10, we demonstrate state-of-the-art results. We build a supervised Shared Local Encoder that permits scattering networks to surpass other local encoding methods on ILSVRC2012. With just 3 learned layers, this network permits analysis of the operations performed.
Our work also suggests that predefined features are still of interest: they can shed light on deep learning techniques and make them more interpretable. Combined with appropriate learning methods, they could offer the theoretical guarantees necessary to engineer better deep models and stable representations.
The authors would like to thank Mathieu Andreux, Matthew Blaschko, Carmine Cella, Bogdan Cirstea, Michael Eickenberg, and Stéphane Mallat for helpful discussions and support. The authors would also like to thank Rafael Marini and Nikos Paragios for the use of computing resources, and Florent Perronnin for providing important details of their work. This work is funded by the ERC grant InvariantClass 320959, a grant for PhD students of the Conseil régional d'Ile-de-France (RDM-IdF), Internal Funds KU Leuven, FP7-MC-CIG 334380, an Amazon Academic Research Award, and DIGITEO 2013-0788D - SOPRANO.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 660–667, 2013.
Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pages 766–774, 2014.