Curriculum By Texture

03/03/2020 ∙ by Samarth Sinha, et al.

Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification and segmentation. One factor for the success of CNNs is that they have an inductive bias that assumes a certain type of spatial structure is present in the data. Recent work by Geirhos et al. (2018) shows that learning in CNNs causes the learned models to be biased towards high-frequency textural information, compared to low-frequency shape information in images. Many tasks generally require both shape and textural information. Hence, we propose a simple curriculum-based scheme which reduces the texture bias of CNNs while enabling them to represent both the shape and textural information. We propose to augment the training of CNNs by controlling the amount of textural information that is available to the CNNs during the training process, by convolving the output of a CNN layer with a low-pass filter, or simply a Gaussian kernel. By reducing the standard deviation of the Gaussian kernel, we are able to gradually increase the amount of textural information available as training progresses, and hence reduce the texture bias. Such an augmented training scheme significantly improves the performance of CNNs on various image classification tasks, while adding no additional trainable parameters or auxiliary regularization objectives. We also observe significant improvements when using the trained CNNs to perform transfer learning on a different dataset, and when transferring to a different task, which shows how CNNs learned using the proposed method act as better feature extractors.




1 Introduction

Deep Learning models have revolutionized the field of computer vision, leading to great progress in recent years. Convolutional Neural Networks (CNNs) (LeCun et al., 1998) have proven to be a very effective class of models, enabling state-of-the-art performance on many computer vision tasks such as image recognition (Krizhevsky et al., 2012; He et al., 2016), semantic segmentation (Long et al., 2015; Ronneberger et al., 2015), object detection (Girshick, 2015; Ren et al., 2015), pose estimation (Xiao et al., 2018), and many others.

CNNs’ success is due to their ability to learn meaningful representations of images. To represent images effectively, CNNs perform convolutions on an image using learnable kernels, which gives them a strong inductive bias towards local spatial equivariance. This strong inductive bias in turn allows them to learn small spatial transformations and local features in an image.

Recently, Geirhos et al. (2018) showed that along with a spatial inductive bias, CNNs are also biased towards high-frequency, or textural, information in images. This bias limits a CNN’s ability to exploit low-frequency, or shape, information that may be relevant for prediction. Geirhos et al. (2018) also showed that the representational power of CNNs improves when they are forced to also utilize the available shape information in the visual data when predicting the label. But how best to design algorithms that allow CNNs to meaningfully use low-frequency information remains an open problem in computer vision. In this paper, we propose a simple method that alters a CNN’s training scheme so that it focuses on both the low- and high-frequency information that is useful for inference.

Specifically, we propose to train CNNs with a curriculum learning scheme in which the textural information available to the network is progressively increased as training progresses. Early in training, by constraining the available textural information, the network is forced to optimize the training objective using the low-frequency information present in the input. By controlling the textural information in this manner, CNNs are actively encouraged to predict the output (e.g. class label) using the available low-frequency information, and as training progresses, the networks are able to improve their output predictions as more and more textural information is added.

We control the flow of high-frequency information by convolving the output of each convolutional layer with a Gaussian kernel (Babaud et al., 1986). The Gaussian kernel acts as a low-pass filter, which hides high-frequency information from the network. For a Gaussian kernel, the standard deviation parameter controls how much high-frequency information is filtered; hence, by annealing the standard deviation of the Gaussian kernels, we can easily modulate the flow of textural information to the CNN over time. By simply annealing the standard deviation of the Gaussian kernels, we improve the performance of the CNN on standard vision tasks. The proposed method adds no additional trainable parameters, is generic, and can be used with any CNN variant.

Since our hypothesis is that our proposed method yields CNNs that better capture “global” information in images, we evaluate how well they perform when fine-tuned on unseen vision tasks. Performing better on different vision tasks would mean that by controlling the textural information, we learn networks that are fundamentally better feature extractors, and not just specific to the task they were trained for.

Our contributions can be summarized as:

  • We introduce a simple and effective solution that helps CNNs utilize useful low-frequency information in images by controlling the flow of textural information.

  • We conduct image classification experiments using standard vision datasets and CNN variants to see the importance of controlling the textural information during training.

  • We evaluate the models trained on ImageNet, with and without our proposed curriculum, as feature extractors to train “weak” classifiers on previously unseen data.

  • Furthermore, we also transfer the ImageNet-trained CNNs, with and without our proposed method, to different vision tasks such as semantic segmentation and object detection; by outperforming models that were trained without texture control, we show that models learned using our proposed method are superior at feature extraction.

2 Preliminary

2.1 Notation

Given a labeled dataset of the form D = {(x_i, y_i)}_{i=1}^{N}, y_i represents the ground-truth label for the input image x_i. For a given dataset, the network is optimized by

θ* = argmin_θ Σ_{i=1}^{N} L_task(f_θ(x_i), y_i),

where L_task represents the task-specific, differentiable loss function and f_θ is a parameterized neural network. Since our proposed method is a general modification to learning in CNNs, any task-specific L_task can be used for training. To denote the convolutional operation of some kernel k on some input x, we will use k ∗ x.

2.2 Convolutional Neural Networks

In deep learning, a typical CNN is composed of stacked trainable convolutional layers (LeCun et al., 1998), pooling layers (Boureau et al., 2010), and non-linearities (Nair and Hinton, 2010). A typical CNN layer can be mathematically represented as

h_l = σ(pool(W_l ∗ h_{l−1})),     (1)

where W_l are the learned weights of the convolutional kernel, pool represents a pooling layer, σ is a non-linearity such as ReLU (Nair and Hinton, 2010), and h_l is the output of the hidden layer. In practice, different non-linearities such as Softplus (Glorot et al., 2011) may be used.
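As an illustration, a minimal NumPy sketch of such a layer (a valid convolution, max pooling, then a ReLU non-linearity) might look as follows; the function names and the 2×2 pooling window are our own illustrative choices, not prescribed by the text.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D cross-correlation of a single-channel input x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; edge rows/columns that do not fit are dropped."""
    H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def cnn_layer(x, w):
    """One layer in the form above: a ReLU non-linearity of the pooled convolution."""
    return np.maximum(0.0, max_pool(conv2d(x, w)))
```

For an 8×8 input and a 3×3 kernel, the convolution yields a 6×6 map and the 2×2 pooling reduces it to 3×3.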

3 Related Work

3.1 Gaussian Kernels

Gaussian kernels are deterministic functions of the kernel size and the standard deviation σ. A 2D Gaussian kernel can be constructed using:

G_σ(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),

where x and y represent the two spatial dimensions in the kernel.
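This construction is straightforward to implement; below is a small NumPy sketch. The kernel size is a free implementation choice here, and we normalize the kernel to sum to one so that the subsequent blur preserves the mean of its input.

```python
import numpy as np

def gaussian_kernel_2d(size, sigma):
    """Build a normalized size x size Gaussian kernel with standard deviation sigma."""
    ax = np.arange(size) - (size - 1) / 2.0      # coordinates centred on the kernel
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                           # normalize so the weights sum to 1
```

Increasing sigma flattens the kernel, which filters more high-frequency content; decreasing it concentrates the mass at the centre, approaching an identity filter.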

Gaussian kernels have been extensively studied and used in traditional image processing and computer vision, in a field known as Scale Space Theory (Lindeberg, 2013, 1994; Sporring et al., 2013; Duits et al., 2004). Scale space theory aims to increase the scale-invariance of traditional computer vision algorithms by convolving the image with a Gaussian kernel. Scale space theory has been applied to corner detection (Zhong and Liao, 2007), optical flow (Alvarez et al., 1999) and modeling multi-scale landscapes (Blaschke and Hay, 2001). Gaussian kernels have also been widely used as low-pass filters in signal processing (Young and Van Vliet, 1995; Shin et al., 2005; Deng and Cahill, 1993).

Our work differs significantly from prior work: to our knowledge, no one has applied Gaussian kernels to the outputs of intermediate representations within a CNN. Since Gaussian kernels are deterministic functions, they are not trained using gradient descent and require no gradient updates; therefore we add no additional trainable parameters to the network.

3.2 CNN Architectural Improvements

Since CNNs were originally proposed by LeCun et al. (1998), many significant improvements have been proposed to stabilize training and improve the expressiveness of these networks. Deep CNNs were recently popularized by Krizhevsky et al. (2012), and many deep architectural variants have since followed (He et al., 2016; Simonyan and Zisserman, 2014; Szegedy et al., 2015, 2016). Different regularization and normalization methods have also been proposed to improve network generalization (Srivastava et al., 2014; Scherer et al., 2010) and training (Ioffe and Szegedy, 2015; Ba et al., 2016; Ulyanov et al., 2016; Wu and He, 2018; Ioffe, 2017). Yu and Koltun (2015) proposed using dilated kernels in CNNs to learn scale-invariant features in images.

Our work is significantly different from the previously proposed techniques, both in motivation and practice. Instead of stabilizing the training for CNNs or making the networks scale-invariant, this work considers a different inductive bias of CNNs: texture bias. To our knowledge, no other work has proposed techniques to increase the shape-bias of a CNN by directly changing the architecture of the network.

3.3 Curriculum Learning

Curriculum learning was originally defined by Bengio et al. (2009) as a way to train networks by organizing the order in which tasks are learned and incrementally increasing the difficulty of a task, as opposed to regular learning where all tasks are learned at the same time. Curriculum learning has been a popular area of study for reinforcement learning agents (Florensa et al., 2017; Sukhbaatar et al., 2017; Matiisen et al., 2019). Recent work has also proposed using curriculum learning for RNNs (Graves et al., 2017; Zaremba and Sutskever, 2014). Our approach to curriculum learning differs from previously proposed work in that we automatically build a curriculum for CNNs to learn more shape features: instead of progressively increasing the difficulty of the task, we progressively increase the amount of textural information available.

3.4 Pre-trained CNNs

Pre-trained CNNs have been thoroughly explored in transfer learning (Huh et al., 2016; Kornblith et al., 2019), task transfer learning (Guillaumin and Ferrari, 2012; Long et al., 2015; Girshick, 2015; Ren et al., 2015), and domain adaptation (Tzeng et al., 2015, 2017; Hoffman et al., 2017). Pre-trained CNNs are used as feature extractors for many vision tasks, and extracting features effectively is an important problem. Typically, networks are pre-trained on a large-scale image classification dataset (commonly the ImageNet dataset (Russakovsky et al., 2015)) and transferred to a different downstream task. Because many classical vision tasks rely heavily on strong feature extractors, this highlights the importance of learning CNNs that can extract both shape and textural information from images.

4 Curriculum By Texture

In this section, we will first describe an effective way to reduce textural information found in the input using layers of Gaussian kernels, and then describe how to design a curriculum to augment the training of modern CNN architectures.

4.1 Gaussian Kernel Layer

Similar to a kernel in a convolutional layer, a Gaussian kernel is a parameterized kernel, here with standard deviation σ. The hyperparameter σ controls how much the output will be “blurred” after a convolution operation: increasing σ results in a greater amount of blur. Another interpretation of a Gaussian kernel is as a low-pass filter, which masks high-frequency information from the input, depending on the chosen σ. Adding blur thus removes textural information from the input. Unlike the kernels of a CNN, Gaussian kernels are not trained via backpropagation, and are deterministic functions of σ.

We propose to augment a given CNN with a Gaussian kernel layer, added to the output of each convolutional layer in the CNN. Formally, this can simply be added to Eqn. 1 as

h_l = σ(pool(G_σ ∗ (W_l ∗ h_{l−1}))),

where G_σ is a Gaussian kernel with a chosen standard deviation σ.

By applying the Gaussian blur to the output of a convolutional layer, we smooth out features of the CNN outputs, and reduce textural information that the convolutional layer may have propagated. We perform this operation on the output of each CNN layer, to ensure that minimal textural information is being propagated through the network.

After these operations, the information available in h_l is predominantly low-frequency, and since the network must still optimize the task objective L_task, it is forced to learn meaningful features about the shapes of objects that may be important for prediction. Learning about shape gives the CNN more global context on the input x that is useful for predicting the correct label y.
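As a concrete, framework-agnostic sketch of this operation, the snippet below blurs every channel of a feature map with the same fixed, normalized Gaussian kernel, using “same” zero-padding so the spatial size is unchanged; the kernel size of 5 is an illustrative choice.

```python
import numpy as np

def gaussian_kernel_2d(size, sigma):
    """Normalized 2-D Gaussian kernel, deterministic in sigma."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_feature_map(h, sigma, size=5):
    """Convolve each channel of h (C, H, W) with the same fixed Gaussian.

    No trainable parameters are involved; 'same' zero-padding keeps the
    spatial dimensions of the feature map unchanged."""
    k = gaussian_kernel_2d(size, sigma)
    p = size // 2
    C, H, W = h.shape
    hp = np.pad(h, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(h)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(hp[c, i:i + size, j:j + size] * k)
    return out
```

Applied to a noisy feature map, the blur reduces the high-frequency energy: the variance of the output is lower than that of the input.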

4.2 Designing a Curriculum

To design an effective curriculum for training CNNs, we use the key insight from Geirhos et al. (2018), which suggests that CNNs naturally tend towards using high-frequency information during training. We propose to bias the training of CNNs by first focusing on low-frequency information, using a high value of σ for all Gaussian kernels in the network. By biasing the initial training and annealing the value of σ as training progresses, the network naturally learns from the increased availability of textural information. But since the initial training was biased towards focusing on shape information, the network is able to exploit both modes as it is trained.

The texture bias of CNNs is very important in designing the curriculum, since a trained network should utilize both high- and low-frequency spatial information. If CNNs did not already have a strong bias towards texture, then our proposed method would significantly hinder a network's ability to learn about texture in images. But since texture is preferred over shape, the network can adapt to an increased amount of textural context as training progresses and σ is annealed. Controlling the flow of textural information provided to CNNs is a simple and general method for training CNNs, and our proposed method can be applied to any CNN-based network.
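A minimal sketch of such an annealing schedule is below. The initial σ of 2 and the 5-epoch decay interval follow the experimental setup described in Section 5; the multiplicative decay factor of 0.9 is an illustrative stand-in, since the text leaves the exact rate unspecified.

```python
def sigma_schedule(epoch, sigma0=2.0, decay=0.9, step=5):
    """Step-decay curriculum: sigma is multiplied by `decay` every `step` epochs.

    sigma0=2.0 and step=5 follow the experiments; decay=0.9 is an
    assumed, illustrative value."""
    return sigma0 * decay ** (epoch // step)
```

Early epochs therefore see heavily blurred features (large σ, mostly shape information), and later epochs see progressively more texture.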

A sample PyTorch-like code snippet for a two-layer CNN is shown below to illustrate the ease of implementation (Paszke et al., 2017).

# Use the Gaussian kernel after the convolution operation
h = gaussian_kernel(conv1(x))
# Add non-linearity and pooling
h = activation(pool(h))

# Same operation after each conv. layer
h = gaussian_kernel(conv2(h))
h = activation(pool(h))

In the sample pseudo-code, conv1 and conv2 represent trainable convolutional operations on the input, gaussian_kernel is the fixed (non-trainable) Gaussian convolution, pool is a pooling operation, and activation is some non-linear activation of choice, such as ReLU (Nair and Hinton, 2010). Instead of a pooling layer, a normalization layer such as Batch Normalization (Ioffe and Szegedy, 2015) may also be used. Our experiments in Section 5.1 utilize networks with Batch Normalization.

4.3 CNNs as Feature Extractors

A CNN trained on a large-scale dataset, such as ImageNet (Russakovsky et al., 2015), is able to learn useful representations and semantic relationships in natural images. The pretrained network can be used to extract features from an image to make the classification task easier. A better CNN model should be able to extract better features from an unseen image. Since our CNN is able to reason about shape and textural information, it should be able to extract richer representations from a previously unseen dataset.

To evaluate a model on its ability as a feature extractor, we simply freeze the weights of the model and train only a weak classifier on the feature outputs of the new dataset. The network that is able to extract more “meaningful” representations from the new dataset will result in better performance for the weak classifier.
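This evaluation protocol can be sketched in a few lines of NumPy. Here a fixed random projection stands in for the frozen, pretrained CNN backbone (purely for illustration), and only the weights of the weak softmax classifier on top receive gradient updates; all dimensions and hyperparameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor": a fixed random projection standing in for a
# pretrained CNN backbone. Its weights are never updated during training.
W_frozen = rng.normal(size=(20, 64))

def extract_features(x):
    return np.maximum(0.0, x @ W_frozen.T)       # (N, 20); W_frozen stays fixed

def train_weak_classifier(x, y, n_classes=3, lr=0.1, steps=200):
    """Train only a linear softmax classifier on the frozen features."""
    f = extract_features(x)
    W = np.zeros((n_classes, f.shape[1]))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        logits = f @ W.T
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)        # softmax probabilities
        W -= lr * (p - onehot).T @ f / len(y)    # gradient step on W only
    return W
```

A backbone that yields more separable features for the new data lets this weak classifier reach higher accuracy, which is exactly what the comparison measures.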

4.4 Transfer Learning On Different Task

Learning useful features for the task a network is trained on is important, and suggests that the network can extract useful task-related features from the labeled data (x, y). If a trained model performs better on the task it was trained on and also adapts better to a different task, this shows that the learned model is a fundamentally superior feature extractor. Since many computer vision tasks require a pretrained model, we are able to test the ability of our curriculum-augmented CNNs to learn better general features.

A network that utilizes both shape and textural attributes in images will be better at feature extraction, since it can infer more global features that may be useful for task transfer. Using our proposed method, we see that we can learn CNN models that are both better at the task they are trained on and better at task transfer, because of their inductive bias to focus on both shape and textural features in images.

5 Experiments

Through our experiments we aim to evaluate our proposed scheme by asking the following questions:

          CIFAR10        CIFAR100       SVHN
CNN       96.62 ± 0.2    85.83 ± 0.2    57.04 ± 0.2
CBT       97.03 ± 0.2    88.98 ± 0.3    61.41 ± 0.3
Table 1: Top-1 classification accuracy on CIFAR10, CIFAR100 and SVHN for CNNs trained normally (CNN) and CNNs trained using Curriculum By Texture (CBT). All experiments are done with a VGG-16 network.

            CNN (Top-1)    CBT (Top-1)    CNN (Top-5)    CBT (Top-5)
ImageNet    63.45 ± 0.4    66.02 ± 0.5    83.81 ± 0.3    86.26 ± 0.3
Table 2: Top-1 and Top-5 classification accuracy on ImageNet for CNNs trained normally and trained using Curriculum By Texture (CBT). All experiments are done with a VGG-16 network.

          CIFAR10        CIFAR100       SVHN
CNN       69.31 ± 0.2    71.94 ± 0.2    46.10 ± 0.1
CBT       72.04 ± 0.2    73.82 ± 0.3    48.79 ± 0.1
Table 3: Top-1 classification accuracy on CIFAR10, CIFAR100 and SVHN when the CNNs trained on ImageNet normally and CNNs trained using Curriculum By Texture (CBT) are used as feature extractors on a different dataset. The CNN weights are frozen, and the features from the images are used to train a 3-layer Multi-Layer Perceptron with ReLU activations.

       Semantic Segmentation (% mIoU)    Object Detection (% mAP)
CNN    55.3 ± 0.2                        66.8 ± 0.2
CBT    56.9 ± 0.2                        68.0 ± 0.3
Table 4: Results for transfer learning on a different task on the PASCAL VOC dataset. For all semantic segmentation experiments we use a Fully Convolutional Network with a VGG-16 backbone, trained on ImageNet from Section 5.1. For all object detection experiments we use Faster R-CNN with the same VGG-16 backbone.
  • Better task performance: How does the performance vary when a network is trained with or without our curriculum learning method?

  • Better feature extraction: How does the trained network perform when it is used to extract features from a different dataset to train a weak classifier?

  • Better task-transfer learning: How does a trained model perform when it is fine-tuned and evaluated on a different vision task?

Since the primary purpose of our proposed method is to be general in its nature, we also compare our method across standard CNN variants (Section 5.4). We also report negative results in further support of the hypothesis that Curriculum By Texture reduces risks of overfitting on texture information (Section 5.6) and present an ablation study varying the use of an additional Gaussian kernel at different layers (Section 5.7).

We compare our method with the standard training procedure (without curriculum learning) for CNNs. Training a CNN with backpropagation is a very competitive baseline, as it is the prevalent training paradigm (LeCun et al., 1998). In this section we refer to a CNN trained normally as CNN and a CNN trained using Curriculum By Texture as CBT. Unless otherwise noted, for all experiments except ImageNet, we use an initial σ of 2 and decay σ's value every 5 epochs by a fixed factor. For ImageNet we decay the value of σ twice every epoch, by the same factor, since the dataset is significantly larger in size.

5.1 Image Classification

For image classification we evaluate the performance of our curriculum based networks on standard vision datasets. We test our methods on CIFAR10, CIFAR100 (Krizhevsky et al., 2009) and SVHN (Goodfellow et al., 2013). CIFAR10 and CIFAR100 are image datasets with 50,000 samples, each with 10 and 100 classes, respectively. SVHN is a digit recognition task consisting of natural images of the 10 digits collected from “street view”, and it consists of 73,257 images. Furthermore, to prove that our network can scale to larger datasets, we evaluate on the ImageNet dataset (Russakovsky et al., 2015). The ImageNet dataset is a large-scale vision dataset consisting of over 1.2 million images spanning across 1,000 different classes.

Unless otherwise noted, we perform all our experiments using a VGG-16 network (Simonyan and Zisserman, 2014), trained using SGD with the same learning rate, momentum and weight decay as stated in the original paper, without any hyperparameter tuning. We also decay the learning rate in accordance with Simonyan and Zisserman (2014). The task objective L_task for all the image classification experiments is a standard unweighted multi-class cross-entropy loss. For all experiments except ImageNet, we report the mean accuracy over 5 different seeds; for ImageNet, we report the mean accuracy over 2 seeds. All experimental results for CIFAR10, CIFAR100 and SVHN are listed in Table 1, where we report the Top-1 accuracy. The results for ImageNet are tabulated in Table 2, where we report the Top-1 and Top-5 classification accuracy.

We see that using our method, we obtain better results across the three datasets in Table 1. By outperforming regularly trained CNNs on standard vision datasets, we expand on the findings of Geirhos et al. (2018) by showing that CNNs trained even on non-ImageNet datasets are biased towards texture. By augmenting the training process of CNNs, we utilize the low-frequency information in images, and hence directly improve the performance of the networks in a computationally efficient manner. Since the classes in all three datasets have important shape and textural features, this further shows how utilizing that information is useful for training. Another noteworthy observation is that as the image dataset becomes more difficult, CBT networks outperform the baseline CNN by an increasing margin.

Similarly, the ImageNet results further demonstrate that our method scales to large-scale image datasets. Geirhos et al. (2018) noted a similar boost in performance when removing textural bias from classifiers by training on a handcrafted ImageNet variant, Stylized ImageNet, which removes all textural information from images. Instead, we are able to unbias the classifiers directly and achieve a significant boost in performance. By outperforming regular CNNs on Top-1 and Top-5 classification accuracy, CNNs trained using CBT can be seen to scale well to large datasets.

5.2 Feature Extraction

Utilizing the VGG-16 networks trained on ImageNet from Section 5.1, we freeze the CNN weights and use a 3-layer fully connected network with 500 hidden units in each layer and ReLU activations (Nair and Hinton, 2010). In all the experiments, the networks are trained using the Adam optimizer (Kingma and Ba, 2014) with a fixed learning rate and number of epochs. To test the ability of the network as a feature extractor, we test it on the CIFAR10, CIFAR100 (Krizhevsky et al., 2009) and SVHN (Goodfellow et al., 2013) datasets. By freezing the weights of the model, we ensure that the only factor influencing performance is the ability of the CNN to extract features from a dataset different from the one it was originally trained on. Similarly to Section 5.1, the task objective is an unweighted cross-entropy loss. The results of the experiment are summarized in Table 3.

We see a comparable boost in performance even when we simply transfer the weights. The observation that better ImageNet classifiers also transfer better to an unseen dataset has previously been explored by Kornblith et al. (2019), who show a strong correlation between the performance of a model trained on ImageNet and its ability to transfer when used as a feature extractor or after fine-tuning. Our contribution is to show that this improved transfer can be achieved not by changing the network’s architecture (which we fix to VGG), but by adjusting the training procedure. Indeed, Kornblith et al. (2019) note that successful transfer is quite sensitive to the inductive bias of training, with commonly used regularizers actually producing worse transfer performance, despite good performance on ImageNet.

5.3 Transferring to Different Task

Similar to a network's ability to generalize to unseen data, a trained CNN should also be able to adapt to a new task. A network's ability to adapt to a different downstream task is very important in computer vision, as tasks such as semantic segmentation and object detection depend on pretrained large-scale classifiers (typically ImageNet) that are then fine-tuned on the task. In this section we evaluate the ability of CNNs trained with and without CBT to adapt to the new tasks of semantic segmentation and object detection.

For semantic segmentation we use a Fully Convolutional Network (FCN-32) (Long et al., 2015) with an ImageNet-pretrained VGG-16 backbone from Section 5.1. For object detection we utilize a Faster R-CNN model (Ren et al., 2015) with the same pretrained VGG-16 backbone. We train each model with the same training setup as proposed in the respective original papers, use the PASCAL VOC datasets for both experiments, and do not tune any hyperparameters for either set of experiments. For semantic segmentation, L_task is simply the pixel-wise unweighted cross-entropy loss. For object detection, L_task is the sum of a regression (smooth L1) loss for bounding-box prediction and a classification (cross-entropy) loss for classifying the object in the bounding box. We report results for semantic segmentation using mean Intersection over Union (mIoU) and for object detection using mean Average Precision (mAP). The results for both segmentation and detection are in Table 4.

We see that training the networks using CBT outperforms regular CNNs by a good margin on both tasks. The improvement in scores shows how the texture bias in CNNs impairs the learning process even when the evaluated task is completely different from the original task. The negative impact of the learned texture bias, which is likely a consequence of training CNNs on natural images, can be greatly alleviated by training CNNs using our method.

5.4 Network Invariance

            ResNet-18     ResNet-18 + CBT
CIFAR100    62.4 ± 0.3    65.37 ± 0.2
Table 5: Top-1 classification accuracy on CIFAR100 using ResNet-18, trained with and without CBT.

For a method to be considered a general solution, it should be relatively agnostic to the nuances of the underlying network used. Since there are many different modern CNN variants, we evaluate our method on another popular CNN architecture: Residual Networks (ResNets) (He et al., 2016). ResNets use skip connections between blocks of CNN layers to preserve information flow to the deeper layers. Since ResNets have also been shown to be biased towards texture by Geirhos et al. (2018), it is important for our method to be effective for ResNets. For our experiments we evaluate the performance of ResNet-18 on the CIFAR100 dataset (Krizhevsky et al., 2009). The Top-1 classification accuracy is available in Table 5.

Similar to experiments with VGG-16, training ResNet-18 with CBT significantly outperforms CNNs trained without CBT. Since there exists prior work by Geirhos et al. (2018) showing that ResNets, as well as many other CNN-variants, have a strong texture bias it is sensible to assume that CBT will be able to significantly help for other image classifiers. The main purpose of this experiment is to show how any off-the-shelf CNN architecture can likely be improved by adding our method.

5.5 Timing Analysis

       CNN (seconds)    CBT (seconds)
       24.35            25.80
Table 6: Time to perform 100 gradient steps on the CIFAR100 dataset, measured in seconds.

Training a model and performing inference in a time-efficient manner is important to scale deep learning to tougher tasks. We analyze the time it takes for a VGG-16 network to be trained for 100 gradient steps on the CIFAR100 dataset with a constant batch size of 128. To ensure a fair comparison, all experiments are run in PyTorch (Paszke et al., 2017) using the same GPU: an Nvidia GeForce GT 1030.

The results for the timing analysis are summarized in Table 6, which shows that using Gaussian kernels makes training approximately 6% slower compared to training a network normally. The additional cost comes from the convolution with the Gaussian kernel added after each convolutional layer, as shown in the snippet in Section 4.2. The relatively small additional compute cost is offset by the notable performance improvements in previous sections. Furthermore, the time can likely be reduced further with hardware-level optimizations that make the operation faster.

Since the Gaussian kernel layers do not add any trainable parameters, there are no additional memory requirements to train a CNN using CBT. The significant performance improvements and minimal computational overhead show the usefulness and ease of use of our proposed method.

5.6 Validating the Texture Overfitting Hypothesis

       Stylized ImageNet    Transferring to CUB
CNN    80.12 ± 0.1          34.7 ± 0.3
CBT    79.60 ± 0.2          34.6 ± 0.4
Table 7: Experiments on Stylized ImageNet and CUB, for which overfitting on texture image information is not expected.

To show that learning using CBT helps by removing texture overfitting in CNNs, we highlight two negative results. First, we perform experiments with the Stylized ImageNet dataset, which was proposed by Geirhos et al. (2018) to remove texture cues from images. If networks perform better by removing texture information from images, then both the CNN and CBT experiments should ideally give the same result on this dataset. To train the model on Stylized ImageNet, we first sample 40 different ImageNet classes and use the linked code to generate the stylized dataset. The 40 classes were sampled at random to ensure that they are not inherently biased in any manner. The main point of this experiment is to show what happens when we train a network without texture information; we are not proposing a benchmark task. We then train the VGG-16 network on the 40-way classification task using the same hyperparameters as before.

The results in Table 7 match our hypothesis and show that CBT and CNNs have nearly identical performance on the dataset. If CBT were helping the learning of CNNs in some other manner, then it is likely that there would be a performance increase compared to CNNs. The lack of a performance increase suggests that, in the absence of textural information, CBT is not able to aid learning.

Another interesting negative result comes from using the ImageNet-pretrained VGG-16 network from Section 5.1 for feature extraction on the Caltech-UCSD Birds (CUB) dataset (Welinder et al., 2010). CUB is an image dataset used for fine-grained classification of different species of birds. We train a classifier on CUB in the same manner as Section 5.3. Using the VGG-16 feature extractor, Table 7 shows that the network trained with CBT is unable to outperform the network trained without it. We believe this negative result is again due to the nature of the dataset: since CUB requires fine-grained classification of bird species, texture information is more crucial to the task. As the shape of birds is largely similar across species, adding a shape bias should not help the performance of the model. Accordingly, we observe that the CNN trained with the texture bias performs on par with a CNN trained without it.

Interestingly, the texture-biased CNNs are also unable to clearly outperform CNNs trained using CBT. This suggests that Curriculum By Texture does not simply remove useful texture information, but rather allows better control over potential overfitting on that information. This is quite valuable, as it allows the same framework to support situations where texture cues are important.

5.7 Ablation Study

Kernel layer(s)    None     First    Second   Third    All
Accuracy (%)       57.04    60.03    59.87    59.88    61.41
Table 8: Ablation study applying a Gaussian kernel only after specific layers of a simple 3-layer CNN with Max-Pooling and ReLU activations. "All" corresponds to the proposed model, which uses the kernel layer after each convolutional layer.

Another important factor to investigate is how applying a Gaussian kernel after each layer changes the performance of the CNN. For this experiment we use a simple 3-layer CNN with Max-Pooling and ReLU activations. We apply the kernel after each layer individually and evaluate performance on the CIFAR-10 dataset. For comparison, we report a network with no Gaussian kernel layers, and a network with a Gaussian kernel layer after every convolutional layer (the proposed model). The results are presented in Table 8.

The best performing model is clearly the proposed one, which applies Gaussian kernels after every layer. However, applying the kernel after any single layer already gives a notable boost over the baseline. The differences between individual layers are relatively small, but the results suggest that adding the kernel to the first layer helps the most, while adding it to the second or third layer yields similar performance.

This insight is useful for time-critical training or inference: the models can be trained with Gaussian kernel layers on only the first few CNN layers. If a trade-off between accuracy and training or inference time is required, this understanding can help find the optimal balance between the two.
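This trade-off can be encoded as a decreasing σ schedule plus a choice of which layers receive the kernel. A hedged sketch follows; the linear schedule, its endpoint values, and the helper names are our assumptions for illustration, not prescribed by the paper:

```python
def sigma_schedule(epoch, total_epochs, sigma_start=2.0, sigma_end=0.25):
    """Linearly anneal the Gaussian std, so that less texture is
    blurred away as training progresses (the curriculum)."""
    t = min(epoch / max(total_epochs - 1, 1), 1.0)
    return sigma_start + t * (sigma_end - sigma_start)

def layers_to_blur(n_layers, budget):
    """Hypothetical helper: under a compute budget in [0, 1], apply the
    kernel only to the first k layers, reflecting the ablation finding
    that the first layer contributes the largest single-layer gain."""
    k = int(round(budget * n_layers))
    return list(range(k))
```

For example, `layers_to_blur(3, 1.0)` blurs all three layers (the proposed model), while a budget of roughly one third restricts the kernel to the first layer only.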

6 Conclusion

In this paper we describe a simple yet effective technique to reduce the tendency of CNNs to overfit to textural information in images, by convolving the outputs of CNN layers with a Gaussian kernel. By controlling the rate at which the network sees texture, we improve the shape bias of the networks and thereby the performance of the models. Using our technique, Curriculum By Texture, we learn CNNs that perform better on image classification, generalize better when used as feature extractors on unseen datasets, and adapt better to a different downstream task. We also show experimentally that our proposed method works across different CNN architectures and with techniques such as Batch Normalization and pooling layers, without adding significant computational overhead. We also extend the knowledge on texture bias by showing that it is not only ImageNet classifiers that share the bias; models trained on other natural image datasets have similar tendencies. We speculate that this phenomenon is likely due to the nuances of learning with natural images. It can be further investigated by training CNNs on binarized images, or in other contexts where texture is more important than shape, such as the fine-grained classification in Section 5.6.

Our work motivates future theoretical work on the fundamental reason why the texture bias exists in the first place. Theoretical insight into texture overfitting in CNNs may help develop other methods that overcome the bias by improving upon our work.

Learning kernel layers that can automatically set the value of σ is also an interesting extension of our proposed method, since it reduces the number of hyperparameters that need to be tuned. Other future extensions of this work may be applied to unsupervised learning methods such as GANs (Goodfellow et al., 2014) and VAEs (Kingma and Welling, 2013), where both models use deep convolutional architectures. Lastly, applying a similar technique to Self-Attention might also be of interest to the community, as the learning bias of Self-Attention is not completely understood (Vaswani et al., 2017).

7 Acknowledgements

We would like to thank Anirudh Goyal for insightful discussions and helpful feedback on the draft. We would also like to thank Jiajun Wu for insightful initial discussions. We acknowledge the funding from the Canada CIFAR AI Chairs program. Finally, we would like to acknowledge Nvidia for donating DGX-1, and Vector Institute for providing resources for this research.


  • L. Alvarez, J. Sánchez, and J. Weickert (1999) A scale-space approach to nonlocal optical flow calculations. In International conference on scale-space theories in computer vision, pp. 235–246. Cited by: §3.1.
  • J. L. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §3.2.
  • J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda (1986) Uniqueness of the gaussian kernel for scale-space filtering. IEEE Transactions on Pattern Analysis & Machine Intelligence (1), pp. 26–33. Cited by: §1.
  • Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. Cited by: §3.3.
  • T. Blaschke and G. J. Hay (2001) Object-oriented image analysis and scale-space: theory and methods for modeling and evaluating multiscale landscape structure. International Archives of Photogrammetry and Remote Sensing 34 (4), pp. 22–29. Cited by: §3.1.
  • Y. Boureau, J. Ponce, and Y. LeCun (2010) A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 111–118. Cited by: §2.2.
  • G. Deng and L. Cahill (1993) An adaptive gaussian filter for noise reduction and edge detection. In 1993 IEEE conference record nuclear science symposium and medical imaging conference, pp. 1615–1619. Cited by: §3.1.
  • R. Duits, L. Florack, J. De Graaf, and B. ter Haar Romeny (2004) On the axioms of scale space theory. Journal of Mathematical Imaging and Vision 20 (3), pp. 267–298. Cited by: §3.1.
  • C. Florensa, D. Held, M. Wulfmeier, M. Zhang, and P. Abbeel (2017) Reverse curriculum generation for reinforcement learning. arXiv preprint arXiv:1707.05300. Cited by: §3.3.
  • R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel (2018) ImageNet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231. Cited by: Curriculum By Texture, §1, §4.2, §5.1, §5.1, §5.4, §5.4, §5.6.
  • R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §1, §3.4.
  • X. Glorot, A. Bordes, and Y. Bengio (2011) Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 315–323. Cited by: §2.2.
  • I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet (2013) Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082. Cited by: §5.1, §5.2.
  • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §6.
  • A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu (2017) Automated curriculum learning for neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1311–1320. Cited by: §3.3.
  • M. Guillaumin and V. Ferrari (2012) Large-scale knowledge transfer for object localization in imagenet. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3202–3209. Cited by: §3.4.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1, §3.2, §5.4.
  • J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros, and T. Darrell (2017) Cycada: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213. Cited by: §3.4.
  • M. Huh, P. Agrawal, and A. A. Efros (2016) What makes imagenet good for transfer learning?. arXiv preprint arXiv:1608.08614. Cited by: §3.4.
  • S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §3.2, §4.2.
  • S. Ioffe (2017) Batch renormalization: towards reducing minibatch dependence in batch-normalized models. In Advances in neural information processing systems, pp. 1945–1953. Cited by: §3.2.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.2.
  • D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §6.
  • S. Kornblith, J. Shlens, and Q. V. Le (2019) Do better imagenet models transfer better?. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2661–2671. Cited by: §3.4, §5.2.
  • A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §5.1, §5.2, §5.4.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1, §3.2.
  • Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §1, §2.2, §3.2, §5.
  • T. Lindeberg (1994) Scale-space theory: a basic tool for analyzing structures at different scales. Journal of applied statistics 21 (1-2), pp. 225–270. Cited by: §3.1.
  • T. Lindeberg (2013) Scale-space theory in computer vision. Vol. 256, Springer Science & Business Media. Cited by: §3.1.
  • J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440. Cited by: §1, §3.4, §5.3.
  • T. Matiisen, A. Oliver, T. Cohen, and J. Schulman (2019) Teacher-student curriculum learning. IEEE transactions on neural networks and learning systems. Cited by: §3.3.
  • V. Nair and G. E. Hinton (2010) Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807–814. Cited by: §2.2, §4.2, §5.2.
  • A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §4.2, §5.5.
  • S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §1, §3.4, §5.3.
  • O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §1.
  • O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. (2015) Imagenet large scale visual recognition challenge. International journal of computer vision 115 (3), pp. 211–252. Cited by: §3.4, §4.3, §5.1.
  • D. Scherer, A. Müller, and S. Behnke (2010) Evaluation of pooling operations in convolutional architectures for object recognition. In International conference on artificial neural networks, pp. 92–101. Cited by: §3.2.
  • D. Shin, R. Park, S. Yang, and J. Jung (2005) Block-based noise estimation using adaptive gaussian filtering. IEEE Transactions on Consumer Electronics 51 (1), pp. 218–226. Cited by: §3.1.
  • K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §3.2, §5.1.
  • J. Sporring, M. Nielsen, L. Florack, and P. Johansen (2013) Gaussian scale-space theory. Vol. 8, Springer Science & Business Media. Cited by: §3.1.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov (2014) Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §3.2.
  • S. Sukhbaatar, Z. Lin, I. Kostrikov, G. Synnaeve, A. Szlam, and R. Fergus (2017) Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407. Cited by: §3.3.
  • C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9. Cited by: §3.2.
  • C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §3.2.
  • E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko (2015) Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4068–4076. Cited by: §3.4.
  • E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176. Cited by: §3.4.
  • D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §3.2.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §6.
  • P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona (2010) Caltech-ucsd birds 200. Cited by: §5.6.
  • Y. Wu and K. He (2018) Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19. Cited by: §3.2.
  • B. Xiao, H. Wu, and Y. Wei (2018) Simple baselines for human pose estimation and tracking. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 466–481. Cited by: §1.
  • I. T. Young and L. J. Van Vliet (1995) Recursive implementation of the gaussian filter. Signal processing 44 (2), pp. 139–151. Cited by: §3.1.
  • F. Yu and V. Koltun (2015) Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122. Cited by: §3.2.
  • W. Zaremba and I. Sutskever (2014) Learning to execute. arXiv preprint arXiv:1410.4615. Cited by: §3.3.
  • B. Zhong and W. Liao (2007) Direct curvature scale space: theory and corner detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (3), pp. 508–512. Cited by: §3.1.