Fixing the train-test resolution discrepancy: FixEfficientNet

03/18/2020, by Hugo Touvron, et al.

This note complements the paper "Fixing the train-test resolution discrepancy" that introduced the FixRes method. First, we show that this strategy is advantageously combined with recent training recipes from the literature. Most importantly, we provide new results for the EfficientNet architecture. The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters. For instance, our FixEfficientNet-B0 trained without additional training data achieves 79.3% top-1 accuracy on ImageNet: a +0.5% absolute improvement over the Noisy student EfficientNet-B0 trained with 300M unlabeled images, and +1.7% over the EfficientNet-B0 trained with adversarial examples. An EfficientNet-L2 pre-trained with weak supervision on 300M unlabeled images and further optimized with FixRes achieves 88.5% top-1 accuracy (top-5: 98.7%) on ImageNet with a single crop.




1 Introduction

In order to obtain the best possible performance from convolutional neural networks (CNNs), the training and testing data distributions should match. However, in image recognition, data pre-processing procedures are often different for training and testing: the most popular practice is to extract a rectangle with random coordinates from the image to artificially increase the amount of training data. This Region of Classification (RoC) is then resized to obtain an image, or crop, of a fixed size (in pixels) that is fed to the CNN. At test time, the RoC is instead set to a square covering the central part of the image, which results in the extraction of a center crop. Thus, while the crops extracted at training and test time have the same size, they arise from different RoCs, which skews the distribution of data seen by the CNN.
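The two RoC sampling procedures can be sketched in pure Python. This is an illustrative re-implementation of the standard random-resized-crop (training) and center-crop (testing) logic, not the authors' exact code; the `scale` and `ratio` defaults mirror common practice and are assumptions here.

```python
import math
import random

def sample_train_roc(width, height, scale=(0.08, 1.0), ratio=(3 / 4, 4 / 3)):
    """Training-time RoC: a rectangle with random area, aspect ratio and position."""
    area = width * height
    for _ in range(10):  # rejection sampling, as in common implementations
        target_area = random.uniform(*scale) * area
        aspect = math.exp(random.uniform(math.log(ratio[0]), math.log(ratio[1])))
        w = int(round(math.sqrt(target_area * aspect)))
        h = int(round(math.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            x = random.randint(0, width - w)
            y = random.randint(0, height - h)
            return x, y, w, h
    return center_roc(width, height)  # fallback if sampling keeps failing

def center_roc(width, height):
    """Test-time RoC: a square covering the central part of the image."""
    s = min(width, height)
    return (width - s) // 2, (height - s) // 2, s, s
```

Both RoCs are subsequently resized to the same fixed crop size before being fed to the CNN, which is precisely why the two distributions differ.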

Figure 1: Improvement brought by FixRes (in bold) to several popular architectures from the literature. Our FixEfficientNet (orange curve) surpasses all EfficientNet models, in particular the models trained with Noisy student (red curve) and those trained with adversarial examples (blue curve). The sws models correspond to the models of [Yalniz2019BillionscaleSL].

Over the years, training and testing pre-processing procedures have evolved, but so far they have been optimized separately [Ekin2018AutoAugment]. Touvron et al. show [Touvron2019FixRes] that this separate optimization has a detrimental effect on the test-time performance of models. They address this problem with the FixRes method, which jointly optimizes the choice of resolutions and scales at training and test time, while keeping the same RoC sampling.

We apply this method to the recent EfficientNet [tan2019efficientnet] architecture, which offers an excellent compromise between the number of parameters and performance. This short note shows that properly combining FixRes and EfficientNet significantly improves the current state of the art [tan2019efficientnet]. Notably,

  • We report the best performance without external data on ImageNet (top1: 85.7%);

  • We report the best accuracy (top1: 88.5%) with external data on ImageNet;

  • We report several state-of-the-art compromises between accuracy and number of parameters, see Figure 1.

2 Training with FixRes: updates

Recent research in image classification tends towards larger networks and higher-resolution images [Yanping2018GPipe, mahajan2018exploring, Xie2019SelftrainingWN]. For instance, the state of the art on the ImageNet ILSVRC 2012 benchmark is currently held by the EfficientNet-L2 [Xie2019SelftrainingWN] architecture with 480M parameters, trained on 800×800 images. Similarly, the state-of-the-art model learned from scratch is currently EfficientNet-B8 [Xie2019AdversarialEI] with 88M parameters, trained on 672×672 images. In this note, we focus on the EfficientNet architecture [tan2019efficientnet] due to its good accuracy/cost trade-off and its popularity.

Data augmentation

is routinely employed at training time to improve model generalization and reduce overfitting. In this note, we use the same augmentation setup as in the original FixRes paper [Touvron2019FixRes]. The only addition is label smoothing, which underlines the complementarity of the two techniques.


FixRes

is a very simple method that amounts to re-training the classifier or a few top layers at the target resolution. Therefore, it has several advantages: (1) it is computationally cheap because the back-propagation does not need to be performed on the whole network; (2) it can be applied to any CNN architecture and is complementary with the other tricks mentioned above; (3) it can be applied to a network that comes from an unknown, possibly closed source, that is selected for its performance on low-resolution images.
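The fine-tuning setup can be sketched in PyTorch as follows. This is a minimal illustration of the "re-train only the classifier" idea, not the authors' released code; `classifier_name` is an assumption that depends on the model definition (e.g. `"fc"` for torchvision ResNets, `"classifier"` for common EfficientNet ports).

```python
import torch
import torch.nn as nn

def fixres_finetune_setup(model: nn.Module, classifier_name: str = "fc"):
    """Freeze the whole network except the final classifier, as in FixRes fine-tuning.

    The returned optimizer only sees the classifier's parameters, so
    back-propagation through the frozen backbone can be skipped entirely.
    The fine-tuning itself is then run on crops at the (higher) target
    test resolution.
    """
    for p in model.parameters():
        p.requires_grad = False  # freeze everything by default
    classifier = getattr(model, classifier_name)
    for p in classifier.parameters():
        p.requires_grad = True   # re-train only the top layer(s)
    # Hyperparameters here are illustrative, not the paper's exact schedule.
    return torch.optim.SGD(classifier.parameters(), lr=1e-3, momentum=0.9)
```

In practice one would also re-estimate the batch-norm statistics at the new resolution, as discussed in the original FixRes paper.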

Therefore, it is easy and natural to experiment with FixRes on the current state-of-the-art EfficientNet CNN. This is what we do in the next section.

3 Experiments

We experiment on the ImageNet-2012 benchmark [Russakovsky2015ImageNet12], reporting validation performance as top-1 accuracy.

3.1 Architectures

In this note we use the EfficientNet [tan2019efficientnet] architecture, mainly the two versions that give the best performance: EfficientNet trained with adversarial examples [Xie2019AdversarialEI], and EfficientNet trained with Noisy student [Xie2019SelftrainingWN], which is pre-trained in a weakly-supervised fashion on 300 million unlabeled images.

We use the pretrained EfficientNet models from rwightman's GitHub repository [pretrainedEffnet]. These models have been converted from the original TensorFlow implementation to PyTorch.

3.2 Training

We mostly follow the FixRes [Touvron2019FixRes] training protocol. The only difference is that we combine the FixRes data-augmentation with label smoothing during the fine-tuning stage.
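Label smoothing itself is straightforward: each one-hot target is mixed with the uniform distribution over classes. The NumPy sketch below is illustrative; the smoothing coefficient `eps=0.1` is a common default and an assumption here, not a value stated in this note.

```python
import numpy as np

def smooth_labels(targets, num_classes, eps=0.1):
    """Turn integer class labels into smoothed target distributions:
    (1 - eps) on the true class, eps spread uniformly over all classes."""
    onehot = np.eye(num_classes)[np.asarray(targets)]
    return (1.0 - eps) * onehot + eps / num_classes

def cross_entropy(logits, smoothed_targets):
    """Cross-entropy between softmax(logits) and the smoothed targets."""
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -(smoothed_targets * log_probs).sum(axis=-1).mean()
```

With `eps=0.1` and 1000 ImageNet classes, the true class receives probability 0.9001 and every other class 0.0001, which discourages over-confident predictions during fine-tuning.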

| Model | #params | Train res | Noisy Student test res | Noisy Student Top-1 (%) | Noisy Student Top-5 (%) | FixEfficientNet test res | FixEfficientNet Top-1 (%) | FixEfficientNet Top-5 (%) |
|-------|---------|-----------|------------------------|-------------------------|-------------------------|--------------------------|---------------------------|---------------------------|
| B0 | 5.3M | 224 | 224 | 78.8 | 94.5 | 320 | 80.2 | 95.4 |
| B1 | 7.8M | 240 | 240 | 81.5 | 95.8 | 384 | 82.6 | 96.5 |
| B2 | 9.2M | 260 | 260 | 82.4 | 96.3 | 420 | 83.6 | 96.9 |
| B3 | 12M | 300 | 300 | 84.1 | 96.9 | 472 | 85.0 | 97.4 |
| B4 | 19M | 380 | 380 | 85.3 | 97.5 | 472 | 85.9 | 97.7 |
| B5 | 30M | 456 | 456 | 86.1 | 97.8 | 576 | 86.4 | 97.9 |
| B6 | 43M | 528 | 528 | 86.4 | 97.9 | 680 | 86.7 | 98.0 |
| B7 | 66M | 600 | 600 | 86.9 | 98.1 | 632 | 87.1 | 98.2 |
| L2 | 480M | 475 | 800 | 88.4 | 98.7 | 600 | 88.5 | 98.7 |

Table 1: State of the art on ImageNet with models pre-trained with Noisy student [Xie2019SelftrainingWN] on 300M unlabeled images (single-crop evaluation).
| Model | #params | Train res | AdvProp test res | AdvProp Top-1 (%) | AdvProp Top-5 (%) | FixEfficientNet test res | FixEfficientNet Top-1 (%) | FixEfficientNet Top-5 (%) |
|-------|---------|-----------|------------------|-------------------|-------------------|--------------------------|---------------------------|---------------------------|
| B0 | 5.3M | 224 | 224 | 77.6 | 93.3 | 320 | 79.3 | 94.6 |
| B1 | 7.8M | 240 | 240 | 79.6 | 94.3 | 384 | 81.3 | 95.7 |
| B2 | 9.2M | 260 | 260 | 80.5 | 95.0 | 420 | 82.0 | 96.0 |
| B3 | 12M | 300 | 300 | 81.9 | 95.6 | 472 | 83.0 | 96.4 |
| B4 | 19M | 380 | 380 | 83.3 | 96.4 | 512 | 84.0 | 97.0 |
| B5 | 30M | 456 | 456 | 84.3 | 97.0 | 576 | 84.7 | 97.2 |
| B6 | 43M | 528 | 528 | 84.8 | 97.1 | 576 | 84.9 | 97.3 |
| B7 | 66M | 600 | 600 | 85.2 | 97.2 | 632 | 85.3 | 97.4 |
| B8 | 87.4M | 672 | 672 | 85.5 | 97.3 | 800 | 85.7 | 97.6 |

Table 2: State of the art on ImageNet without external data (single-crop evaluation). We compare our results with those of [Xie2019AdversarialEI]. None of these models uses extra training data.

3.3 Comparison with the state of the art

Table 1 and Table 2 compare our results with those of EfficientNet reported in the literature. All our FixEfficientNets outperform the corresponding EfficientNet (see Figure 1). As a result and to the best of our knowledge, our FixEfficientNet-L2 surpasses all other models available in the literature. It achieves 88.5% Top-1 accuracy and 98.7% Top-5 accuracy on the ImageNet-2012 validation benchmark [Russakovsky2015ImageNet12].

4 Conclusion

FixRes is a method that can improve the performance of any model. Because it is applied after conventional training, it is very flexible and can easily be integrated into any existing training pipeline. For example, the authors of [Xie2019SelftrainingWN], although their model is no longer state of the art on ImageNet, use FixRes to obtain their best performance.

We provide an open-source implementation of our method, which is available at