Impact of Aliasing on Generalization in Deep Convolutional Networks

08/07/2021
by Cristina Vasconcelos, et al.

We investigate the impact of aliasing on generalization in Deep Convolutional Networks and show that data augmentation schemes alone are unable to prevent it due to structural limitations in widely used architectures. Drawing insights from frequency analysis theory, we take a closer look at the ResNet and EfficientNet architectures and review the trade-off between aliasing and information loss in each of their major components. We show how to mitigate aliasing by inserting non-trainable low-pass filters at key locations, particularly where networks lack the capacity to learn them. These simple architectural changes lead to substantial improvements in generalization under i.i.d. conditions and, even more so, under out-of-distribution conditions, such as image classification under natural corruptions on ImageNet-C [11] and few-shot learning on Meta-Dataset [26]. State-of-the-art results are achieved on both datasets without introducing additional trainable parameters and using the default hyper-parameters of open source codebases.
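To make the core idea concrete, the sketch below shows one common way to insert a non-trainable low-pass filter before a strided downsampling step. This is an illustrative PyTorch example under assumed choices (a fixed 3x3 binomial kernel applied depthwise, stride 2), not the authors' exact implementation or filter placement.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowPassDownsample(nn.Module):
    """Depthwise blur with a fixed (non-trainable) binomial kernel, then subsampling."""
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        # Separable 3x3 low-pass kernel from the 1D binomial [1, 2, 1], normalized to sum to 1.
        k1d = torch.tensor([1.0, 2.0, 1.0])
        k2d = torch.outer(k1d, k1d)
        k2d = k2d / k2d.sum()
        # One copy of the kernel per channel (depthwise filtering); stored as a buffer,
        # so it is not updated during training.
        self.register_buffer("kernel", k2d.expand(channels, 1, 3, 3).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-pass filter first (attenuating frequencies above the new Nyquist limit),
        # then subsample with the given stride.
        return F.conv2d(x, self.kernel, stride=self.stride, padding=1, groups=x.shape[1])

if __name__ == "__main__":
    # Usage: replace a bare stride-2 operation with blur + subsample.
    x = torch.randn(1, 64, 56, 56)
    blurpool = LowPassDownsample(channels=64, stride=2)
    print(blurpool(x).shape)  # torch.Size([1, 64, 28, 28])
```

Because the filter has no trainable parameters, a module like this can be dropped into existing downsampling paths without changing the parameter count of the network.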


