A Too-Good-to-be-True Prior to Reduce Shortcut Reliance

02/12/2021
by Nikolay Dagaev et al.

Despite their impressive performance in object recognition and other tasks under standard testing conditions, deep convolutional neural networks (DCNNs) often fail to generalize to out-of-distribution (o.o.d.) samples. One cause of this shortcoming is that modern architectures tend to rely on "shortcuts": superficial features that correlate with categories without capturing the deeper invariants that hold across contexts. Real-world concepts often have a complex structure that varies superficially across contexts, so the solution that looks most intuitive and promising in one context may fail to generalize to others. One potential way to improve o.o.d. generalization is to assume that simple solutions are unlikely to be valid across contexts and to downweight them, which we refer to as the too-good-to-be-true prior. We implement this inductive bias in a two-stage approach that uses predictions from a low-capacity network (LCN) to inform the training of a high-capacity network (HCN). Since its shallow architecture restricts the LCN to surface relationships, which include shortcuts, we downweight, when training the HCN, the items that the LCN can master, thereby encouraging the HCN to rely on deeper invariant features that should generalize broadly. Using a modified version of the CIFAR-10 dataset in which we introduced shortcuts, we found that the two-stage LCN-HCN approach reduced reliance on shortcuts and facilitated o.o.d. generalization.
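For a concrete picture of the two-stage procedure, here is a minimal PyTorch sketch. The architectures (LowCapacityNet, HighCapacityNet), the helper names (make_lcn_weight_fn, train_epoch), and the specific weighting rule (scaling each item's loss by one minus the LCN's confidence in the true class) are illustrative assumptions; the abstract does not specify the paper's exact networks or reweighting scheme.

```python
# A minimal PyTorch sketch of the two-stage LCN-HCN idea described above.
# Everything here is illustrative: the architectures and the weighting rule
# (1 minus the LCN's confidence in the true class) are assumptions, not the
# paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LowCapacityNet(nn.Module):
    """Shallow net: by design it can only pick up surface regularities."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


class HighCapacityNet(nn.Module):
    """Deeper net trained on the re-weighted data."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def make_lcn_weight_fn(lcn: nn.Module):
    """Items the LCN already masters get small weights for the HCN."""
    def weight_fn(x, y):
        with torch.no_grad():
            probs = F.softmax(lcn(x), dim=1)
            p_true = probs.gather(1, y.unsqueeze(1)).squeeze(1)
        return 1.0 - p_true  # confident LCN -> likely shortcut -> downweight
    return weight_fn


def train_epoch(model, loader, optimizer, weight_fn=None, device="cpu"):
    """One training pass; per-example losses are optionally re-weighted."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x), y, reduction="none")
        if weight_fn is not None:
            loss = loss * weight_fn(x, y)
        optimizer.zero_grad()
        loss.mean().backward()
        optimizer.step()


# Stage 1: fit the LCN on the raw training data (it will latch onto shortcuts).
# Stage 2: fit the HCN with losses scaled by make_lcn_weight_fn(lcn),
# so items the LCN finds easy contribute little to the HCN's training.
```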

Related research

05/30/2018
Why do deep convolutional networks generalize so poorly to small image transformations?
Deep convolutional network architectures are often assumed to guarantee ...

03/18/2021
The Low-Rank Simplicity Bias in Deep Networks
Modern deep neural networks are highly over-parameterized compared to th...

07/31/2017
Capacity limitations of visual search in deep convolutional neural network
Deep convolutional neural networks follow roughly the architecture of bi...

02/20/2018
Do deep nets really need weight decay and dropout?
The impressive success of modern deep neural networks on computer vision...

05/22/2018
Deep learning generalizes because the parameter-function map is biased towards simple functions
Deep neural networks generalize remarkably well without explicit regular...

07/05/2022
Neural Networks and the Chomsky Hierarchy
Reliable generalization lies at the heart of safe ML and AI. However, un...

03/07/2021
Learn to Differ: Sim2Real Small Defection Segmentation Network
Recent studies on deep-learning-based small defection segmentation appro...
