InBiaseD: Inductive Bias Distillation to Improve Generalization and Robustness through Shape-awareness

06/12/2022
by   Shruthi Gowda, et al.

Humans rely less on spurious correlations and trivial cues, such as texture, than deep neural networks do, which leads to better generalization and robustness. This can be attributed to prior knowledge, or the high-level cognitive inductive bias, present in the brain. Introducing meaningful inductive bias into neural networks can therefore help them learn more generic, high-level representations and alleviate some of these shortcomings. We propose InBiaseD to distill inductive bias and bring shape-awareness to neural networks. Our method includes a bias alignment objective that enforces the networks to learn more generic representations that are less vulnerable to unintended cues in the data, resulting in improved generalization performance. InBiaseD is less susceptible to shortcut learning and also exhibits lower texture bias. The better representations also aid in improving robustness to adversarial attacks; we therefore plug InBiaseD seamlessly into existing adversarial training schemes and show a better trade-off between generalization and robustness.
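As a rough illustration (not the paper's actual implementation), a bias-alignment objective of this kind can be sketched as a distillation-style loss that pulls the predictions of a standard RGB network toward those of a shape-aware network (e.g. one trained on edge maps). The KL-based formulation, the temperature, and the network roles below are assumptions for illustration only:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bias_alignment_loss(logits_rgb, logits_shape, temperature=2.0):
    """Hypothetical sketch of a bias-alignment term:
    KL(p_shape || p_rgb), encouraging the RGB network's predictive
    distribution to match the shape-aware network's."""
    p_shape = softmax(logits_shape, temperature)
    p_rgb = softmax(logits_rgb, temperature)
    kl = np.sum(p_shape * (np.log(p_shape) - np.log(p_rgb)), axis=-1)
    return float(np.mean(kl))

# Toy usage: identical predictions give zero alignment loss,
# diverging predictions give a positive loss.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
print(bias_alignment_loss(logits, logits))            # 0.0
print(bias_alignment_loss(logits, logits[::-1]) > 0)  # True
```

In practice such a term would be added to each network's task loss, so both networks are trained jointly while the alignment term transfers the shape bias.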


Related research

- 11/20/2019, Exploring the Origins and Prevalence of Texture Bias in Convolutional Neural Networks: Recent work has indicated that, unlike humans, ImageNet-trained CNNs ten...
- 02/16/2022, A Developmentally-Inspired Examination of Shape versus Texture Bias in Machines: Early in development, children learn to extend novel category labels to ...
- 05/30/2022, Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning: Recurrent neural networks have a strong inductive bias towards learning ...
- 05/02/2019, Full-Jacobian Representation of Neural Networks: Non-linear functions such as neural networks can be locally approximated...
- 11/23/2022, BiasBed – Rigorous Texture Bias Evaluation: The well-documented presence of texture bias in modern convolutional neu...
- 12/04/2022, Recognizing Object by Components with Human Prior Knowledge Enhances Adversarial Robustness of Deep Neural Networks: Adversarial attacks can easily fool object recognition systems based on ...
- 02/08/2018, Learning Inductive Biases with Simple Neural Networks: People use rich prior knowledge about the world in order to efficiently ...
