Impact of Noise on Calibration and Generalisation of Neural Networks

06/30/2023
by Martin Ferianc et al.

Noise injection and data augmentation strategies have been effective in enhancing the generalisation and robustness of neural networks (NNs). Certain types of noise, such as label smoothing and MixUp, have also been shown to improve calibration. Since noise can be added at various stages of an NN's training, this motivates the question of when and where noise is most effective. We study a variety of noise types to determine how much they improve calibration and generalisation, and under what conditions. More specifically, we evaluate various noise-injection strategies in both in-distribution (ID) and out-of-distribution (OOD) scenarios. The findings highlight that activation noise was the most transferable and effective in improving generalisation, while input augmentation noise was prominent in improving calibration on OOD data but not necessarily on ID data.
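As a minimal sketch (not the authors' implementation), the PyTorch snippet below illustrates two of the injection points discussed in the abstract: Gaussian input augmentation noise applied to the raw inputs and Gaussian activation noise applied after a hidden layer, with label smoothing as a loss-side noise type. The architecture, noise scales (`sigma`), and the `GaussianNoise` module name are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    """Injects zero-mean Gaussian noise during training; identity at eval time."""

    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and self.sigma > 0:
            return x + self.sigma * torch.randn_like(x)
        return x

# Hypothetical classifier combining two injection points:
# noise on the inputs (input augmentation noise) and noise
# after a hidden non-linearity (activation noise).
model = nn.Sequential(
    GaussianNoise(sigma=0.1),   # input augmentation noise
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    GaussianNoise(sigma=0.05),  # activation noise
    nn.Linear(256, 10),
)

# Label smoothing, another noise type mentioned above, is a loss-side change.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

x = torch.randn(32, 1, 28, 28)   # dummy batch of 28x28 single-channel images
y = torch.randint(0, 10, (32,))  # dummy integer class labels
loss = criterion(model(x), y)
loss.backward()
```

Placing such noise modules at different depths is one way to pose the paper's question of where noise helps most; which placement actually improves generalisation or calibration is the paper's empirical finding, not a property of this sketch.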


