The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks

04/13/2021
by Karim Huesmann, et al.

Overfitting is one of the fundamental challenges when training convolutional neural networks and is usually identified by diverging training and test losses. The underlying dynamics of how the flow of activations induces overfitting are, however, poorly understood. In this study we introduce a perplexity-based sparsity definition to derive and visualise layer-wise activation measures. These novel explainable-AI strategies reveal a surprising relationship between activation sparsity and overfitting, namely an increase in sparsity in the feature-extraction layers shortly before the test loss starts rising. This tendency is preserved across network architectures and regularisation strategies, so that our measures can be used as a reliable indicator of overfitting while decoupling the network's generalisation capabilities from its loss-based definition. Moreover, our differentiable sparsity formulation can be used to explicitly penalise the emergence of sparsity during training, so that the impact of reduced sparsity on overfitting can be studied in real time. Applying this penalty and analysing activation sparsity for well-known regularisers and common network architectures supports the hypothesis that reduced activation sparsity can effectively improve generalisation and classification performance. In line with other recent work on this topic, our methods reveal novel insights into the contradictory concepts of activation sparsity and network capacity by demonstrating that dense activations can enable discriminative feature learning while efficiently exploiting the capacity of deep models without suffering from overfitting, even when trained excessively.
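To make the idea of a differentiable, perplexity-based sparsity measure concrete, the sketch below illustrates one plausible formulation: a layer's non-negative activations are normalised into a probability distribution, their perplexity is computed via the Shannon entropy, and the result is rescaled into a sparsity score that could be added to the training loss as a penalty against emerging sparsity. This is a minimal illustration under these assumptions, not the paper's implementation; the function name perplexity_sparsity and the penalty weight lam are hypothetical.

```python
import torch

def perplexity_sparsity(activations: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Per-sample sparsity score in [0, 1]: 0 = perfectly dense, 1 = maximally sparse.

    activations: (batch, units) non-negative layer outputs (e.g. after ReLU).
    For convolutional feature maps, flatten the channel/spatial dimensions first.
    """
    # Normalise each sample's activations into a probability distribution.
    p = activations / (activations.sum(dim=1, keepdim=True) + eps)
    # Shannon entropy per sample and its exponential, the perplexity.
    entropy = -(p * torch.log(p + eps)).sum(dim=1)
    perplexity = torch.exp(entropy)
    n_units = activations.shape[1]
    # Perplexity equals n_units for uniform (dense) activations and 1 when a
    # single unit carries everything, so rescale it into a sparsity score.
    return 1.0 - (perplexity - 1.0) / max(n_units - 1, 1)

# Hypothetical usage as an explicit penalty added to the task loss,
# discouraging sparse activations in a chosen hidden layer:
#   loss = cross_entropy(logits, targets) + lam * perplexity_sparsity(hidden).mean()
# where hidden, lam, logits and targets stand in for a concrete model's
# activations, penalty weight, outputs and labels.
```

Because the score is a smooth function of the activations, its gradient flows back through the network, which is what allows sparsity to be penalised during training rather than only monitored after the fact.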


Related research

02/21/2020
Exploiting the Full Capacity of Deep Neural Networks while Avoiding Overfitting by Targeted Sparsity Regularization
Overfitting is one of the most common problems when training deep neural...

03/07/2020
AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks
The large capacity of neural networks enables them to learn complex func...

02/14/2022
Benign Overfitting in Two-layer Convolutional Neural Networks
Modern neural networks often have great expressive power and can be trai...

07/15/2021
Training for temporal sparsity in deep neural networks, application in video processing
Activation sparsity improves compute efficiency and resource utilization...

07/01/2023
Sparsity-aware generalization theory for deep neural networks
Deep artificial neural networks achieve surprising generalization abilit...

06/22/2022
Understanding the effect of sparsity on neural networks robustness
This paper examines the impact of static sparsity on the robustness of a...

01/18/2022
Design Space Exploration of Dense and Sparse Mapping Schemes for RRAM Architectures
The impact of device and circuit-level effects in mixed-signal Resistive...
