Within-layer Diversity Reduces Generalization Gap

06/10/2021
by Firas Laakom, et al.

Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization, where errors are back-propagated from the last layer to the first. At each optimization step, neurons in a given layer receive feedback from neurons in higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback that encourages diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. By penalizing these similarities and thus promoting diversity, we encourage each neuron to learn a distinctive representation, enriching the data representation learned within the layer and increasing the total capacity of the model. We theoretically study how within-layer activation diversity affects the generalization performance of a neural network and prove that increasing the diversity of hidden activations reduces the estimation error. In addition to these theoretical guarantees, we present an empirical study on three datasets confirming that the proposed approach enhances the performance of state-of-the-art neural network models and decreases the generalization gap.
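The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch version of a within-layer diversity penalty: each neuron's response over a batch is treated as a vector, pairwise cosine similarity stands in for the paper's similarity measure (the exact kernel may differ), and the mean squared off-diagonal similarity is added to the task loss. The `lambda_div` weight and the `return_hidden` model interface are illustrative assumptions, not part of the paper.

```python
import torch

def within_layer_diversity_penalty(activations: torch.Tensor) -> torch.Tensor:
    """Mean squared pairwise similarity between neurons in one layer.

    activations: (batch_size, num_neurons) hidden activations.
    Treats each neuron's responses across the batch as a vector and
    penalizes how similar these vectors are to one another.
    """
    z = activations.t()                                    # (neurons, batch)
    z = z / (z.norm(dim=1, keepdim=True) + 1e-8)           # unit-normalize each neuron's responses
    sim = z @ z.t()                                        # (neurons, neurons) cosine similarities
    n = sim.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=sim.device)
    return sim[off_diag].pow(2).mean()                     # penalize redundancy between distinct neurons

# Hypothetical training step: add the penalty to the task loss.
# logits, hidden = model(x, return_hidden=True)   # model assumed to expose the layer's activations
# loss = criterion(logits, y) + lambda_div * within_layer_diversity_penalty(hidden)
```

Squaring the off-diagonal similarities is one design choice among several; it discourages both strong positive and strong negative correlation between neurons, whereas the paper's own similarity measure and penalty may be defined differently.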


Related research

01/03/2023  WLD-Reg: A Data-dependent Within-layer Diversity Regularizer
06/04/2022  On the Generalization Power of the Overfitted Three-Layer Neural Tangent Kernel Model
07/14/2021  Hierarchical Associative Memory
10/09/2020  Neural Random Projection: From the Initial Task To the Input Similarity Problem
03/13/2021  Conceptual capacity and effective complexity of neural networks
11/16/2015  Diversity Networks: Neural Network Compression Using Determinantal Point Processes
10/17/2022  Measures of Information Reflect Memorization Patterns
