WLD-Reg: A Data-dependent Within-layer Diversity Regularizer

01/03/2023
by Firas Laakom, et al.

Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization, where errors are back-propagated from the last layer back to the first one. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback that encourages diversity among the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks. The code is publicly available at <https://github.com/firasl/AAAI-23-WLD-Reg>
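The core idea can be illustrated with a short sketch: take the activation matrix of one layer over a batch, compute the pairwise similarity between the output vectors of its neurons, and penalize high off-diagonal similarity. This is an illustrative reimplementation, not the paper's exact regularizer; the similarity measure (cosine, here) and the aggregation are assumptions, and the function name `wld_reg_sketch` is hypothetical.

```python
import numpy as np

def wld_reg_sketch(activations: np.ndarray) -> float:
    """Illustrative within-layer diversity penalty (a sketch, not the
    paper's exact formulation): mean squared pairwise cosine similarity
    between the per-neuron activation vectors of a single layer.

    activations: array of shape (batch_size, n_neurons), the outputs of
    one layer over a mini-batch.
    """
    # Normalize each neuron's output vector across the batch
    # (small epsilon avoids division by zero for dead neurons).
    norms = np.linalg.norm(activations, axis=0, keepdims=True) + 1e-8
    unit = activations / norms
    # Pairwise cosine similarities between neurons: (n_neurons, n_neurons).
    sim = unit.T @ unit
    n = sim.shape[0]
    # Penalize only off-diagonal entries: a neuron's similarity with
    # itself is always 1 and carries no diversity information.
    off_diag = sim - np.eye(n)
    return float(np.sum(off_diag ** 2) / (n * (n - 1)))
```

In training, such a term would be scaled by a hyperparameter and added to the task loss, so minimizing it pushes neurons of the same layer toward decorrelated outputs: identical neurons yield a penalty of 1, orthogonal neurons a penalty of 0.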


Related research:

- 06/10/2021: Within-layer Diversity Reduces Generalization Gap
- 10/31/2018: Conceptual Content in Deep Convolutional Neural Networks: An analysis into multi-faceted properties of neurons
- 06/05/2021: Convolutional Neural Networks with Gated Recurrent Connections
- 07/07/2020: SpinalNet: Deep Neural Network with Gradual Input
- 08/28/2023: Multilayer Multiset Neuronal Networks – MMNNs
- 07/14/2021: Hierarchical Associative Memory
- 11/16/2015: Diversity Networks: Neural Network Compression Using Determinantal Point Processes
