To update or not to update? Neurons at equilibrium in deep models

07/19/2022
by Andrea Bragagnolo, et al.

Recent advances in deep learning optimization showed that, with some a posteriori information on fully-trained models, it is possible to match the same performance by training only a subset of their parameters. Such a discovery has broad impact from theory to applications, driving research toward methods that identify the minimum subset of parameters to train without exploiting look-ahead information. However, the proposed methods do not match state-of-the-art performance and rely on unstructured, sparsely connected models. In this work we shift the focus from single parameters to the behavior of whole neurons, exploiting the concept of neuronal equilibrium (NEq). When a neuron is in a configuration at equilibrium (meaning that it has learned a specific input-output relationship), we can halt its update; conversely, when a neuron is out of equilibrium, we let its state evolve toward an equilibrium state by updating its parameters. The proposed approach has been tested on different state-of-the-art learning strategies and tasks, validating NEq and showing that neuronal equilibrium depends on the specific learning setup.
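The freeze/update decision described above can be illustrated with a minimal sketch: track how similar each neuron's outputs on a fixed validation set are from one epoch to the next, smooth the change in that similarity into a "velocity," and flag a neuron as being at equilibrium once the velocity falls below a threshold. The class name, the thresholds `eps` and `mu`, and the exact velocity update rule here are illustrative assumptions, not the paper's verbatim formulation.

```python
import numpy as np


def cosine_similarity(a, b):
    """Cosine similarity between two 1-D output vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


class NeuronEquilibriumTracker:
    """Flags neurons whose input-output relationship has stopped changing.

    Hypothetical sketch: `eps` (freeze threshold) and `mu` (velocity
    smoothing) are illustrative hyperparameters.
    """

    def __init__(self, eps=1e-3, mu=0.5):
        self.eps = eps
        self.mu = mu
        self.prev_outputs = None  # neuron outputs from the previous epoch
        self.prev_sim = None      # per-neuron similarity at the previous epoch
        self.velocity = None      # smoothed change of the similarity

    def update(self, outputs):
        """outputs: array of shape (num_neurons, num_validation_samples).

        Returns a boolean mask: True where a neuron can be frozen.
        """
        outputs = np.asarray(outputs, dtype=float)
        n = outputs.shape[0]
        if self.prev_outputs is None:
            # First epoch: no history yet, keep every neuron trainable.
            self.prev_outputs = outputs
            self.prev_sim = np.zeros(n)
            self.velocity = np.ones(n)
            return np.zeros(n, dtype=bool)
        # Per-neuron similarity between this epoch's and last epoch's outputs.
        sim = np.array([cosine_similarity(outputs[i], self.prev_outputs[i])
                        for i in range(n)])
        delta = sim - self.prev_sim
        # Momentum-style smoothing of the similarity change ("velocity").
        self.velocity = delta - self.mu * self.velocity
        self.prev_sim = sim
        self.prev_outputs = outputs
        # A neuron whose velocity has vanished is treated as at equilibrium.
        return np.abs(self.velocity) < self.eps
```

In a training loop, the returned mask would gate the optimizer step for each neuron's parameters (e.g., zeroing its gradients), so neurons at equilibrium stop being updated while the rest keep evolving.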


