Local Propagation in Constraint-based Neural Networks

02/18/2020
by Giuseppe Marra, et al.

In this paper we study a constraint-based representation of neural network architectures. We cast the learning problem in the Lagrangian framework and investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints while learning from the available supervisions. The computational structure of the proposed Local Propagation (LP) algorithm is based on the search for saddle points in the adjoint space composed of weights, neural outputs, and Lagrange multipliers. All updates of the model variables are performed locally, so that LP is fully parallelizable over the neural units and circumvents the classic vanishing-gradient problem of deep networks. The implementation of popular neural models is described in the context of LP, together with the conditions that trace a natural connection with Backpropagation. We also investigate the setting in which bounded violations of the architectural constraints are tolerated, and we provide experimental evidence that LP is a feasible approach to training shallow and deep networks, opening the road to further investigations on more complex architectures that are easily described by constraints.
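To make the saddle-point scheme concrete, here is a minimal NumPy sketch of the kind of optimization the abstract describes: the hidden-layer equation is treated as an architectural constraint, enforced by Lagrange multipliers, with gradient descent on weights and neural outputs and gradient ascent on the multipliers. This is an illustrative toy under stated assumptions, not the authors' implementation; the layer sizes, step size, variable names (`x1`, `lam`), and the choice of a linear output layer are all assumptions of the sketch.

```python
import numpy as np

# Toy LP-style saddle-point search (illustrative sketch, not the paper's code).
# The hidden layer is an architectural constraint: x1 = tanh(x0 @ W1).
# Lagrangian: L = loss(x1 @ W2, y) + sum(lam * (x1 - tanh(x0 @ W1))).
# Descend on (W1, W2, x1), ascend on lam. Each update is local: it only
# touches quantities attached to one layer, so no end-to-end backprop chain.

rng = np.random.default_rng(0)

x0 = rng.normal(size=(4, 3))        # batch of 4 inputs, 3 features
y = rng.normal(size=(4, 1))         # regression targets

W1 = 0.5 * rng.normal(size=(3, 2))  # input -> hidden weights
W2 = 0.5 * rng.normal(size=(2, 1))  # hidden -> output weights
x1 = np.tanh(x0 @ W1)               # hidden outputs, treated as free variables
lam = np.zeros_like(x1)             # one multiplier per constrained output

def loss_and_residual():
    out = x1 @ W2                                   # linear output layer
    return 0.5 * np.mean((out - y) ** 2), x1 - np.tanh(x0 @ W1)

initial_loss, _ = loss_and_residual()

lr = 0.05
for _ in range(500):
    err = (x1 @ W2 - y) / len(y)    # gradient of the loss w.r.t. the output
    pre = x0 @ W1                   # pre-activation of the constrained layer
    g = x1 - np.tanh(pre)           # constraint residual

    W2 -= lr * (x1.T @ err)                                 # descend on W2
    x1 -= lr * (err @ W2.T + lam)                           # descend on x1
    W1 -= lr * (-(x0.T @ (lam * (1 - np.tanh(pre) ** 2))))  # descend on W1
    lam += lr * g                                           # ascend on lam

final_loss, final_residual = loss_and_residual()
print(final_loss, np.abs(final_residual).max())
```

Note the locality the abstract emphasizes: the `W1` update only reads `x0`, `lam`, and its own pre-activation, never the loss at the output; the multiplier ascent is what propagates the learning signal between layers.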


Related research:

- 02/18/2020, A Lagrangian Approach to Information Propagation in Graph Neural Networks: In many real world applications, data are characterized by a complex str...
- 05/05/2020, Deep Lagrangian Constraint-based Propagation in Graph Neural Networks: Several real-world applications are characterized by data that exhibit a...
- 09/01/2020, Developing Constrained Neural Units Over Time: In this paper we present a foundational study on a constrained method th...
- 08/21/2018, Backpropagation and Biological Plausibility: By and large, Backpropagation (BP) is regarded as one of the most import...
- 05/20/2021, A Stochastic Composite Augmented Lagrangian Method For Reinforcement Learning: In this paper, we consider the linear programming (LP) formulation for d...
- 03/09/2021, Analytically Tractable Inference in Deep Neural Networks: Since its inception, deep learning has been overwhelmingly reliant on ba...
- 03/11/2021, Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification: Recent works in neural network verification show that cheap incomplete v...
