MPLP: Learning a Message Passing Learning Protocol

07/02/2020
by Ettore Randazzo, et al.

We present a novel method for learning the weights of an artificial neural network: a Message Passing Learning Protocol (MPLP). In MPLP, we abstract every operation occurring in an ANN as an independent agent. Each agent is responsible for ingesting incoming multidimensional messages from other agents, updating its internal state, and generating multidimensional messages to be passed on to neighbouring agents. We demonstrate the viability of MPLP, as opposed to traditional gradient-based approaches, on simple feed-forward neural networks, and present a framework capable of generalizing to non-traditional neural network architectures. MPLP is meta-learned using end-to-end gradient-based meta-optimisation. We further discuss the observed properties of MPLP and hypothesize its applicability to various fields of deep learning.
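The agent abstraction described in the abstract can be illustrated with a minimal sketch. The class below is hypothetical (the names `MessagePassingAgent`, `step`, and the linear update/emit maps are assumptions, not the paper's implementation); in MPLP the update and emission functions would be meta-learned end to end rather than fixed random linear maps.

```python
import math
import random


class MessagePassingAgent:
    """Hypothetical MPLP-style agent: ingests incoming message vectors,
    updates an internal state, and emits a message for its neighbours.
    The linear maps here are random stand-ins for meta-learned functions."""

    def __init__(self, state_dim, msg_dim, seed=0):
        rnd = random.Random(seed)
        self.state = [0.0] * state_dim
        # Weights mapping [state; aggregated message] -> new state.
        self.w_update = [[rnd.gauss(0.0, 0.1) for _ in range(state_dim + msg_dim)]
                         for _ in range(state_dim)]
        # Weights mapping new state -> outgoing message.
        self.w_emit = [[rnd.gauss(0.0, 0.1) for _ in range(state_dim)]
                       for _ in range(msg_dim)]

    def step(self, incoming):
        # Mean-pool the incoming multidimensional messages into one vector.
        n = len(incoming)
        agg = [sum(m[i] for m in incoming) / n for i in range(len(incoming[0]))]
        x = self.state + agg
        # Update the internal state from the concatenated input.
        self.state = [math.tanh(sum(w * v for w, v in zip(row, x)))
                      for row in self.w_update]
        # Emit a message derived from the updated state.
        return [sum(w * s for w, s in zip(row, self.state))
                for row in self.w_emit]


agent = MessagePassingAgent(state_dim=8, msg_dim=4)
msg = agent.step([[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]])
```

A network of such agents, one per ANN operation, would exchange these messages in place of scalar gradients.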


