Greedy InfoMax for Biologically Plausible Self-Supervised Representation Learning

05/28/2019
by Sindy Löwe et al.

We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation, but instead exploits the natural order in data. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximize the mutual information between its consecutive outputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domains. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
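To make the training objective concrete, below is a minimal NumPy sketch of the InfoNCE loss that each gradient-isolated module minimizes. This is an illustrative toy, not the authors' implementation: it assumes a batch where each context vector is paired with one matching "future" vector (the positive), and treats the other rows of the batch as negatives. In the full method, each module computes this loss on its own outputs and blocks gradients from flowing to the module below (e.g. by detaching its input).

```python
import numpy as np

def infonce_loss(context, future):
    """InfoNCE bound: for each context vector, the row-aligned future
    vector is the positive sample; all other rows act as negatives.

    context, future: arrays of shape (batch, dim).
    Returns the mean negative log-softmax score of the positives.
    """
    scores = context @ future.T                       # (batch, batch) similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal of the score matrix.
    return -np.mean(np.diag(log_probs))

# Perfectly aligned, well-separated pairs drive the loss toward zero,
# while unrelated pairs stay near the chance level of log(batch).
aligned = np.eye(4) * 10.0
print(infonce_loss(aligned, aligned))  # close to 0
```

Minimizing this quantity maximizes a lower bound on the mutual information between consecutive module outputs; stacking modules that are each trained this way, with gradients cut between them, is what allows the asynchronous, distributed training described above.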

