Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers

07/24/2021
by Agnieszka Słowik, et al.

Feed-forward neural networks consist of a sequence of layers, each of which performs some processing on the information from the previous layer. A downside to this approach is that each layer (or module, as multiple modules can operate in parallel) is tasked with processing the entire hidden state, rather than the particular part of the state that is most relevant to it. Functions that operate on only a small number of input variables are an essential part of most programming languages, and they allow for improved modularity and code re-usability. Our proposed method, Neural Function Modules (NFM), aims to introduce the same structural capability into deep learning. Most prior work on combining top-down and bottom-up feedback in feed-forward networks is limited to classification problems. The key contribution of our work is to combine attention, sparsity, and top-down and bottom-up feedback in a flexible algorithm that, as we show, improves results on standard classification, out-of-domain generalization, generative modeling, and learning representations in the context of reinforcement learning.
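The abstract describes modules that, like functions with a short argument list, attend over the states produced by other layers (bottom-up and top-down) and read only a sparse subset of them. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the class name SparseArgumentModule, the learned per-module query, the hard top-k selection rule, and all dimensions are illustrative assumptions.

```python
# A minimal sketch of a "sparse arguments" module: attention scores are
# computed over all candidate inputs, but only the top-k candidates are
# kept, so the module processes a small slice of the hidden state rather
# than all of it. Illustrative only; not the NFM reference implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseArgumentModule(nn.Module):
    """Attend over candidate inputs, keep only the top-k, then transform."""

    def __init__(self, dim: int, k: int = 2):
        super().__init__()
        self.k = k
        self.query = nn.Parameter(torch.randn(dim))  # learned per-module query
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, candidates: torch.Tensor) -> torch.Tensor:
        # candidates: (batch, n_candidates, dim), e.g. the outputs of earlier
        # layers (bottom-up) plus later-layer states from a previous pass
        # (top-down feedback).
        scores = torch.einsum("bnd,d->bn", self.key(candidates), self.query)
        scores = scores / candidates.size(-1) ** 0.5
        # Hard sparsity: each module reads only its k highest-scoring inputs.
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.full_like(scores, float("-inf")).scatter(-1, topk, 0.0)
        attn = F.softmax(scores + mask, dim=-1)  # non-selected weights are 0
        args = torch.einsum("bn,bnd->bd", attn, self.value(candidates))
        return self.ffn(args)


if __name__ == "__main__":
    module = SparseArgumentModule(dim=16, k=2)
    layer_outputs = torch.randn(4, 5, 16)  # 5 candidate states per example
    print(module(layer_outputs).shape)  # torch.Size([4, 16])
```

The hard top-k mask here is one of several plausible ways to realize the sparsity the abstract mentions; a soft attention distribution with an entropy or sparsity penalty would serve the same purpose.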


Related research

research · 06/13/2017
Transfer entropy-based feedback improves performance in artificial neural networks
The structure of the majority of modern deep neural networks is characte...

research · 01/02/2017
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs by Selective Execution
We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-for...

research · 09/17/2015
Some Theorems for Feed Forward Neural Networks
In this paper we introduce a new method which employs the concept of "Or...

research · 05/07/2020
Lifted Regression/Reconstruction Networks
In this work we propose lifted regression/reconstruction networks (LRRNs...

research · 02/10/2022
Decomposing neural networks as mappings of correlation functions
Understanding the functional principles of information processing in dee...

research · 09/28/2019
Distributed Iterative Gating Networks for Semantic Segmentation
In this paper, we present a canonical structure for controlling informat...

research · 07/01/2023
Sparsity-aware generalization theory for deep neural networks
Deep artificial neural networks achieve surprising generalization abilit...
