Feed-Forward Neural Networks Need Inductive Bias to Learn Equality Relations

12/04/2018
by Tillman Weyde, et al.

Basic binary relations such as equality and inequality are fundamental to relational data structures. Neural networks should learn such relations and generalise to new, unseen data. We show in this study, however, that this generalisation fails with standard feed-forward networks on binary vectors. Even when trained with maximal training data, standard networks do not reliably detect equality. We introduce differential rectifier (DR) units that we add to the network in different configurations. The DR units create an inductive bias in the networks so that they do learn to generalise, even from small numbers of examples, and we have found no negative effect of including them in the network. Given the fundamental nature of these relations, we hypothesise that feed-forward neural network learning would benefit from inductive bias for other relations as well. Consequently, the further development of suitable inductive biases will be beneficial to many tasks in relational learning with neural networks.
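The abstract does not spell out how the DR units are computed. The minimal sketch below assumes they take element-wise rectified (absolute) differences of the two input vectors and feed these into an ordinary feed-forward classifier alongside the raw inputs; the class name DREqualityNet, the layer sizes, and the exact DR formula are illustrative assumptions, not the authors' implementation.

# Sketch of a feed-forward equality classifier with assumed DR features.
import torch
import torch.nn as nn

class DREqualityNet(nn.Module):
    def __init__(self, vec_len: int, hidden: int = 32):
        super().__init__()
        # The hidden layer sees both raw input vectors plus the DR features.
        self.fc1 = nn.Linear(2 * vec_len + vec_len, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        dr = torch.abs(x1 - x2)            # DR units: |x1_i - x2_i| (assumed form)
        h = torch.relu(self.fc1(torch.cat([x1, x2, dr], dim=-1)))
        return torch.sigmoid(self.fc2(h))  # probability that x1 equals x2

if __name__ == "__main__":
    # Toy usage with random binary vectors of length 10.
    net = DREqualityNet(vec_len=10)
    a = torch.randint(0, 2, (4, 10)).float()
    b = a.clone()
    print(net(a, b).shape)  # torch.Size([4, 1])

With the DR features exposed directly, the network only has to learn that equality corresponds to all differences being zero, which is the kind of inductive bias the paper argues standard feed-forward networks lack.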


