Modularity Matters: Learning Invariant Relational Reasoning Tasks

06/18/2018
by Jason Jo, et al.

We focus on two supervised visual reasoning tasks whose labels encode a semantic relational rule between two or more objects in an image: the MNIST Parity task and the colorized Pentomino task. The objects in the images undergo random translation, scaling, rotation and coloring transformations, so these tasks involve invariant relational reasoning. We report uneven performance of various deep CNN models on these two tasks. For the MNIST Parity task, the VGG19 model soundly outperforms a family of ResNet models, and the ResNet models exhibit a general sensitivity to random initialization. For the colorized Pentomino task, both the VGG19 and ResNet models exhibit sluggish optimization and very poor test generalization, hovering around 30% test error. The CNN models we tested all learn hierarchies of fully distributed features and thus encode the distributed representation prior. We are motivated by a hypothesis from cognitive neuroscience which posits that the human visual cortex is modularized, and that this modularity allows the visual cortex to learn higher order invariances. To this end, we consider a modularized variant of the ResNet model, referred to as a Residual Mixture Network (ResMixNet), which employs a mixture-of-experts architecture to interleave distributed representations with more specialized, modular representations. We show that very shallow ResMixNets are capable of learning each of the two tasks well, attaining less than 2% and 1% test error on the MNIST Parity and colorized Pentomino tasks respectively. Most importantly, the ResMixNet models are extremely parameter efficient, generalizing better than various non-modular CNNs that have over 10x the number of parameters. These experimental results support the hypothesis that modularity is a robust prior for learning invariant relational reasoning.
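To make the task setup concrete, the sketch below shows one plausible way to build an invariant relational example of the kind the abstract describes: two MNIST digits, each independently translated, scaled, rotated and colored, composited onto one canvas, with the label encoding a relational rule between them (here, the parity of their sum). The canvas size, digit placement, and torchvision-based transformations are illustrative assumptions, not the paper's exact pipeline.

```python
import random
import torch
from torchvision import datasets, transforms

# Random translation / scaling / rotation, as described in the abstract.
# The specific ranges here are assumptions for illustration.
affine = transforms.RandomAffine(degrees=45, translate=(0.2, 0.2),
                                 scale=(0.5, 1.5))

def make_parity_example(mnist, canvas_size=64):
    """Compose two independently transformed digits onto one canvas;
    the label encodes a relational rule between them (parity of the sum)."""
    canvas = torch.zeros(3, canvas_size, canvas_size)
    digit_sum = 0
    for y, x in [(0, 0), (canvas_size // 2, canvas_size // 2)]:
        img, label = mnist[random.randrange(len(mnist))]  # img: (1, 28, 28)
        img = affine(img)                  # random translate/scale/rotate
        color = torch.rand(3, 1, 1)        # random coloring transformation
        canvas[:, y:y + 28, x:x + 28] += color * img
        digit_sum += label
    return canvas.clamp(0, 1), digit_sum % 2  # label: 0 = even, 1 = odd

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())
image, parity = make_parity_example(mnist)
```

Because the transformations are sampled independently of the label, a model can only solve the task by learning the relational rule invariantly to pose and color.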
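And here is a minimal sketch of a mixture-of-experts residual block in the spirit of ResMixNet, assuming per-example soft gating over a few small convolutional experts. The layer sizes, gating scheme, and class names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEResidualBlock(nn.Module):
    """Residual block whose transform is a soft mixture of K small
    convolutional experts, selected by a learned gating network."""

    def __init__(self, channels: int, num_experts: int = 4):
        super().__init__()
        # Each expert is a small conv branch; modularity comes from the
        # gate routing different inputs toward different experts.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            for _ in range(num_experts)
        ])
        # Gate: global-average-pool the features, then predict a softmax
        # distribution over the experts (one distribution per example).
        self.gate = nn.Linear(channels, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.mean(dim=(2, 3))                     # (B, C)
        weights = F.softmax(self.gate(pooled), dim=1)   # (B, K)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, K, C, H, W)
        mixed = (weights[:, :, None, None, None] * expert_out).sum(dim=1)
        return F.relu(x + mixed)                        # residual connection
```

Stacking a few such blocks interleaves the gated, specialized expert branches with the shared residual stream, which is one way to mix modular and distributed representations while keeping the parameter count small.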

