Divide and Conquer Networks

11/08/2016
by Alex Nowak, et al.

We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumptions whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study its implications in terms of learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on three combinatorial and geometric tasks: sorting, clustering and convex hulls. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
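
The following is a minimal sketch, not the paper's implementation, of the recursive split-then-merge skeleton the abstract describes. The learned components are abstracted as two callables, `split` and `merge` (in the paper these would be neural modules trained from input-output pairs); the hand-coded `split_half` and `merge_sorted` stand-ins are hypothetical placeholders that instantiate the scheme for the sorting task.

```python
# Sketch of a divide-and-conquer computation built from two atomic,
# scale-invariant operations: a split and a merge.
from typing import Callable, List, Sequence, Tuple


def divide_and_conquer(
    x: Sequence,
    split: Callable[[Sequence], Tuple[Sequence, Sequence]],
    merge: Callable[[List, List], List],
    base_size: int = 1,
) -> List:
    """Recursively apply the split/merge operations to solve a task."""
    if len(x) <= base_size:          # atomic sub-problem: solve trivially
        return list(x)
    left, right = split(x)           # partition the input into smaller sets
    solved_left = divide_and_conquer(left, split, merge, base_size)
    solved_right = divide_and_conquer(right, split, merge, base_size)
    return merge(solved_left, solved_right)  # combine partial solutions


# Hypothetical stand-ins for the learned modules, specialised to sorting:
def split_half(x: Sequence) -> Tuple[Sequence, Sequence]:
    mid = len(x) // 2
    return x[:mid], x[mid:]


def merge_sorted(a: List, b: List) -> List:
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]


if __name__ == "__main__":
    print(divide_and_conquer([5, 2, 9, 1, 7], split_half, merge_sorted))
    # -> [1, 2, 5, 7, 9]
```

Because the same split and merge are reused at every level of the recursion, the depth of the call tree (and hence the computational cost) depends on how the input is partitioned, which is what allows complexity to enter as a regularization term in the trained version.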


