Neural Execution Engines: Learning to Execute Subroutines

06/15/2020
by Yujun Yan, et al.

A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms. This is evidenced by their inability to generalize to data distributions that are outside of their restricted training sets, namely larger inputs and unseen data. We study these generalization issues at the level of numerical subroutines that comprise common algorithms like sorting, shortest paths, and minimum spanning trees. First, we observe that transformer-based sequence-to-sequence models can learn subroutines like sorting a list of numbers, but their performance rapidly degrades as the length of lists grows beyond those found in the training set. We demonstrate that this is due to attention weights that lose fidelity with longer sequences, particularly when the input numbers are numerically similar. To address the issue, we propose a learned conditional masking mechanism, which enables the model to strongly generalize far outside of its training range with near-perfect accuracy on a variety of algorithms. Second, to generalize to unseen data, we show that encoding numbers with a binary representation leads to embeddings with rich structure once trained on downstream tasks like addition or multiplication. This allows the embedding to handle missing data by faithfully interpolating numbers not seen during training.
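The binary-representation idea can be illustrated with a minimal sketch. The helper name, the fixed width, and the big-endian bit order are assumptions for illustration, not the paper's implementation: each integer becomes a fixed-width bit vector, so numerically close values share most of their bits, giving a downstream embedding the structure it needs to interpolate numbers unseen during training.

```python
import numpy as np

def binary_encode(n, width=8):
    """Encode a non-negative integer as a fixed-width bit vector, MSB first."""
    return np.array([(n >> (width - 1 - i)) & 1 for i in range(width)],
                    dtype=np.float32)

# Nearby numbers differ in only a few bits, e.g. 5 -> 00000101
print(binary_encode(5))  # [0. 0. 0. 0. 0. 1. 0. 1.]
```

In the paper's setting, vectors like these are fed to a trained embedding network; the smooth bit-level structure is what lets the model handle numbers missing from the training set.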

Related research

- Length Generalization in Arithmetic Transformers (06/27/2023)
- Numerical Sequence Prediction using Bayesian Concept Learning (01/13/2020)
- Extensions and Limitations of the Neural GPU (11/02/2016)
- Strong Generalization and Efficiency in Neural Programs (07/07/2020)
- Neural GPUs Learn Algorithms (11/25/2015)
- Order Matters: Sequence to sequence for sets (11/19/2015)
- It Ain't That Bad: Understanding the Mysterious Performance Drop in OOD Generalization for Generative Transformer Models (08/16/2023)
