Pointer Networks

06/09/2015, by Oriol Vinyals et al.

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes at each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences, and various combinatorial optimization problems, belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention mechanisms in that, instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
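The pointing mechanism the abstract describes can be illustrated with a minimal NumPy sketch of one decoding step: the decoder state scores every encoder position with an additive attention function, and the softmax over those scores is taken directly as the output distribution over input positions rather than being used to blend a context vector. The weight names `W1`, `W2`, and `v` follow the paper's notation; the shapes and random values below are purely illustrative, not the paper's actual implementation.

```python
import numpy as np

def pointer_attention(encoder_states, decoder_state, W1, W2, v):
    """One Ptr-Net decoding step.

    Scores each input position j with u_j = v^T tanh(W1 @ e_j + W2 @ d)
    and returns a softmax distribution over the input positions --
    the "pointer" to a member of the input sequence.
    """
    scores = np.array([v @ np.tanh(W1 @ e + W2 @ decoder_state)
                       for e in encoder_states])
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

# Toy example: 5 input positions, hidden size 4 (illustrative only).
rng = np.random.default_rng(0)
d = 4
enc = rng.normal(size=(5, d))             # encoder hidden states e_1..e_5
dec = rng.normal(size=d)                  # current decoder state d_i
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=d)

p = pointer_attention(enc, dec, W1, W2, v)
print(p)  # a probability distribution over the 5 input positions
```

The argmax of `p` selects which input element is emitted at this step, which is why the output vocabulary automatically grows with the input length.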


Related research:

- 11/16/2015, A Neural Transducer: Sequence-to-sequence models have achieved impressive results on various ...
- 05/27/2019, OrderNet: Ordering by Example: In this paper we introduce a new neural architecture for sorting unorder...
- 05/08/2017, Convolutional Sequence to Sequence Learning: The prevalent approach to sequence to sequence learning maps an input se...
- 05/02/2018, Placement Delivery Array Design via Attention-Based Deep Neural Network: A decentralized coded caching scheme has been proposed by Maddah-Ali and...
- 11/19/2015, Order Matters: Sequence to sequence for sets: Sequences have become first class citizens in supervised learning thanks...
- 03/22/2018, Attention Solves Your TSP: We propose a framework for solving combinatorial optimization problems o...
- 02/27/2020, Towards Modular Algorithm Induction: We present a modular neural network architecture Main that learns algori...
