
Teaching Machines to Code: Neural Markup Generation with Visual Attention

02/15/2018
by Sumeet S. Singh, et al.

We present a deep recurrent neural network model with soft visual attention that learns to generate LaTeX markup for real-world math formulas from their images. Applying neural sequence-generation techniques that have been very successful in machine translation and in image, handwriting, and speech captioning, recognition, transcription, and synthesis, we construct an image-to-markup model that learns to produce syntactically and semantically correct LaTeX markup code over 150 words long and achieves a BLEU score of 89. We also demonstrate that the model learns to scan the image left-to-right / top-to-bottom, much as a human would read it.
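The abstract's core mechanism is soft visual attention: at each decoding step, the model computes a weighted average of image feature vectors, with the weights conditioned on the decoder state. As a rough illustration (not the paper's exact formulation), one additive, Bahdanau-style attention step over a flattened grid of image features might be sketched as:

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, v):
    """One step of additive (Bahdanau-style) soft attention.

    features: (L, D) image feature vectors (a flattened H x W grid, L = H*W)
    hidden:   (H,)   current decoder hidden state
    W_f:      (D, A) projection of feature vectors into the attention space
    W_h:      (H, A) projection of the decoder state into the attention space
    v:        (A,)   scoring vector

    Returns (context, weights): the attended context vector and the
    attention distribution over the L feature locations.
    """
    # Alignment scores: e_i = v . tanh(W_f f_i + W_h h)
    scores = np.tanh(features @ W_f + hidden @ W_h) @ v     # shape (L,)
    # Numerically stable softmax -> weights that sum to 1
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    # Context vector: expectation of the features under the weights
    context = weights @ features                            # shape (D,)
    return context, weights
```

The context vector is then typically fed, together with the previous token embedding, into the recurrent decoder that emits the next LaTeX token; visualizing `weights` over decoding steps is what produces the left-to-right / top-to-bottom scanning pattern the abstract mentions.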



Code Repositories

im2latex

Solution to OpenAI's im2latex request for research