Translating Mathematical Formula Images to LaTeX Sequences Using Deep Neural Networks with Sequence-level Training

08/29/2019
by Zelun Wang, et al.

In this paper, we propose a deep neural network with an encoder-decoder architecture that translates images of mathematical formulas into their LaTeX markup sequences. The encoder is a convolutional neural network (CNN) that transforms the input image into a set of feature maps. To better capture the spatial relationships among math symbols, the feature maps are augmented with 2D positional encoding before being unfolded into a vector sequence. The decoder is a stacked bidirectional long short-term memory (LSTM) network with a soft attention mechanism, which acts as a language model that translates the encoder output into a sequence of LaTeX tokens. The network is trained in two stages. The first stage is token-level training with maximum-likelihood estimation (MLE) as the objective function. The second stage is sequence-level training with a new objective function inspired by the policy gradient algorithm from reinforcement learning. This design also mitigates the exposure bias problem by closing the feedback loop in the decoder during sequence-level training, i.e., feeding in the predicted token rather than the ground-truth token at every time step. The model is trained and evaluated on the IM2LATEX-100K dataset and achieves state-of-the-art performance on both sequence-based and image-based evaluation metrics.
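The 2D positional-encoding step can be illustrated with a small sketch. This is a minimal NumPy example assuming the common sinusoidal formulation, with half of the channels encoding the row index and the other half the column index; the paper's exact formulation may differ, and all function names and dimensions here are illustrative.

```python
import numpy as np

def positional_encoding_1d(length, dim):
    """Standard sinusoidal encoding: returns an array of shape (length, dim)."""
    pos = np.arange(length)[:, None]               # (length, 1)
    i = np.arange(dim // 2)[None, :]               # (1, dim/2)
    angles = pos / np.power(10000.0, 2 * i / dim)  # (length, dim/2)
    pe = np.zeros((length, dim))
    pe[:, 0::2] = np.sin(angles)                   # even channels: sine
    pe[:, 1::2] = np.cos(angles)                   # odd channels: cosine
    return pe

def positional_encoding_2d(h, w, dim):
    """2D encoding: first dim/2 channels encode the row, last dim/2 the column."""
    assert dim % 4 == 0, "dim must be divisible by 4"
    pe = np.zeros((h, w, dim))
    pe_h = positional_encoding_1d(h, dim // 2)     # (h, dim/2)
    pe_w = positional_encoding_1d(w, dim // 2)     # (w, dim/2)
    pe[:, :, : dim // 2] = pe_h[:, None, :]        # broadcast over columns
    pe[:, :, dim // 2 :] = pe_w[None, :, :]        # broadcast over rows
    return pe

# Augment CNN feature maps, then unfold them into the sequence the
# attention-based decoder attends over (shapes are hypothetical).
feats = np.random.randn(8, 32, 512)                # (H', W', channels)
feats = feats + positional_encoding_2d(8, 32, 512)
sequence = feats.reshape(-1, 512)                  # (H'*W', 512)
```

Because the encoding is added (not concatenated), the feature dimension is unchanged, and each unfolded vector still carries its original grid position, which is what lets the attention mechanism reason about the 2D layout of symbols.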


Related research:

- Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction (08/11/2018)
- Token-level and sequence-level loss smoothing for RNN language models (05/14/2018)
- ConvMath: A Convolutional Sequence Network for Mathematical Expression Recognition (12/23/2020)
- Reducing Exposure Bias in Training Recurrent Neural Network Transducers (08/24/2021)
- Hybrid LSTM and Encoder-Decoder Architecture for Detection of Image Forgeries (03/06/2019)
- Sequence to Logic with Copy and Cache (07/19/2018)
- Including Image-based Perception in Disturbance Observer for Warehouse Drones (07/06/2020)
