
Solving Math Word Problems with Double-Decoder Transformer

08/28/2019
by Yuanliang Meng, et al.
UMass Lowell

This paper proposes a Transformer-based model that generates equations for math word problems. Without any copy or align mechanism, it achieves much better results than RNN models, and it can outperform RNN models equipped with complex copy-and-align mechanisms. We also show that jointly training a Transformer with two decoders, one left-to-right and one right-to-left, is beneficial for generation: such a model outperforms a single-decoder Transformer not only through an ensemble effect, but also because joint training improves the encoder. Finally, we experiment with adding reinforcement learning to the model, showing improved performance compared to MLE training.
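Since no code appears on this page, the following is a minimal PyTorch sketch (not the authors' implementation) of the double-decoder idea the abstract describes: a shared encoder reads the problem text, and two independent decoders are trained jointly, one on the equation left-to-right and one on the reversed equation, with both cross-entropy losses backpropagating into the shared encoder. All module names, hyperparameters, and the toy batch are illustrative assumptions; positional encodings, padding masks, and special-token handling for the reversed sequence are omitted for brevity.

```python
# Minimal sketch of a shared-encoder, double-decoder Transformer.
# NOT the authors' implementation: all names, sizes, and the toy batch
# below are illustrative assumptions.
import torch
import torch.nn as nn

class DoubleDecoderTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        # Two independent decoders attend to the same encoder memory,
        # so both directions' gradients update the shared encoder.
        self.dec_l2r = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.dec_r2l = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt_l2r_in, tgt_r2l_in):
        # src: problem-text token ids (B, S); tgt_*_in: teacher-forced
        # equation prefixes, one in normal order and one reversed (B, T).
        memory = self.encoder(self.embed(src))
        mask = nn.Transformer.generate_square_subsequent_mask(tgt_l2r_in.size(1))
        logits_l2r = self.out(
            self.dec_l2r(self.embed(tgt_l2r_in), memory, tgt_mask=mask))
        logits_r2l = self.out(
            self.dec_r2l(self.embed(tgt_r2l_in), memory, tgt_mask=mask))
        return logits_l2r, logits_r2l

# Joint MLE training step on a toy batch: the two cross-entropy losses
# are summed, so the encoder is trained by both decoders at once.
V = 1000
model = DoubleDecoderTransformer(vocab_size=V)
src = torch.randint(0, V, (2, 20))       # problem texts
tgt = torch.randint(0, V, (2, 12))       # gold equations, left-to-right
tgt_rev = torch.flip(tgt, dims=[1])      # same equations, right-to-left
logits_l2r, logits_r2l = model(src, tgt[:, :-1], tgt_rev[:, :-1])
ce = nn.CrossEntropyLoss()
loss = (ce(logits_l2r.reshape(-1, V), tgt[:, 1:].reshape(-1))
        + ce(logits_r2l.reshape(-1, V), tgt_rev[:, 1:].reshape(-1)))
loss.backward()
```

At inference time one would decode each direction separately (e.g., with beam search) and keep the higher-scoring equation, or treat the two decoders as an ensemble. The abstract's reinforcement-learning variant would replace the cross-entropy objective with a reward-based loss such as REINFORCE; that is not sketched here.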
