Attending to Mathematical Language with Transformers

12/05/2018
by Artit Wangperawong, et al.

Mathematical expressions were generated, evaluated, and used to train neural network models based on the transformer architecture. The expressions and their targets were treated as a character-level sequence transduction task in which both the encoder and decoder are built on attention mechanisms. Three models were trained to understand and evaluate symbolic variables and expressions in mathematics: (1) the self-attentive, feed-forward transformer without recurrence or convolution, (2) the universal transformer with recurrence, and (3) the adaptive universal transformer with recurrence and adaptive computation time. The models achieved test accuracies as high as 76.1% and 83.9%, and even in cases inferred incorrectly, the results were often very close to the targets. Notably, the models learned to add, subtract, and multiply both positive and negative decimal numbers of variable digits assigned to symbolic variables.
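The data pipeline the abstract describes (generate an expression, evaluate it to a target string, and treat both as character sequences) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code: the expression format, the variable names x and y, and the vocabulary construction are all assumptions.

```python
import random

# Supported binary operations, as in the paper's add/subtract/multiply setting.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def gen_example(rng):
    """One (expression, target) pair: two signed decimal numbers of
    variable digits assigned to symbolic variables x and y, combined
    with +, - or *. The exact serialization format is an assumption."""
    a = round(rng.uniform(-100, 100), rng.randint(1, 3))
    b = round(rng.uniform(-100, 100), rng.randint(1, 3))
    op = rng.choice(list(OPS))
    expr = f"x={a},y={b},x{op}y"
    target = str(round(OPS[op](a, b), 6))
    return expr, target

def build_vocab(texts):
    """Map every character seen in the corpus to an integer id."""
    chars = sorted({ch for t in texts for ch in t})
    return {ch: i for i, ch in enumerate(chars)}

def encode(text, vocab):
    """Character-level tokenization: the transduction task operates on
    individual characters, not on whole numbers or sub-word tokens."""
    return [vocab[ch] for ch in text]

rng = random.Random(0)
pairs = [gen_example(rng) for _ in range(1000)]
vocab = build_vocab([s for pair in pairs for s in pair])
```

The integer id sequences produced by `encode` would then feed the encoder (expression) and serve as decoder targets (evaluated result) in any of the three transformer variants.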


Related research

07/10/2018  Universal Transformers
Self-attentive feed-forward sequence models have been shown to achieve i...

04/13/2021  Distilling Wikipedia mathematical knowledge into neural network models
Machine learning applications to symbolic mathematics are becoming incre...

03/28/2020  Variational Transformers for Diverse Response Generation
Despite the great promise of Transformers in many sequence modeling task...

10/07/2021  Pretrained Language Models are Symbolic Mathematics Solvers too!
Solving symbolic mathematics has always been in the arena of human in...

06/13/2021  Thinking Like Transformers
What is the computational model behind a Transformer? Where recurrent ne...

06/11/2021  Zero-Shot Controlled Generation with Encoder-Decoder Transformers
Controlling neural network-based models for natural language generation ...

05/21/2023  A Symbolic Framework for Systematic Evaluation of Mathematical Reasoning with Transformers
Whether Transformers can learn to apply symbolic rules and generalise to...
