Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving

10/15/2019
by Imanol Schlag, et al.

We incorporate Tensor-Product Representations within the Transformer in order to better support the explicit representation of relational structure. Our Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the recently introduced Mathematics Dataset, which contains 56 categories of free-form math word problems. The essential component of the model is a novel attention mechanism, called TP-Attention, which explicitly encodes the relations between each Transformer cell and the other cells from which values have been retrieved by attention. TP-Attention goes beyond a linear combination of retrieved values, strengthening representation-building and resolving ambiguities introduced by multiple layers of standard attention. The TP-Transformer's attention maps give better insight into how it solves the Mathematics Dataset's challenging problems. Pretrained models and code will be made available after publication.
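As a rough illustration of the mechanism described above, here is a minimal single-head sketch of TP-Attention in PyTorch. It assumes the tensor-product binding is approximated by a Hadamard (elementwise) product between each retrieved value and a learned, position-specific relation vector; the names (`tp_attention`, `W_r`, etc.) are illustrative and not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def tp_attention(x, W_q, W_k, W_v, W_r):
    """Single-head TP-Attention sketch.

    x: (seq_len, d_model) input representations.
    W_q, W_k, W_v, W_r: (d_model, d_head) projection matrices;
    W_r produces the relation vectors (hypothetical naming).
    """
    q, k, v, r = x @ W_q, x @ W_k, x @ W_v, x @ W_r
    d_head = q.size(-1)
    # Standard scaled dot-product attention retrieves one value per position.
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5
    retrieved = F.softmax(scores, dim=-1) @ v          # (seq_len, d_head)
    # TP binding: instead of passing on the plain linear combination of
    # values, bind each retrieved value to its cell's relation vector
    # via an elementwise product.
    return retrieved * r                               # (seq_len, d_head)

# Toy usage with random tensors.
d_model, d_head, seq_len = 64, 16, 10
x = torch.randn(seq_len, d_model)
W = [torch.randn(d_model, d_head) / d_model ** 0.5 for _ in range(4)]
out = tp_attention(x, *W)
print(out.shape)  # torch.Size([10, 16])
```

In a full multi-head layer, each head would bind its own relation vector before the head outputs are concatenated and projected, so the binding step disambiguates which head-specific relation produced each retrieved value.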

