Logic and the 2-Simplicial Transformer

09/02/2019
by James Clift, et al.

We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
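
As a rough illustration of the idea, the sketch below implements a single head of 2-simplicial attention in PyTorch under simplifying assumptions: the attention logits are a trilinear function of one query and two key vectors, and each entity is updated with a linear projection of the flattened tensor product of two value vectors. The module name, tensor shapes, and the final down-projection are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of 2-simplicial attention (one head), assuming a simplified
# reading of the abstract. All names and shapes are illustrative.
import torch
import torch.nn as nn


class TwoSimplicialAttention(nn.Module):
    def __init__(self, dim: int, head_dim: int):
        super().__init__()
        self.q = nn.Linear(dim, head_dim)
        self.k1 = nn.Linear(dim, head_dim)
        self.k2 = nn.Linear(dim, head_dim)
        self.v1 = nn.Linear(dim, head_dim)
        self.v2 = nn.Linear(dim, head_dim)
        # Assumed: project the flattened tensor product of two value vectors
        # back down to head_dim.
        self.out = nn.Linear(head_dim * head_dim, head_dim)
        self.scale = head_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_entities, dim)
        q, k1, k2 = self.q(x), self.k1(x), self.k2(x)
        v1, v2 = self.v1(x), self.v2(x)

        # Trilinear attention logits over ordered pairs (j, k) of entities:
        # logits[b, i, j, k] = sum_d q[b, i, d] * k1[b, j, d] * k2[b, k, d]
        logits = torch.einsum("bid,bjd,bkd->bijk", q, k1, k2) * self.scale
        b, n, _, _ = logits.shape
        attn = torch.softmax(logits.reshape(b, n, n * n), dim=-1).reshape(b, n, n, n)

        # Tensor product of value vectors for each pair (j, k), flattened.
        pair_vals = torch.einsum("bjd,bke->bjkde", v1, v2).reshape(b, n, n, -1)

        # Attention-weighted sum over pairs, then project back to head_dim.
        updated = torch.einsum("bijk,bjkf->bif", attn, pair_vals)
        return self.out(updated)
```

In this reading, ordinary dot-product attention is the special case with one query, one key, and one value per interaction, while the trilinear logits and pairwise value products give the higher-dimensional generalisation over ordered pairs of entities that the abstract describes.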

Related research

Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving (10/15/2019)
We incorporate Tensor-Product Representations within the Transformer in ...

Learning Language Representations with Logical Inductive Bias (02/19/2023)
Transformer architectures have achieved great success in solving natural...

Conversational Question Answering over Knowledge Graphs with Transformer and Graph Attention Networks (04/04/2021)
This paper addresses the task of (complex) conversational question answe...

H-Transformer-1D: Fast One-Dimensional Hierarchical Attention for Sequences (07/25/2021)
We describe an efficient hierarchical method to compute attention in the...

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization (06/02/2021)
Abstractive summarization, the task of generating a concise summary of i...

Transformer in Transformer as Backbone for Deep Reinforcement Learning (12/30/2022)
Designing better deep networks and better reinforcement learning (RL) al...

Attention Approximates Sparse Distributed Memory (11/10/2021)
While Attention has come to be an important mechanism in deep learning, ...
