Challenges and Thrills of Legal Arguments

06/06/2020
by Anurag Pallaprolu, et al.

State-of-the-art attention-based models, mostly centered around the transformer architecture, solve the problem of sequence-to-sequence translation using the so-called scaled dot-product attention. While this technique is highly effective for estimating inter-token attention, it does not address inter-sequence attention in conversation-like scenarios. We propose an extension, HumBERT, that attempts to perform continuous contextual argument generation using locally trained transformers.
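
For context, the scaled dot-product attention the abstract refers to computes softmax(QK^T / sqrt(d_k)) V over token representations within a single sequence. Below is a minimal NumPy sketch of that operation; the array shapes and toy inputs are illustrative assumptions, not details from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # (n_q, n_k) token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over key positions
    return weights @ V                                    # (n_q, d_v) attended values

# Toy example: 3 query tokens attending over 4 key/value tokens (shapes chosen for illustration).
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)        # (3, 8)
```

Note that the weights here relate tokens within one sequence; the inter-sequence attention the abstract raises for conversational settings is not captured by this operation.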
