
CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights based Counterfactual Detection

05/31/2020
by   Rajaswa Patil, et al.
BITS Pilani

In this paper, we describe an approach for modelling causal reasoning in natural language by detecting counterfactuals in text using multi-head self-attention weights. We use pre-trained transformer models to extract contextual embeddings and self-attention weights from the text, and show how convolutional layers can be used to extract task-specific features from these self-attention weights. Further, we describe a fine-tuning approach with a common base model for knowledge sharing between the two closely related sub-tasks for counterfactual detection. We analyze and compare the performance of various transformer models in our experiments. Finally, we perform a qualitative analysis of the multi-head self-attention weights to interpret our models' behaviour.
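Below is a minimal sketch of the kind of pipeline the abstract describes: a pre-trained transformer provides multi-head self-attention weights, and a small convolutional head turns them into features for binary counterfactual classification. The backbone (bert-base-uncased), the shape of the convolutional head, and the example sentence are illustrative assumptions, not the authors' exact architecture or hyperparameters.

# Minimal sketch (assumed architecture, not the paper's exact model):
# extract multi-head self-attention weights from a pre-trained transformer
# and pass them through a small convolutional head for counterfactual detection.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed backbone; the paper compares several

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME, output_attentions=True)


class AttentionConvClassifier(nn.Module):
    """Illustrative head: 2D convolutions over the per-head attention maps."""

    def __init__(self, num_heads: int, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(num_heads, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool over the (seq_len x seq_len) attention map
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, attn_last_layer: torch.Tensor) -> torch.Tensor:
        # attn_last_layer: (batch, num_heads, seq_len, seq_len)
        feats = self.conv(attn_last_layer).flatten(1)  # (batch, 16)
        return self.fc(feats)


head = AttentionConvClassifier(num_heads=encoder.config.num_attention_heads)

inputs = tokenizer(
    "Had the weather been better, the launch would have gone ahead.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = encoder(**inputs)

# outputs.attentions is a tuple with one tensor per layer; use the final layer here.
logits = head(outputs.attentions[-1])
print(logits.shape)  # torch.Size([1, 2])

For the knowledge-sharing setup mentioned in the abstract, the same frozen or fine-tuned encoder would serve as a common base, with a separate task-specific head per sub-task; that multi-head arrangement is omitted from this sketch for brevity.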

