Structured Self-Attention Weights Encode Semantics in Sentiment Analysis

10/10/2020
by Zhengxuan Wu, et al.

Neural attention, especially the self-attention made popular by the Transformer, has become the workhorse of state-of-the-art natural language processing (NLP) models. Recent work suggests that self-attention in the Transformer encodes syntactic information; here, we show that self-attention scores encode semantics, using sentiment analysis tasks as a testbed. In contrast to gradient-based feature attribution methods, we propose a simple and effective Layer-wise Attention Tracing (LAT) method to analyze structured attention weights. We apply our method to Transformer models trained on two tasks that differ on the surface but share common semantics: sentiment analysis of movie reviews and time-series valence prediction in life story narratives. Across both tasks, words with high aggregated attention weights were rich in emotional semantics, as quantitatively validated against an emotion lexicon labeled by human annotators. Our results show that structured attention weights encode rich semantics in sentiment analysis and match human interpretations of those semantics.
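To give a concrete feel for what aggregating self-attention weights layer by layer can look like, here is a minimal sketch in Python. It is an illustrative aggregation over a Transformer's per-layer attention matrices, not the authors' LAT implementation; the bert-base-uncased checkpoint, the averaging over heads, and the column-sum aggregation are assumptions chosen for brevity.

```python
# Illustrative sketch (not the paper's exact LAT method): score each input
# token by the total attention it receives, summed across layers and averaged
# over heads. Assumes a Hugging Face-style model that returns per-layer
# attention tensors of shape (batch, heads, seq_len, seq_len).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def aggregate_attention(text: str):
    """Return tokens ranked by attention received, aggregated across layers."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: tuple with one (1, heads, seq, seq) tensor per layer.
    scores = torch.zeros(inputs["input_ids"].shape[1])
    for layer_attn in outputs.attentions:
        per_head_avg = layer_attn.mean(dim=1)[0]  # (seq, seq), averaged over heads
        scores += per_head_avg.sum(dim=0)         # attention received per token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda t: -t[1])

print(aggregate_attention("The movie was heartbreakingly beautiful.")[:5])
```

Under this kind of aggregation, the claim in the abstract is that emotionally loaded words (e.g., "heartbreakingly", "beautiful") tend to accumulate the most attention mass in sentiment-trained models.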

Related research

03/12/2020: Sentiment Analysis with Contextual Embeddings and Self-Attention
In natural language the intended meaning of a word or phrase is often im...

12/19/2018: Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers
Sentiment Analysis has seen much progress in the past two decades. For t...

02/15/2021: A Koopman Approach to Understanding Sequence Neural Models
We introduce a new approach to understanding trained sequence neural mod...

02/11/2022: Hindi/Bengali Sentiment Analysis Using Transfer Learning and Joint Dual Input Learning with Self Attention
Sentiment Analysis typically refers to using natural language processing...

10/29/2019: An Efficient Model for Sentiment Analysis of Electronic Product Reviews in Vietnamese
In the past few years, the growth of e-commerce and digital marketing in...

04/23/2020: Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
The great success of Transformer-based models benefits from the powerful...

08/12/2019: On the Validity of Self-Attention as Explanation in Transformer Models
Explainability of deep learning systems is a vital requirement for many ...
