Subjective Question Answering: Deciphering the inner workings of Transformers in the realm of subjectivity

06/02/2020
by   Lukas Muttenthaler, et al.

Understanding subjectivity demands reasoning skills beyond the realm of common knowledge: it requires a machine learning model to process sentiment and to perform opinion mining. In this work, I exploit a recently released dataset for span-selection Question Answering, namely SubjQA. SubjQA is the first QA dataset that contains questions asking for subjective opinions about review paragraphs from six different domains. Hence, to answer these subjective questions, a learner must extract opinions and process sentiment across domains, and additionally align the knowledge extracted from a paragraph with the natural language utterances in the corresponding question, which together increase the difficulty of the QA task.

The primary goal of this thesis was to investigate the inner workings (i.e., latent representations) of a Transformer-based architecture, to contribute to a better understanding of these not yet well understood "black-box" models. The analysis shows that the Transformer's hidden representations of the true answer span are clustered more closely in vector space than the representations corresponding to erroneous predictions. This observation holds across the top three Transformer layers for both objective and subjective questions, and generally strengthens in the higher layers. Moreover, the probability of achieving high cosine similarity among the hidden representations of the true answer span tokens is significantly higher for correct than for incorrect answer span predictions. These results have decisive implications for downstream applications, where it is crucial to know why a neural network made a mistake and at which point in space and time the error occurred (e.g., to automatically predict the correctness of an answer span prediction without requiring labeled data).
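The clustering analysis described above can be sketched as follows: given a layer's hidden states, compute the mean pairwise cosine similarity among the token vectors inside an answer span, then compare that score for spans from correct versus incorrect predictions. This is a minimal illustration, not the thesis's actual code; the random hidden states and the span indices below are placeholders for real Transformer layer outputs and predicted spans.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_pairwise_cos(hidden_states, span):
    # average cosine similarity among the hidden representations of the
    # tokens inside an answer span; span = (start, end), end inclusive
    vecs = hidden_states[span[0]: span[1] + 1]
    sims = [cosine_sim(vecs[i], vecs[j])
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))]
    # a single-token span is trivially self-similar
    return float(np.mean(sims)) if sims else 1.0

# toy example: 6 token vectors of dimension 8 stand in for one layer's
# hidden states; the span (2, 4) plays the role of a predicted answer span
rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 8))
score = mean_pairwise_cos(hidden, (2, 4))
```

In practice one would extract `hidden_states` per layer from the fine-tuned model and compare the score distributions of correct and erroneous span predictions, which is what makes the unsupervised correctness signal mentioned in the abstract possible.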

10/07/2020

Unsupervised Evaluation for Question Answering with Transformers

It is challenging to automatically evaluate the answer of a QA model at ...
04/29/2020

SubjQA: A Dataset for Subjectivity and Review Comprehension

Subjectivity is the expression of internal opinions or beliefs which can...
10/25/2016

Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems

Product review websites provide an incredible lens into the wide variety...
05/15/2023

MeeQA: Natural Questions in Meeting Transcripts

We present MeeQA, a dataset for natural-language question answering over...
10/21/2020

RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering

State-of-the-art Machine Reading Comprehension (MRC) models for Open-dom...
09/09/2022

Activity report analysis with automatic single or multispan answer extraction

In the era of IoT (Internet of Things) we are surrounded by a plethora o...
11/04/2016

Learning Recurrent Span Representations for Extractive Question Answering

The reading comprehension task, that asks questions about a given eviden...
