
Embarrassingly Simple Performance Prediction for Abductive Natural Language Inference

02/21/2022
by   Emīls Kadiķis, et al.
University of Stuttgart

The task of abductive natural language inference (αNLI), deciding which hypothesis is the more likely explanation for a set of observations, is a particularly difficult type of NLI. Instead of merely determining a causal relationship, it requires common sense to also evaluate how reasonable an explanation is. All recent competitive systems build on contextualized representations and use transformer architectures to learn an NLI model. Anyone faced with a particular NLI task must select the best available model, which is a time-consuming and resource-intensive endeavour. To address this practical problem, we propose a simple method for predicting a model's performance without actually fine-tuning it. We do this by measuring how well a pre-trained model performs on the αNLI task when sentence embeddings are simply compared via cosine similarity, and relating this to the performance achieved when a classifier is trained on top of those embeddings. We show that the accuracy of the cosine similarity approach correlates strongly with the accuracy of the classification approach, with a Pearson correlation coefficient of 0.65. Since the similarity computation is orders of magnitude faster on a given dataset (less than a minute vs. hours), our method can lead to significant time savings during model selection.
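The cosine-similarity probe described above can be sketched as follows. This is a minimal illustration with toy vectors standing in for a pre-trained model's sentence embeddings (the embedding values and function names here are placeholders, not the paper's actual setup): given embeddings of the observations and of each candidate hypothesis, the more likely explanation is simply the hypothesis whose embedding is closest to the observations under cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_hypothesis(obs_emb, hyp_embs):
    """Return the index of the hypothesis embedding most similar
    to the observation embedding, plus all similarity scores."""
    sims = [cosine_similarity(obs_emb, h) for h in hyp_embs]
    return int(np.argmax(sims)), sims

# Toy embeddings; in practice these would come from a pre-trained
# sentence encoder applied to the observations and hypotheses.
obs = np.array([0.9, 0.1, 0.2])
h1  = np.array([0.8, 0.2, 0.1])   # semantically close to the observations
h2  = np.array([0.1, 0.9, 0.7])   # semantically distant

idx, sims = pick_hypothesis(obs, [h1, h2])
print(idx)  # 0 -> the first hypothesis is selected
```

Scoring a whole dataset this way requires only one embedding pass and a few dot products per instance, which is why the probe runs in under a minute while fine-tuning takes hours.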

