Evaluation Evaluation a Monte Carlo study

04/03/2015
by David M. W. Powers, et al.

Over the last decade there has been increasing concern about the biases embodied in traditional evaluation methods for Natural Language Processing/Learning, particularly methods borrowed from Information Retrieval. Without knowledge of the Bias and Prevalence of the contingency being tested, or equivalently the expectation due to chance, the simple conditional probabilities Recall, Precision and Accuracy are not meaningful as evaluation measures, either individually or in combinations such as F-factor. The existence of bias in NLP measures encourages the 'improvement' of systems by increasing their bias, such as the practice of improving tagging and parsing scores by always assigning the most common value (e.g. water is always a Noun) rather than attempting to discover the correct one. The measures Cohen's Kappa and Powers' Informedness are discussed as unbiased alternatives to Recall and related to the psychologically significant measure DeltaP. In this paper we analyze both biased and unbiased measures theoretically, characterizing the precise relationships among all these measures, as well as evaluating the evaluation measures themselves empirically using a Monte Carlo simulation.
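To make the contrast concrete, the sketch below is an illustrative reconstruction rather than code from the paper: the function names and the parameter choices (prevalence 0.7, bias 0.9, 1000 trials of 1000 items) are my own assumptions. It computes Recall, Precision, Accuracy, F1, Cohen's Kappa, Informedness and Markedness from a 2x2 contingency table, where Informedness is Recall + Inverse Recall - 1 (DeltaP') and Markedness is Precision + Inverse Precision - 1 (DeltaP). A miniature Monte Carlo run in the spirit of the study then shows that a predictor operating independently of the gold labels still posts respectable Recall, Precision and F1, while the chance-corrected measures stay near zero.

```python
import numpy as np

def binary_measures(tp, fp, fn, tn):
    """Common evaluation measures from a 2x2 contingency table."""
    n = tp + fp + fn + tn
    recall    = tp / (tp + fn)                          # sensitivity
    precision = tp / (tp + fp)                          # positive predictive value
    accuracy  = (tp + tn) / n
    f1        = 2 * precision * recall / (precision + recall)
    inverse_recall    = tn / (tn + fp)                  # specificity
    inverse_precision = tn / (tn + fn)
    informedness = recall + inverse_recall - 1          # Powers' Informedness (DeltaP')
    markedness   = precision + inverse_precision - 1    # Markedness (DeltaP)
    expected_acc = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - expected_acc) / (1 - expected_acc)   # Cohen's Kappa
    return dict(recall=recall, precision=precision, accuracy=accuracy, f1=f1,
                informedness=informedness, markedness=markedness, kappa=kappa)

# A biased "tagger" that almost always predicts the majority class
# (e.g. "water is always a Noun") on data with 90% prevalence:
print(binary_measures(tp=85, fp=9, fn=5, tn=1))
# Recall, Precision, F1 and Accuracy all look strong, but Informedness and
# Kappa stay close to 0, exposing the predictions as barely better than chance.

# Monte Carlo check: predictions drawn independently of the gold labels
# (prevalence 0.7, bias 0.9) are pure chance, so the chance-corrected
# measures should average ~0 while the raw conditional probabilities do not.
rng = np.random.default_rng(0)
trials, n = 1000, 1000
totals = {}
for _ in range(trials):
    gold = rng.random(n) < 0.7
    pred = rng.random(n) < 0.9
    m = binary_measures(tp=np.sum(gold & pred),  fp=np.sum(~gold & pred),
                        fn=np.sum(gold & ~pred), tn=np.sum(~gold & ~pred))
    for k, v in m.items():
        totals[k] = totals.get(k, 0.0) + v / trials
print({k: round(v, 3) for k, v in totals.items()})
# Approximate expectations: recall ~0.9, precision ~0.7, f1 ~0.79,
# accuracy ~0.66, informedness ~0, markedness ~0, kappa ~0.
```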
