Not All Claims are Created Equal: Choosing the Right Approach to Assess Your Hypotheses

11/10/2019
by Erfan Sadeqi Azer et al.

Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues. While alternative proposals have been well-debated and adopted in other fields, they remain rarely discussed or used within the NLP community. We address this gap by contrasting various hypothesis assessment techniques, especially those not commonly used in the field (such as evaluations based on Bayesian inference). Since these statistical techniques differ in the hypotheses they can support, we argue that practitioners should first decide their target hypothesis before choosing an assessment method. This is crucial because common fallacies, misconceptions, and misinterpretations surrounding hypothesis assessment methods often stem from a discrepancy between what one would like to claim and what the method used actually assesses. Our survey reveals that these issues are omnipresent in the NLP research community. As a step forward, we provide best practices and guidelines tailored to NLP research, as well as an easy-to-use package called 'HyBayes' for Bayesian assessment of hypotheses, complementing existing tools.
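To make the contrast concrete, the sketch below (plain Python with NumPy; it does not use the paper's HyBayes package, and the per-example correctness data are synthetic placeholders) evaluates the same paired comparison of two hypothetical NLP systems in two ways: a frequentist permutation test that yields a p-value, and a simple Bayesian Beta-Binomial analysis that yields the posterior probability of the directional claim that one system is more accurate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example correctness (True = correct) for two hypothetical
# NLP systems evaluated on the same 500-instance test set.
n = 500
sys_a = rng.random(n) < 0.78
sys_b = rng.random(n) < 0.74

# (a) Frequentist: paired (sign-flip) permutation test on the accuracy
# difference. The p-value measures how surprising the observed difference
# would be if the two systems were interchangeable; it is not the
# probability that system A is better.
observed = sys_a.mean() - sys_b.mean()
n_perms = 10_000
hits = 0
for _ in range(n_perms):
    swap = rng.random(n) < 0.5                 # randomly swap each pair
    a = np.where(swap, sys_b, sys_a)
    b = np.where(swap, sys_a, sys_b)
    if abs(a.mean() - b.mean()) >= abs(observed):
        hits += 1
p_value = hits / n_perms

# (b) Bayesian: independent Beta(1, 1) priors on each system's accuracy
# give Beta posteriors; Monte Carlo samples from those posteriors estimate
# the probability of the directional hypothesis "A is more accurate than B".
post_a = rng.beta(1 + sys_a.sum(), 1 + (~sys_a).sum(), size=100_000)
post_b = rng.beta(1 + sys_b.sum(), 1 + (~sys_b).sum(), size=100_000)
prob_a_better = (post_a > post_b).mean()

print(f"accuracy A = {sys_a.mean():.3f}, accuracy B = {sys_b.mean():.3f}")
print(f"permutation p-value (two-sided): {p_value:.4f}")
print(f"posterior P(acc_A > acc_B) = {prob_a_better:.3f}")
```

Note the interpretive gap: a small p-value only says the observed difference would be surprising under the null hypothesis of no difference, whereas the posterior probability addresses the directional hypothesis itself. Conflating the two is precisely the kind of misinterpretation the paper surveys.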


