GLEU Without Tuning

05/09/2016
by Courtney Napoles, et al.

The GLEU metric was proposed for evaluating grammatical error corrections using n-gram overlap with a set of reference sentences, as opposed to precision/recall of specific annotated errors (Napoles et al., 2015). This paper describes improvements made to the GLEU metric that address problems that arise when using an increasing number of reference sets. Unlike the originally presented metric, the modified metric does not require tuning. We recommend that this version be used instead of the original version.
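For intuition, the scoring idea behind GLEU can be sketched as follows: a hypothesis is rewarded for n-grams it shares with a reference and penalized for n-grams it retains from the source that the reference rewrites. The Python below is a minimal, single-reference sketch under those assumptions; it is not the authors' released implementation, it omits the multi-reference handling that this paper revises, and all names in it are illustrative.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return the n-grams of a token list as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def gleu_sentence(source, hypothesis, reference, max_n=4):
    """Toy single-reference GLEU-style score for one tokenized sentence.

    Rewards hypothesis n-grams that appear in the reference and penalizes
    n-grams the hypothesis shares with the source that the reference
    does not contain (i.e., errors left uncorrected).
    """
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp, ref, src = ngrams(hypothesis, n), ngrams(reference, n), ngrams(source, n)
        matches = sum((hyp & ref).values())          # clipped reference matches
        penalty = sum(((hyp & src) - ref).values())  # source n-grams the reference rewrote
        total = max(sum(hyp.values()), 1)
        p_n = max(matches - penalty, 0) / total
        log_precisions.append(math.log(p_n) if p_n > 0 else float("-inf"))
    # BLEU-style brevity penalty
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)

# A perfect correction scores 1.0; copying the erroneous source scores lower.
src = "this are a sentence".split()
ref = "this is a sentence".split()
print(gleu_sentence(src, "this is a sentence".split(), ref))  # 1.0
print(gleu_sentence(src, src, ref))                           # < 1.0
```

With several references, the tuning-free formulation described in this paper averages sentence scores over repeated random draws of one reference per sentence, rather than tuning a weight; that step is left out of the sketch above.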
