What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP

05/28/2020
by Oskar Wysocki, et al.

SemEval is the primary venue in the NLP community for proposing new shared-task challenges and for the systematic empirical evaluation of NLP systems. This paper provides a systematic quantitative analysis of SemEval, aiming to reveal the patterns behind its contributions. By examining the distribution of task types, metrics, architectures, participation and citations over time, we aim to answer the question of what is being evaluated by SemEval.
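The kind of quantitative analysis described in the abstract amounts to tabulating task metadata across SemEval editions. As a minimal illustrative sketch (not the paper's code), the following Python snippet shows how a per-year distribution of task types could be tallied; the file name semeval_tasks.csv and the columns year and task_type are hypothetical placeholders for whatever metadata one has collected.

```python
# Minimal sketch: tally SemEval task types per year from a hypothetical CSV
# with columns "year" and "task_type" (both names are assumptions, not the
# paper's actual data schema).
import csv
from collections import Counter, defaultdict

def task_type_distribution(path: str) -> dict[int, Counter]:
    """Count how often each task type appears in each SemEval edition."""
    per_year: dict[int, Counter] = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            per_year[int(row["year"])][row["task_type"]] += 1
    return per_year

if __name__ == "__main__":
    for year, counts in sorted(task_type_distribution("semeval_tasks.csv").items()):
        print(year, counts.most_common(3))  # top task types per edition
```

The same tallying pattern extends directly to the other dimensions the paper studies (metrics, architectures, participation, citations) by swapping the column being counted.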


Related research

07/16/2021
Architectures of Meaning, A Systematic Corpus Analysis of NLP Systems
This paper proposes a novel statistical corpus analysis framework target...

08/16/2019
An Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing
Multi-Task Learning (MTL) aims at boosting the overall performance of ea...

09/25/2022
Corpus-based Metaphor Analysis through Graph Theoretical Methods
As a contribution to metaphor analysis, we introduce a statistical, data...

07/25/2018
Evaluating Creativity in Computational Co-Creative Systems
This paper provides a framework for evaluating creativity in co-creative...

10/06/2020
A Survey on Recognizing Textual Entailment as an NLP Evaluation
Recognizing Textual Entailment (RTE) was proposed as a unified evaluatio...

12/20/2022
Evaluation for Change
Evaluation is the central means for assessing, understanding, and commun...

03/31/2022
On the Evaluation of NLP-based Models for Software Engineering
NLP-based models have been increasingly incorporated to address SE probl...
