DynaSent: A Dynamic Benchmark for Sentiment Analysis

by Christopher Potts et al.

We introduce DynaSent ('Dynamic Sentiment'), a new English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. DynaSent combines naturally occurring sentences with sentences created using the open-source Dynabench Platform, which facilitates human-and-model-in-the-loop dataset creation. DynaSent has a total of 121,634 sentences, each validated by five crowdworkers, and its development and test splits are designed to produce chance performance for even the best models we have been able to develop; when future models solve this task, we will use them to create DynaSent version 2, continuing the dynamic evolution of this benchmark. Here, we report on the dataset creation effort, focusing on the steps we took to increase quality and reduce artifacts. We also present evidence that DynaSent's Neutral category is more coherent than the comparable category in other benchmarks, and we motivate training models from scratch for each round over successive fine-tuning.
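The validation scheme, five crowdworker labels per sentence, implies an aggregation step that assigns a gold label only when annotators sufficiently agree. A minimal sketch of majority-vote aggregation; the function name and the three-of-five agreement threshold are illustrative assumptions, not necessarily the paper's exact procedure:

```python
from collections import Counter
from typing import Optional


def aggregate_label(worker_labels: list[str], min_agreement: int = 3) -> Optional[str]:
    """Return the majority label if at least `min_agreement` of the
    crowdworkers chose it; otherwise return None (no gold label).

    The three-of-five threshold is an assumption for illustration.
    """
    label, count = Counter(worker_labels).most_common(1)[0]
    return label if count >= min_agreement else None


# Five crowdworker responses for two hypothetical sentences:
print(aggregate_label(["positive", "positive", "neutral", "positive", "positive"]))   # positive
print(aggregate_label(["positive", "negative", "neutral", "positive", "negative"]))  # None
```

Sentences with no majority label can still be useful: under a human-and-model-in-the-loop protocol, low-agreement items are natural candidates for exclusion from gold splits or for further review.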





Code Repositories


DynaSent: Dynamic Sentiment Analysis Dataset
