
CIS2: A Simplified Commonsense Inference Evaluation for Story Prose

by   Bryan Li, et al.

Transformers have shown near-human performance on a variety of tasks, but they are not without limitations. We discuss the issue of conflating results from transformers that are instructed to perform multiple tasks simultaneously. In particular, we focus on commonsense reasoning within story prose, which we call contextual commonsense inference (CCI). We examine the GLUCOSE (Mostafazadeh et al., 2020) dataset and its task of predicting implicit commonsense inferences between story sentences. Because the GLUCOSE task simultaneously generates sentences and predicts the CCI relation, its results are conflated: is the model really being measured on CCI, or is its ability to generate grammatical text carrying the results? In this paper, we introduce contextual commonsense inference in sentence selection (CIS^2), a simplified task that avoids this conflation by eliminating language generation altogether. Our findings emphasize the need for future work to disentangle language generation from the NLP tasks at hand.
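To make the sentence-selection idea concrete, here is a minimal sketch of how a generation-style CCI instance might be recast as selection. The data format, function name, and the `>Causes/Enables>` relation string are illustrative assumptions for this sketch, not the paper's exact schema: the key point is that the target becomes indices into the story's sentences, so there is no free-form text for the model to generate.

```python
def to_selection_instance(story_sentences, cause_sentence, effect_sentence):
    """Recast a generation-style commonsense inference as sentence selection.

    Instead of asking a model to generate the cause and effect sentences,
    the label refers to them by their index in the story, so evaluation
    no longer depends on text-generation quality. (Illustrative format.)
    """
    cause_idx = story_sentences.index(cause_sentence)
    effect_idx = story_sentences.index(effect_sentence)
    # Hypothetical label format: two sentence indices joined by a relation.
    return f"#{cause_idx} >Causes/Enables> #{effect_idx}"


# Toy story; indices 0-2 identify its sentences.
story = [
    "Gage was riding his bike.",
    "A car turned in front of him.",
    "Gage crashed into the car.",
]
label = to_selection_instance(story, story[1], story[2])
# label is "#1 >Causes/Enables> #2" -- a pure selection target.
```

With labels of this form, a model's output can be scored by exact index matching, sidestepping the conflation between grammatical fluency and commonsense inference that the abstract describes.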



