Learning to Predict Concept Ordering for Common Sense Generation

09/12/2023
by Tianhui Zhang, et al.

Prior work has shown that the order in which concepts are presented to a commonsense generator plays an important role, affecting the quality of the generated sentence. However, it remains a challenge to determine the optimal ordering of a given set of concepts such that a pretrained generator can produce a natural sentence covering all of them. To understand the relationship between the ordering of the input concepts and the quality of the generated sentences, we conduct a systematic study considering multiple language models (LMs) and concept ordering strategies. We find that, as measured by multiple evaluation metrics, the BART-large model consistently outperforms all other LMs considered in this study when fine-tuned using the ordering of concepts as they appear in the CommonGen training data. Moreover, the larger GPT-3-based large language model (LLM) variants do not necessarily outperform much smaller LMs on this task, even when fine-tuned on task-specific training data. Interestingly, human annotators significantly reorder input concept sets when manually writing sentences covering those concepts, and this ordering yields the best sentence generations independently of the LM used for the generation, outperforming a probabilistic concept ordering baseline.
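The setup described above is straightforward to probe with off-the-shelf tooling. The sketch below (ours, not the authors' code) feeds each permutation of a small concept set to a BART seq2seq model and prints the generated sentences, making the effect of input ordering directly observable. The facebook/bart-large checkpoint is a placeholder assumption; the paper's setting presumes a model fine-tuned on CommonGen-style "concept1 concept2 ..." inputs.

```python
# Minimal sketch: probe how concept ordering affects generation.
# Assumes a BART checkpoint fine-tuned on CommonGen-style inputs;
# facebook/bart-large is loaded here purely as a placeholder.
from itertools import permutations

from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large"  # placeholder; a CommonGen fine-tune is assumed
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

concepts = ["dog", "frisbee", "catch", "throw"]

# Generate one sentence per concept ordering and compare the outputs.
for order in permutations(concepts):
    prompt = " ".join(order)
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
    sentence = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    print(f"{prompt!r} -> {sentence!r}")
```

With an actual CommonGen fine-tune, the variation in outputs across permutations is exactly the ordering effect the study measures.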
