The Extractive-Abstractive Axis: Measuring Content "Borrowing" in Generative Language Models

07/20/2023
by Nedelina Teneva, et al.

Generative language models produce highly abstractive outputs by design, in contrast to the extractive responses returned by search engines. Given this characteristic of LLMs and its implications for content licensing and attribution, we propose the so-called Extractive-Abstractive axis for benchmarking generative models and highlight the need for corresponding metrics, datasets, and annotation guidelines. We limit our discussion to the text modality.
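As an illustration of what a metric along such an axis might look like (this is a sketch, not the metric proposed in the paper), one can measure how much of a model's output is copied verbatim from a source document using greedy fragment matching in the spirit of extractive "coverage" and "density" scores. The function names and the example strings below are hypothetical.

```python
def extractive_fragments(source_tokens, output_tokens):
    """Greedily find maximal token spans of the output that also appear in the source."""
    fragments = []
    i = 0
    while i < len(output_tokens):
        best = 0
        for j in range(len(source_tokens)):
            k = 0
            while (i + k < len(output_tokens)
                   and j + k < len(source_tokens)
                   and output_tokens[i + k] == source_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            fragments.append(output_tokens[i:i + best])
            i += best
        else:
            i += 1
    return fragments


def extractive_scores(source: str, output: str):
    """Return (coverage, density): higher values mean more extractive output."""
    src = source.lower().split()
    out = output.lower().split()
    frags = extractive_fragments(src, out)
    coverage = sum(len(f) for f in frags) / max(len(out), 1)      # fraction of output tokens copied
    density = sum(len(f) ** 2 for f in frags) / max(len(out), 1)  # rewards long copied spans
    return coverage, density


if __name__ == "__main__":
    source = "Generative language models produce highly abstractive outputs by design."
    output = "By design, generative language models produce abstractive text."
    print(extractive_scores(source, output))
```

An output that quotes long passages from its sources would score near the extractive end of the axis, while a heavily paraphrased output would score near the abstractive end.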


