Heroes, Villains, and Victims, and GPT-3: Automated Extraction of Character Roles Without Training Data

05/16/2022
by Dominik Stammbach, et al.

This paper shows how to use large-scale pre-trained language models to extract character roles from narrative texts without training data. Queried with a zero-shot question-answering prompt, GPT-3 can identify the hero, villain, and victim in diverse domains: newspaper articles, movie plot summaries, and political speeches.
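The extraction setup described above amounts to a prompt-and-complete loop: the narrative text is followed by a question asking who plays a given role, and the model's completion is taken as the answer. The sketch below is a minimal illustration of that idea, not the paper's exact prompt or model; it assumes the legacy pre-1.0 `openai` Python package, a davinci-class GPT-3 engine (`text-davinci-002`), and an API key in the `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch of zero-shot character-role extraction with GPT-3.
# Assumptions (not from the paper): legacy pre-1.0 `openai` package,
# `text-davinci-002` engine, and illustrative prompt wording.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def extract_role(text: str, role: str) -> str:
    """Ask GPT-3 who plays `role` (hero, villain, or victim) in `text`."""
    prompt = (
        f"{text}\n\n"
        f"Question: Who is the {role} in the text above?\n"
        f"Answer:"
    )
    response = openai.Completion.create(
        engine="text-davinci-002",  # davinci-class GPT-3 model
        prompt=prompt,
        max_tokens=16,
        temperature=0.0,            # deterministic output for extraction
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    plot = "Luke Skywalker rescues Princess Leia from Darth Vader."
    for role in ("hero", "villain", "victim"):
        print(role, "->", extract_role(plot, role))
```

Because no labeled examples appear in the prompt, the same loop transfers unchanged across domains such as news articles, movie plot summaries, and political speeches.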


