Zero-Shot Ranking Socio-Political Texts with Transformer Language Models to Reduce Close Reading Time

10/17/2022
by Kiymet Akdemir, et al.

We approach the classification problem as an entailment problem and apply zero-shot ranking to socio-political texts. Documents ranked at the top can be treated as positively classified, which reduces the close reading time required for information extraction. We use Transformer language models to obtain entailment probabilities and investigate different types of queries. We find that DeBERTa achieves higher mean average precision scores than RoBERTa, and that using the declarative form of the class label as a query outperforms using its dictionary definition. We show that close reading time can be reduced by reading only a top percentage of the ranked documents, where the percentage depends on the desired recall. However, our findings also show that the percentage of documents that must be read increases as the topic gets broader.
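The core workflow in the abstract can be sketched in a few lines: rank documents by their entailment probability, then measure how far down the ranked list a reader must go to hit a target recall. The scores and labels below are hypothetical stand-ins; in the paper, scores would come from an NLI model such as DeBERTa scoring each document against a declarative query (e.g. "This text is about protests.").

```python
def rank_documents(scores):
    """Return document indices sorted by entailment probability, highest first."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

def fraction_to_read(scores, labels, target_recall):
    """Smallest fraction of the ranked list that must be read to reach target_recall.

    labels[i] is 1 if document i is truly relevant, else 0.
    """
    order = rank_documents(scores)
    total_positives = sum(labels)
    found = 0
    for k, idx in enumerate(order, start=1):
        found += labels[idx]
        if found / total_positives >= target_recall:
            return k / len(scores)
    return 1.0

# Hypothetical entailment probabilities and gold relevance labels.
scores = [0.91, 0.15, 0.78, 0.40, 0.88, 0.05]
labels = [1,    0,    1,    0,    1,    0]
print(fraction_to_read(scores, labels, target_recall=1.0))  # 0.5: top half suffices
```

With a well-calibrated entailment model, relevant documents cluster near the top, so full recall is reached after reading only a fraction of the collection; as the topic broadens, that fraction grows, matching the paper's observation.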


Related research

05/30/2022 - Billions of Parameters Are Worth More Than In-domain Training Data: A case study in the Legal Case Entailment Task
Recent work has shown that language models scaled to billions of paramet...

05/03/2023 - Zero-Shot Listwise Document Reranking with a Large Language Model
Supervised ranking methods based on bi-encoder or cross-encoder architec...

05/23/2023 - Navigating Prompt Complexity for Zero-Shot Classification: A Study of Large Language Models in Computational Social Science
Instruction-tuned Large Language Models (LLMs) have exhibited impressive...

10/29/2022 - Beyond prompting: Making Pre-trained Language Models Better Zero-shot Learners by Clustering Representations
Recent work has demonstrated that pre-trained language models (PLMs) are...

04/22/2022 - Zero and Few-shot Learning for Author Profiling
Author profiling classifies author characteristics by analyzing how lang...

05/16/2022 - Heroes, Villains, and Victims, and GPT-3: Automated Extraction of Character Roles Without Training Data
This paper shows how to use large-scale pre-trained language models to e...

11/25/2021 - Near-Zero-Shot Suggestion Mining with a Little Help from WordNet
In this work, we explore the constructive side of online reviews: advice...
