Unsupervised Contrast-Consistent Ranking with Language Models

09/13/2023
by Niklas Stoehr, et al.

Language models contain ranking-based knowledge and are powerful solvers of in-context ranking tasks. For instance, they may have parametric knowledge about the ordering of countries by size or may be able to rank reviews by sentiment. Recent work focuses on pairwise, pointwise, and listwise prompting techniques to elicit a language model's ranking knowledge. However, we find that even with careful calibration and constrained decoding, prompting-based techniques may not always be self-consistent in the rankings they produce. This motivates us to explore an alternative approach that is inspired by an unsupervised probing method called Contrast-Consistent Search (CCS). The idea is to train a probing model guided by a logical constraint: a model's representation of a statement and its negation must be mapped to contrastive true-false poles consistently across multiple statements. We hypothesize that similar constraints apply to ranking tasks where all items are related via consistent pairwise or listwise comparisons. To this end, we extend the binary CCS method to Contrast-Consistent Ranking (CCR) by adapting existing ranking methods such as the Max-Margin Loss, Triplet Loss, and Ordinal Regression objective. Our results confirm that, for the same language model, CCR probing outperforms prompting and even performs on a par with prompting much larger language models.
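To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a contrast-consistent ranking probe trained with a max-margin term plus a CCS-style consistency term. This is not the authors' released code: the names `LinearProbe`, `ccr_max_margin_loss`, `h_pos`, and `h_neg` are illustrative assumptions, and random tensors stand in for the language model's hidden states of each statement and its negation.

```python
# Hypothetical sketch of a contrast-consistent ranking (CCR) probe.
# Assumes we already extracted hidden states for each item i:
#   h_pos[i] = representation of a statement asserting the ranking
#   h_neg[i] = representation of its negation
import torch
import torch.nn as nn


class LinearProbe(nn.Module):
    """Maps a hidden-state vector to a scalar ranking score."""

    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, dim)
        return self.w(h).squeeze(-1)                      # (batch,) scores


def ccr_max_margin_loss(probe, h_pos, h_neg, margin=1.0):
    s_pos = probe(h_pos)  # scores of statements
    s_neg = probe(h_neg)  # scores of their negations
    # Ranking term: each statement should outscore its negation by a margin.
    rank = torch.relu(margin - (s_pos - s_neg)).mean()
    # Consistency term (CCS-inspired, adapted here to raw scores): a statement
    # and its negation should sit at opposite poles, i.e. sum to roughly zero.
    consistency = ((s_pos + s_neg) ** 2).mean()
    return rank + consistency


# Toy usage with random features in place of real LM hidden states.
dim = 768
probe = LinearProbe(dim)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
h_pos, h_neg = torch.randn(32, dim), torch.randn(32, dim)
for _ in range(100):
    opt.zero_grad()
    loss = ccr_max_margin_loss(probe, h_pos, h_neg)
    loss.backward()
    opt.step()
```

In the paper's setting, the probe would be trained on representations extracted from the language model for each statement and its negation; the random tensors above only keep the sketch self-contained.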

research · 01/16/2023
PromptShots at the FinNLP-2022 ERAI Tasks: Pairwise Comparison and Unsupervised Ranking
This report describes our PromptShots submissions to a shared task on Ev...

research · 09/19/2019
ASU at TextGraphs 2019 Shared Task: Explanation ReGeneration using Language Models and Iterative Re-Ranking
In this work we describe the system from Natural Language Processing gro...

research · 03/25/2021
Deep Similarity Learning for Sports Team Ranking
Sports data is more readily available and consequently, there has been a...

research · 10/02/2019
The merits of Universal Language Model Fine-tuning for Small Datasets – a case with Dutch book reviews
We evaluated the effectiveness of using language models, that were pre-t...

research · 01/08/2023
InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers
We carried out a reproducibility study of InPars recipe for unsupervised...

research · 10/28/2022
Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models
Fully-parametric language models generally require a huge number of mode...

research · 07/06/2023
PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations
Nowadays, the quality of responses generated by different modern large l...
