kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference

03/24/2023
by Benfeng Xu, et al.

In-Context Learning (ICL), which formulates target tasks as prompt completion conditioned on in-context demonstrations, has become the prevailing way of using LLMs. In this paper, we first identify a practical limitation of this typical usage: it cannot scale up with training data because of the context length restriction. Moreover, existing work has shown that ICL suffers from various biases and requires careful calibration. To address both challenges, we advocate a simple and effective solution, kNN Prompting, which first queries the LLM with training data to obtain distributed representations, then predicts test instances by simply referring to their nearest neighbors. We conduct comprehensive experiments to demonstrate its two-fold superiority: 1) Calibration-Free: kNN Prompting does not directly align the LLM output distribution with the task-specific label space; instead, it leverages that distribution to align test and training instances. It significantly outperforms state-of-the-art calibration-based methods under comparable few-shot settings. 2) Beyond-Context: kNN Prompting scales effectively with as much training data as is available, continually bringing substantial improvements. The scaling trend holds across shot counts spanning ten powers of two, from 2 shots to 1024 shots, as well as LLM scales ranging from 0.8B to 30B parameters. It successfully bridges data scaling into model scaling and opens new potential for the gradient-free paradigm of LLM deployment. Code is publicly available.
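
As a rough illustration of the idea, the sketch below represents each training example by the LLM output distribution obtained from a demonstration-prefixed prompt, then classifies a test instance by majority vote over its k nearest training anchors. This is a minimal sketch, not the authors' released code: the use of KL divergence as the distance, the helper names, and the toy data are assumptions made here for clarity.

```python
# Illustrative sketch of kNN Prompting (assumptions noted above, not the official code).
# Assumption: each anchor/test "distribution" is the LLM's next-token probability
# vector produced by prompting it with in-context demonstrations plus one input.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two probability vectors, clipped for numerical stability."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def knn_prompting_predict(test_dist, anchor_dists, anchor_labels, k=3):
    """Label a test instance by majority vote over the k training anchors whose
    LLM output distributions are closest (smallest KL divergence) to the test one."""
    distances = [kl_divergence(test_dist, a) for a in anchor_dists]
    nearest = np.argsort(distances)[:k]
    votes = [anchor_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy usage: random vectors stand in for real LLM output distributions.
rng = np.random.default_rng(0)
anchor_dists = [rng.dirichlet(np.ones(64)) for _ in range(16)]
anchor_labels = ["positive", "negative"] * 8
test_dist = rng.dirichlet(np.ones(64))
print(knn_prompting_predict(test_dist, anchor_dists, anchor_labels, k=3))
```

Because building the anchor set only requires forward passes over the training data, the method can keep absorbing training examples without enlarging the prompt, which is the "beyond-context" property the abstract highlights.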

Related research:
- Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration (12/05/2022)
- P3DC-Shot: Prior-Driven Discrete Data Calibration for Nearest-Neighbor Few-Shot Classification (01/02/2023)
- Two Phases of Scaling Laws for Nearest Neighbor Classifiers (08/16/2023)
- Dr.ICL: Demonstration-Retrieved In-context Learning (05/23/2023)
- Nearest Neighbor Machine Translation (10/01/2020)
- Simple and Effective Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning (10/06/2020)
