How Good Are Large Language Models at Out-of-Distribution Detection?

08/20/2023
by   Bo Liu, et al.

Out-of-distribution (OOD) detection plays a vital role in enhancing the reliability of machine learning (ML) models. The emergence of large language models (LLMs) has catalyzed a paradigm shift within the ML community, showcasing their exceptional capabilities across diverse natural language processing tasks. While existing research has probed OOD detection with relatively small-scale Transformers such as BERT, RoBERTa, and GPT-2, the stark differences in scale, pre-training objectives, and inference paradigms call into question the applicability of those findings to LLMs. This paper presents a pioneering empirical investigation of OOD detection for LLMs, focusing on the LLaMA series ranging from 7B to 65B parameters. We thoroughly evaluate commonly used OOD detectors, scrutinizing their performance in both zero-grad and fine-tuning scenarios. Notably, we replace the discriminative in-distribution fine-tuning of prior work with generative fine-tuning, aligning the pre-training objective of LLMs with the downstream tasks. Our findings reveal that a simple cosine-distance OOD detector demonstrates superior efficacy, outperforming the alternatives. We offer an intriguing explanation for this phenomenon: the embedding spaces of LLMs are isotropic, in distinct contrast to the anisotropy observed in smaller BERT-family models. This new insight deepens our understanding of how LLMs detect OOD data, improving their adaptability and reliability in dynamic environments.
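To make the abstract's two central ideas concrete, here is a minimal sketch of a cosine-distance OOD detector together with a crude isotropy check on an embedding space. This is not the paper's actual implementation; the function names, the nearest-neighbor scoring rule, and the mean-pairwise-similarity isotropy proxy are all illustrative assumptions.

```python
import numpy as np

def cosine_ood_scores(train_emb: np.ndarray, test_emb: np.ndarray) -> np.ndarray:
    """Score test embeddings by cosine distance to the nearest
    in-distribution training embedding (higher score = more likely OOD).

    train_emb: (n_train, d) in-distribution embeddings
    test_emb:  (n_test, d) embeddings to score
    """
    # L2-normalize so dot products equal cosine similarities.
    train_n = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test_n = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    sims = test_n @ train_n.T            # (n_test, n_train) similarity matrix
    return 1.0 - sims.max(axis=1)        # distance to closest training point

def isotropy_proxy(emb: np.ndarray) -> float:
    """Crude isotropy proxy: mean pairwise cosine similarity.
    Values near 0 suggest a roughly isotropic embedding space;
    values near 1 indicate strong anisotropy (a narrow cone)."""
    n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = n @ n.T
    off_diag = sims[~np.eye(len(n), dtype=bool)]  # drop self-similarities
    return float(off_diag.mean())
```

In practice, a detection threshold on the score would be picked on held-out in-distribution data, e.g., at a fixed false-positive rate, and performance reported with threshold-free metrics such as AUROC.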

research
04/13/2021

Understanding Transformers for Bot Detection in Twitter

In this paper we shed light on the impact of fine-tuning over social med...
research
08/15/2019

Visualizing and Understanding the Effectiveness of BERT

Language model pre-training, such as BERT, has achieved remarkable resul...
research
12/31/2020

EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets

Deep, heavily overparameterized language models such as BERT, XLNet and ...
research
05/06/2023

On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code

Pre-trained language models (PLMs) have become a prevalent technique in ...
research
10/08/2020

Discriminatively-Tuned Generative Classifiers for Robust Natural Language Inference

While discriminative neural network classifiers are generally preferred,...
research
02/16/2022

Should You Mask 15% in Masked Language Modeling?

Masked language models conventionally use a masking rate of 15% due to the belief th...
research
04/18/2023

Stochastic Parrots Looking for Stochastic Parrots: LLMs are Easy to Fine-Tune and Hard to Detect with other LLMs

The self-attention revolution allowed generative language models to scal...
