Enhancing Out-of-Distribution Detection in Natural Language Understanding via Implicit Layer Ensemble

10/20/2022
by Hyunsoo Cho, et al.

Out-of-distribution (OOD) detection aims to discern outliers from the intended data distribution, which is crucial to maintaining high reliability and a good user experience. Most recent studies in OOD detection determine whether an input is anomalous using information from a single representation residing in the penultimate layer. Although this approach is straightforward, it overlooks the diverse information available in the intermediate layers. In this paper, we propose a novel framework based on contrastive learning that encourages intermediate features to learn layer-specialized representations and implicitly assembles them into a single representation, absorbing the rich information in the pre-trained language model. Extensive experiments on various intent classification and OOD datasets demonstrate that our approach is significantly more effective than competing methods.
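To make the idea concrete, below is a minimal PyTorch sketch of one way to ensemble intermediate-layer representations under a contrastive objective. Everything here is an assumption for illustration: the function names (`layer_ensemble_representation`, `supervised_contrastive_loss`), the softmax-weighted pooling over per-layer [CLS] vectors, and the use of a standard supervised contrastive loss are stand-ins, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def layer_ensemble_representation(hidden_states, layer_weights):
    """Combine per-layer [CLS] vectors into a single representation via a
    learned softmax-weighted sum (one possible ensembling choice; the
    paper's exact pooling may differ)."""
    # hidden_states: list of (batch, seq_len, dim) tensors, one per layer
    cls_vectors = torch.stack([h[:, 0] for h in hidden_states], dim=1)  # (batch, L, dim)
    weights = torch.softmax(layer_weights, dim=0)                       # (L,)
    return (weights[None, :, None] * cls_vectors).sum(dim=1)           # (batch, dim)

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Standard supervised contrastive loss applied to the ensembled
    representation, used here as a stand-in training objective."""
    features = F.normalize(features, dim=-1)
    sim = features @ features.T / temperature
    pos_mask = labels[:, None].eq(labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                                          # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_logits = torch.exp(logits) * (1 - torch.eye(len(labels)))
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -mean_log_prob_pos.mean()

# Toy usage: random tensors stand in for a 12-layer PLM's hidden states.
batch, seq_len, dim, num_layers = 8, 16, 768, 12
hidden_states = [torch.randn(batch, seq_len, dim) for _ in range(num_layers)]
layer_weights = torch.zeros(num_layers, requires_grad=True)
labels = torch.randint(0, 4, (batch,))

z = layer_ensemble_representation(hidden_states, layer_weights)
loss = supervised_contrastive_loss(z, labels)
loss.backward()
```

At inference time, one natural (though not necessarily the paper's) choice is to score inputs by their distance, e.g. Mahalanobis distance, from the ensembled representation to in-distribution class centroids, flagging distant inputs as OOD.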

