CELDA: Leveraging Black-box Language Model as Enhanced Classifier without Labels

06/05/2023
by   Hyunsoo Cho, et al.

Utilizing language models (LMs) without internal access is becoming an attractive paradigm in NLP, as many cutting-edge LMs are released only through APIs and boast a massive scale. The de facto method in this black-box scenario is prompting, which has shown progressive performance gains in situations where data labels are scarce or unavailable. Despite its efficacy, prompting still falls short of fully supervised counterparts and is generally brittle to slight modifications. In this paper, we propose Clustering-enhanced Linear Discriminative Analysis (CELDA), a novel approach that improves text classification accuracy with a very weak supervision signal (i.e., the names of the labels). Our framework draws a precise decision boundary without accessing the weights or gradients of the LM or any data labels. The core ideas of CELDA are twofold: (1) extracting a refined pseudo-labeled dataset from an unlabeled dataset, and (2) training a lightweight and robust model on top of the LM that learns an accurate decision boundary from the extracted, noisy dataset. Through in-depth investigations on various datasets, we demonstrate that CELDA reaches a new state-of-the-art in weakly supervised text classification and narrows the gap with fully supervised models. Additionally, our methodology can be applied universally to any LM and has the potential to scale to larger models, making it a more viable option for utilizing large LMs.
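The two-stage idea described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: random class-conditional Gaussians stand in for black-box LM embeddings, pseudo-labels come from cosine similarity to (stand-in) label-name embeddings with a confidence filter, and scikit-learn's `LinearDiscriminantAnalysis` plays the role of the lightweight model trained on top of the frozen LM.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-ins for black-box LM sentence embeddings (the paper queries an
# API-only LM; synthetic class-conditional Gaussians are used here purely
# for illustration).
n_per_class, dim = 100, 32
centers = rng.normal(size=(2, dim))
X = np.vstack([centers[c] + 0.5 * rng.normal(size=(n_per_class, dim))
               for c in (0, 1)])
true_y = np.repeat([0, 1], n_per_class)  # held out; never used for training

# Stage 1: weak supervision from label names only. Embed the label names
# (here: perturbed class centers as a stand-in) and pseudo-label each text
# by cosine similarity, keeping only the confident half.
label_name_emb = centers + 0.1 * rng.normal(size=centers.shape)

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

sims = cosine(X, label_name_emb)            # (n_samples, n_classes)
pseudo_y = sims.argmax(axis=1)
margin = np.abs(sims[:, 0] - sims[:, 1])    # simple confidence proxy
keep = margin > np.quantile(margin, 0.5)    # refine: drop ambiguous half

# Stage 2: fit a lightweight linear discriminant model on the refined
# pseudo-labeled subset; the LM itself is never updated.
clf = LinearDiscriminantAnalysis().fit(X[keep], pseudo_y[keep])
acc = (clf.predict(X) == true_y).mean()
print(f"accuracy vs. held-out labels: {acc:.2f}")
```

The confidence filter is the key design choice this sketch highlights: training only on high-margin pseudo-labels lets the linear model recover a clean boundary even though the pseudo-labeling step is noisy.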

Related research

12/20/2020  Explaining Black-box Models for Biomedical Text Classification
In this paper, we propose a novel method named Biomedical Confident Item...

09/02/2018  Weakly-Supervised Neural Text Classification
Deep neural networks are gaining increasing popularity for the classic t...

05/25/2022  LOPS: Learning Order Inspired Pseudo-Label Selection for Weakly Supervised Text Classification
Weakly supervised text classification methods typically train a deep neu...

01/30/2021  ShufText: A Simple Black Box Approach to Evaluate the Fragility of Text Classification Models
Text classification is the most basic natural language processing task. ...

06/15/2022  Query-Adaptive Predictive Inference with Partial Labels
The cost and scarcity of fully supervised labels in statistical machine ...

03/24/2022  Shoring Up the Foundations: Fusing Model Embeddings and Weak Supervision
Foundation models offer an exciting new paradigm for constructing models...

01/16/2021  Weakly-Supervised Hierarchical Models for Predicting Persuasive Strategies in Good-faith Textual Requests
Modeling persuasive language has the potential to better facilitate our ...
