Using Foundation Models to Detect Policy Violations with Minimal Supervision

06/09/2023
by Sid Mittal, et al.

Foundation models, i.e. large neural networks pre-trained on large text corpora, have revolutionized NLP. They can be instructed directly with natural-language prompts (e.g., arXiv:2005.14165), known as hard prompting, or tuned with very little data (e.g., arXiv:2104.08691), known as soft prompting. We seek to leverage these capabilities to detect policy violations. Our contributions are as follows. We identify a hard prompt that adapts chain-of-thought prompting to policy violation tasks; this prompt produces policy violation classifications along with extractive explanations that justify each classification. We then compose the hard prompt with soft prompt tuning to produce a classifier that attains high accuracy with very little supervision, and the same classifier continues to produce explanations. Although the supervision acts only on the classifications, we find that the explanations remain consistent with the tuned model's predictions. Along the way, we identify several unintuitive properties of foundation models: for instance, adding an example from a specific class can actually reduce predictions of that class, and tokenization choices can affect scoring. Based on these technical results, we outline a simple workflow for product teams to quickly develop effective policy violation detectors.
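The abstract does not reproduce the paper's prompt, so the following is only a minimal sketch of how a chain-of-thought hard prompt for policy violation detection with extractive explanations might be structured. The policy text, prompt wording, verdict markers, and the `generate` callable are illustrative assumptions, not the authors' actual prompt or API.

```python
# Sketch of a chain-of-thought "hard prompt" for policy violation detection.
# The prompt wording and generate() interface are assumptions for illustration.

def build_prompt(policy: str, text: str) -> str:
    return (
        f"Policy: {policy}\n"
        f"Text: {text}\n"
        "Instructions: Quote the spans of the text that are relevant to the "
        "policy, reason step by step about whether they violate it, and end "
        "with 'Verdict: VIOLATES' or 'Verdict: DOES NOT VIOLATE'.\n"
        "Answer:"
    )

def classify(generate, policy: str, text: str):
    """generate: a hypothetical callable that sends a prompt to a foundation
    model and returns its text completion."""
    completion = generate(build_prompt(policy, text))
    label = "violation" if "Verdict: VIOLATES" in completion else "no_violation"
    # Everything before the verdict serves as the extractive explanation.
    explanation = completion.split("Verdict:")[0].strip()
    return label, explanation
```

In this sketch the classification is read off a fixed verdict marker, while the quoted spans and reasoning that precede it are kept as the explanation, mirroring the classification-plus-extractive-explanation output described above.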

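The composition of the hard prompt with soft prompt tuning can be pictured as prepending a small number of learnable embedding vectors to the embedded hard-prompted input and training only those vectors on the classification labels, in the style of arXiv:2104.08691. The sketch below uses a small stand-in model ("gpt2"), a literal hard prompt, and a single label word; none of these reflect the paper's actual model, verbalizers, or training setup.

```python
# Sketch of soft prompt tuning composed with a hard prompt: the foundation
# model is frozen and only the soft prompt vectors receive gradients.
# Model name, label word, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for a large foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.requires_grad_(False)  # the foundation model stays frozen

embed = model.get_input_embeddings()
n_virtual = 20  # number of learnable "soft" prompt vectors
soft_prompt = torch.nn.Parameter(torch.randn(n_virtual, embed.embedding_dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def loss_for_example(hard_prompt: str, label_word: str) -> torch.Tensor:
    """Supervision acts only on the classification: the soft prompt is
    prepended to the embedded hard prompt, and the (first token of the)
    label word is scored at the next position."""
    input_ids = tokenizer(hard_prompt, return_tensors="pt").input_ids
    label_id = tokenizer(label_word, add_special_tokens=False).input_ids[0]
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), embed(input_ids)], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits
    return torch.nn.functional.cross_entropy(
        logits[:, -1, :], torch.tensor([label_id])
    )

# One training step on a single hypothetical labeled example.
hard_prompt = (
    "Policy: No personal attacks.\n"
    "Text: You are an idiot.\n"
    "Does the text violate the policy? Answer:"
)
optimizer.zero_grad()
loss = loss_for_example(hard_prompt, " VIOLATES")
loss.backward()
optimizer.step()
```

Because only the soft prompt vectors are updated, the frozen model can still be asked for its chain-of-thought output, which is how the explanations can remain available even though the supervision touches only the classification.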