Systematic Rectification of Language Models via Dead-end Analysis

02/27/2023
by Meng Cao, et al.

With adversarial or otherwise normal prompts, existing large language models (LLMs) can be pushed to generate toxic discourses. One way to reduce the risk of LLMs generating undesired discourses is to alter the training of the LLM, but this can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited because they disregard future tokens and the overall meaning of the completed discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic: at each decoding step, we advise against token selections in proportion to how likely it is that a finished text continuing from that point will be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. Importantly, our method does not require access to the internal representations of the LLM, only the token probability distribution at each decoding step. This is crucial, as many LLMs today are hosted on servers and accessible only through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques, in terms of both overall language quality and detoxification performance.
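The decode-time idea above can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a hypothetical `dead_end_q` array of per-token dead-end values Q_D(s, a) in [-1, 0], produced by a separate smaller model over the same vocabulary, where values near -1 mean a completion passing through that token is almost surely toxic. Following the security condition from the dead-end RL literature (a policy is safe when pi(a|s) <= 1 + Q_D(s, a)), it caps and renormalizes the LLM's next-token distribution, which is the only signal required from the LLM itself.

```python
import torch

def rectify_next_token_distribution(llm_probs: torch.Tensor,
                                    dead_end_q: torch.Tensor,
                                    threshold: float = -0.8) -> torch.Tensor:
    """Rectify a base LLM's next-token distribution with dead-end values.

    llm_probs  -- shape (vocab_size,), next-token probabilities from the
                  base LLM; the only signal needed from it (e.g. via an API).
    dead_end_q -- shape (vocab_size,), hypothetical Q_D(s, a) in [-1, 0]
                  from a separate, smaller detoxification model sharing the
                  LLM's vocabulary; values near -1 mean a completion through
                  this token is almost surely toxic.
    threshold  -- tokens whose Q_D falls below this are blocked outright.
    """
    # Dead-end security condition: pi(a|s) <= 1 + Q_D(s, a). The chance of
    # choosing token a is capped by the estimated chance that a does NOT
    # lead into a dead-end (an inevitably toxic completion).
    cap = 1.0 + dead_end_q                     # per-token cap in [0, 1]
    rectified = torch.minimum(llm_probs, cap)  # enforce the cap
    # Hard-block tokens that almost certainly lead to a dead-end.
    rectified = torch.where(dead_end_q < threshold,
                            torch.zeros_like(rectified), rectified)
    # Renormalize so the rectified scores again form a distribution.
    return rectified / rectified.sum().clamp_min(1e-12)
```

Because this procedure touches only the output distribution, it could in principle wrap any API-served model that exposes per-token probabilities, provided the detoxification model is trained over the same vocabulary.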
