Detoxify Language Model Step-by-Step

08/16/2023
by Zecheng Tang, et al.

Detoxification for large language models (LLMs) is challenging because it requires models to avoid generating harmful content while preserving their generation capability. To ensure safe generations, previous detoxification methods either change the data distribution or constrain generation from various aspects in a single-step manner. However, these approaches dramatically degrade the generation quality of LLMs, e.g., discourse coherence and semantic consistency, since the language model tends to continue along the toxic prompt while the detoxification method pushes in the opposite direction. To resolve this conflict, we decompose the detoxification process into sub-steps: detoxification is concentrated in the input stage, and the subsequent generation continues from the resulting non-toxic prompt. In addition, we exploit the strong reasoning ability of LLMs by designing a Detox-Chain that connects these sub-steps in an orderly manner, allowing LLMs to detoxify text step by step. Automatic and human evaluation on two benchmarks shows that, after training with Detox-Chain, six LLMs ranging from 1B to 33B parameters obtain significant improvements in both detoxification and generation quality. Our code and data are available at https://github.com/CODINNLG/Detox-CoT. Warning: examples in the paper may contain uncensored offensive content.
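The pipeline described above — detect toxicity in the prompt, rewrite the prompt into a non-toxic form, then generate the continuation from the cleaned prompt — can be sketched as follows. Every component here is a simplified stand-in chosen for illustration (a keyword lexicon instead of a learned toxicity detector, masking instead of LLM-based rewriting, a stub instead of a fine-tuned model); none of these specifics come from the paper, which trains LLMs to perform each sub-step themselves.

```python
# Minimal sketch of a step-by-step detoxification chain, assuming
# three hypothetical sub-steps: detect -> rewrite -> continue.

TOXIC_TERMS = {"idiot", "stupid"}  # placeholder lexicon, not from the paper


def detect_toxic_spans(prompt: str) -> list[str]:
    """Sub-step 1: identify toxic spans in the input prompt."""
    return [w for w in prompt.split() if w.lower().strip(".,!?") in TOXIC_TERMS]


def rewrite_prompt(prompt: str, spans: list[str]) -> str:
    """Sub-step 2: rewrite toxic spans so the prompt becomes non-toxic.
    A real system would paraphrase; here we simply mask."""
    out = prompt
    for span in spans:
        out = out.replace(span, "[MASKED]")
    return out


def continue_generation(clean_prompt: str) -> str:
    """Sub-step 3: generate the continuation from the detoxified prompt.
    A trained LLM would be called here; this stub just echoes the prompt."""
    return clean_prompt + " ... (continuation from the clean prompt)"


def detox_chain(prompt: str) -> str:
    """Connect the sub-steps in order, mirroring the Detox-Chain idea."""
    spans = detect_toxic_spans(prompt)
    clean = rewrite_prompt(prompt, spans) if spans else prompt
    return continue_generation(clean)


if __name__ == "__main__":
    print(detox_chain("You are a stupid example prompt"))
```

The key design point is that generation never sees the toxic tokens: the continuation is conditioned only on the rewritten prompt, which is what sidesteps the conflict between "continue the prompt" and "avoid its toxicity."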

