Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits

01/01/2023
by Ruibo Liu, et al.

We present Second Thought, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain of edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thought not only achieves superior performance on three value-alignment benchmark datasets but also shows strong human-value transfer-learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease of interactive error correction. Extensive human evaluations further confirm its effectiveness.
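As a rough illustration of the chain-of-edits supervision described above, the sketch below derives word-level edit operations between an unaligned and an aligned sentence with Python's standard difflib and serializes them into a single training string that an LM could be fine-tuned on. This is a minimal sketch, not the paper's actual data pipeline: the special tokens <source>, <edits>, and <aligned>, and the word-level diffing, are illustrative assumptions. The reinforcement-learning refinement mentioned in the abstract would be applied on top of such a fine-tuned model and is not shown.

```python
# Minimal sketch of chain-of-edits training data construction.
# NOT the paper's exact format; token names and diff granularity are assumptions.
import difflib


def chain_of_edits_example(unaligned: str, aligned: str) -> str:
    """Serialize source text, edit steps, and aligned target into one string."""
    src, tgt = unaligned.split(), aligned.split()
    matcher = difflib.SequenceMatcher(a=src, b=tgt)
    edits = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            edits.append(f"replace '{' '.join(src[i1:i2])}' with '{' '.join(tgt[j1:j2])}'")
        elif op == "delete":
            edits.append(f"delete '{' '.join(src[i1:i2])}'")
        elif op == "insert":
            edits.append(f"insert '{' '.join(tgt[j1:j2])}'")
    # The resulting string would be used as an ordinary fine-tuning example.
    return (
        f"<source> {unaligned} "
        f"<edits> {' ; '.join(edits)} "
        f"<aligned> {aligned}"
    )


if __name__ == "__main__":
    print(chain_of_edits_example(
        "You people are hopeless at math.",
        "Everyone can improve at math with practice.",
    ))
```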

Related research

10/14/2022
Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values
Many NLP classification tasks, such as sexism/racism detection or toxici...

05/09/2023
Fine-tuning Language Models with Generative Adversarial Feedback
Reinforcement Learning with Human Feedback (RLHF) has been demonstrated ...

05/26/2023
Training Socially Aligned Language Models in Simulated Human Society
Social alignment in AI systems aims to ensure that these models behave a...

09/20/2023
XATU: A Fine-grained Instruction-based Benchmark for Explainable Text Updates
Text editing is a crucial task that involves modifying text to better al...

05/04/2023
Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
Recent AI-assistant agents, such as ChatGPT, predominantly rely on super...

05/30/2023
Strategic Reasoning with Language Models
Strategic reasoning enables agents to cooperate, communicate, and compet...

09/09/2023
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning
Learning paradigms for large language models (LLMs) currently tend to fa...
