From Instructions to Intrinsic Human Values – A Survey of Alignment Goals for Big Models

08/23/2023
by   Jing Yao, et al.

Big models, exemplified by Large Language Models (LLMs), are typically pre-trained on massive data and comprise an enormous number of parameters. They not only achieve significantly improved performance across diverse tasks but also exhibit emergent capabilities absent in smaller models. However, as big models become increasingly intertwined with everyday human life, they pose potential risks and may cause serious social harm. Therefore, many efforts have been made to align LLMs with humans so that they better follow user instructions and satisfy human preferences. Nevertheless, 'what to align with' has not been fully discussed, and inappropriate alignment goals may even backfire. In this paper, we conduct a comprehensive survey of the alignment goals in existing work and trace their evolution paths to help identify the most essential goal. In particular, we investigate related work from two perspectives: the definition of alignment goals and alignment evaluation. Our analysis covers three distinct levels of alignment goals and reveals a transformation from fundamental abilities to value orientation, indicating the potential of intrinsic human values as the alignment goal for enhanced LLMs. Based on these results, we further discuss the challenges of achieving such intrinsic value alignment and provide a collection of available resources for future research on the alignment of big models.
