Aligned with Whom? Direct and social goals for AI systems

05/09/2022
by Anton Korinek, et al.

As artificial intelligence (AI) becomes more powerful and widespread, the AI alignment problem, that is, how to ensure that AI systems pursue the goals that we want them to pursue, has garnered growing attention. This article distinguishes two types of alignment problems depending on whose goals we consider, and analyzes the different solutions necessitated by each. The direct alignment problem considers whether an AI system accomplishes the goals of the entity operating it. In contrast, the social alignment problem considers the effects of an AI system on larger groups or on society more broadly. In particular, it also considers whether the system imposes externalities on others. Whereas solutions to the direct alignment problem center around more robust implementation, social alignment problems typically arise because of conflicts between individual and group-level goals, elevating the importance of AI governance to mediate such conflicts. Addressing the social alignment problem requires both enforcing existing norms on the developers and operators of AI systems and designing new norms that apply directly to the systems themselves.
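The conflict between individual and group-level goals described in the abstract can be illustrated with a toy model (this sketch is not from the paper; the actions, payoffs, and function names are hypothetical). A directly aligned system optimizes only its operator's payoff, while a socially aligned system also counts the externalities its actions impose on third parties:

```python
# Hypothetical illustration of direct vs. social alignment.
# Each action maps to (operator_payoff, third_party_payoff);
# the second entry captures the externality on others.
actions = {
    "aggressive_ads": (10, -8),  # profitable, but large negative externality
    "moderate_ads":   (6, -1),
    "no_ads":         (2, 0),
}

def directly_aligned_choice(actions):
    """Maximize only the operator's payoff (direct alignment)."""
    return max(actions, key=lambda a: actions[a][0])

def socially_aligned_choice(actions):
    """Maximize total payoff across all affected parties (social alignment)."""
    return max(actions, key=lambda a: sum(actions[a]))

print(directly_aligned_choice(actions))  # aggressive_ads
print(socially_aligned_choice(actions))  # moderate_ads
```

In this toy example the two criteria pick different actions, which is exactly the kind of divergence that, per the abstract, calls for governance mechanisms to mediate between individual and group-level goals.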


Related research:

- A Multi-Level Framework for the AI Alignment Problem (01/10/2023)
- Training Socially Aligned Language Models in Simulated Human Society (05/26/2023)
- AI Alignment Dialogues: An Interactive Approach to AI Alignment in Support Agents (01/16/2023)
- Demanding and Designing Aligned Cognitive Architectures (12/19/2021)
- The alignment problem from a deep learning perspective (08/30/2022)
- Becoming Good at AI for Good (04/23/2021)
- Mimetic vs Anchored Value Alignment in Artificial Intelligence (10/25/2018)
