Aligning Artificial Intelligence with Humans through Public Policy

06/25/2022
by John Nay, et al.

Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens' preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. This Essay outlines research on AI models that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to any given publicly traded company and its likely effect on that company. We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society.
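The abstract does not specify how the case-study system scores legislation against companies. As a purely hypothetical illustration of the "comprehension" task it describes, the sketch below scores a bill's relevance to a company with a simple bag-of-words cosine-similarity baseline; the example texts and the lexical-overlap approach are assumptions for illustration, not the authors' method.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split a document into alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def relevance_score(bill_text, company_text):
    """Cosine similarity between bag-of-words term-count vectors.

    A crude proxy for 'is this bill relevant to this company?':
    1.0 means identical vocabularies, 0.0 means no overlap.
    """
    va, vb = Counter(tokenize(bill_text)), Counter(tokenize(company_text))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical example texts (not from the Essay's dataset).
company = "An electric utility operating coal-fired power plants across three states."
bill = "A bill to regulate emissions from coal-fired power plants and electric utilities."
unrelated = "A bill concerning the licensing of dental hygienists."

print(relevance_score(bill, company) > relevance_score(unrelated, company))  # True
```

A production system of the kind the abstract describes would more plausibly use learned text representations fine-tuned on labeled bill-company pairs, but the scoring interface — bill text and company description in, relevance score out — would look the same.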

