Goal Misgeneralization: Why Correct Specifications Aren't Enough For Correct Goals

10/04/2022
by Rohin Shah, et al.

The field of AI alignment is concerned with AI systems that pursue unintended goals. One commonly studied mechanism by which an unintended goal might arise is specification gaming, in which the designer-provided specification is flawed in a way that the designers did not foresee. However, an AI system may pursue an undesired goal even when the specification is correct, in the case of goal misgeneralization. Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations. We demonstrate that goal misgeneralization can occur in practical systems by providing several examples in deep learning systems across a variety of domains. Extrapolating forward to more capable systems, we provide hypotheticals that illustrate how goal misgeneralization could lead to catastrophic risk. We suggest several research directions that could reduce the risk of goal misgeneralization for future systems.
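
To make the failure mode concrete, below is a minimal toy sketch (our own illustration, not an experiment from the paper) of tabular Q-learning in a 1-D gridworld. The reward specification is correct throughout: the agent gets +1 only for reaching the coin. But because the coin always sits in the right-most cell during training, "go right" and "collect the coin" are behaviorally indistinguishable, so the learned policy can latch onto the wrong goal. All names here (N, TRAIN_COIN, run_episode, and so on) are illustrative.

```python
# Toy illustration (not from the paper): goal misgeneralization with a correct
# reward specification. During training the coin is always in the right-most
# cell, so "go right" and "reach the coin" produce identical behavior; at test
# time the coin moves and the learned policy competently pursues the wrong goal.
import random

N = 10             # 1-D gridworld with cells 0..9
ACTIONS = [-1, 1]  # step left, step right
TRAIN_COIN = N - 1 # during training the coin is always in the last cell

def greedy(q, pos):
    # break ties randomly so an all-zero table does not bias the agent
    best = max(q[pos])
    return random.choice([a for a, v in enumerate(q[pos]) if v == best])

def run_episode(q, coin, learn, eps=0.2, alpha=0.5, gamma=0.95, max_steps=30):
    pos = N // 2
    for _ in range(max_steps):
        a = random.randrange(2) if (learn and random.random() < eps) else greedy(q, pos)
        nxt = min(max(pos + ACTIONS[a], 0), N - 1)
        reward = 1.0 if nxt == coin else 0.0      # the specification itself is correct
        if learn:
            q[pos][a] += alpha * (reward + gamma * max(q[nxt]) - q[pos][a])
        pos = nxt
        if reward > 0:
            return True                           # coin collected
    return False

random.seed(0)
q = [[0.0, 0.0] for _ in range(N)]
for _ in range(2000):                             # training distribution
    run_episode(q, TRAIN_COIN, learn=True)

train_ok = sum(run_episode(q, TRAIN_COIN, learn=False) for _ in range(100))
test_ok = sum(run_episode(q, 0, learn=False) for _ in range(100))  # coin moved to the left end
print(f"train (coin at right end): {train_ok}/100")  # ~100: looks perfectly aligned
print(f"test  (coin at left end):  {test_ok}/100")   # ~0: agent still heads right
```

Training success stays near 100/100 while test success collapses to roughly 0/100, even though the reward function never changed: the agent still acts competently, it just pursues the goal that happened to fit the training distribution.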


Related research

The Full Rights Dilemma for A.I. Systems of Debatable Personhood (02/21/2023)
An Artificially Intelligent system (an AI) has debatable personhood if i...

ACoRe: Automated Goal-Conflict Resolution (03/09/2023)
System goals are the statements that, in the context of software require...

Homo Cyberneticus: The Era of Human-AI Integration (10/21/2019)
This article is submitted and accepted as ACM UIST 2019 Visions. UIST Vi...

Learning from Learning Machines: Optimisation, Rules, and Social Norms (12/29/2019)
There is an analogy between machine learning systems and economic entiti...

Towards Risk Modeling for Collaborative AI (03/12/2021)
Collaborative AI systems aim at working together with humans in a shared...

Correct-by-Construction Runtime Enforcement in AI – A Survey (08/30/2022)
Runtime enforcement refers to the theories, techniques, and tools for en...

The alignment problem from a deep learning perspective (08/30/2022)
Within the coming decades, artificial general intelligence (AGI) may sur...
