Low Impact Artificial Intelligences

05/30/2017
by Stuart Armstrong, et al.

There are many goals for an AI that could become dangerous if the AI becomes superintelligent or otherwise powerful. Much work on the AI control problem has focused on constructing AI goals that are safe even for such AIs. This paper looks at an alternative approach: defining a general concept of "low impact". The aim is to ensure that a powerful AI which implements low impact will not modify the world extensively, even if it is given a simple or dangerous goal. The paper proposes various ways of defining and grounding low impact, and discusses methods for ensuring that the AI can still be allowed to have a (desired) impact despite the restriction. The end of the paper addresses known issues with this approach and avenues for future research.
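One family of approaches the abstract alludes to can be sketched as a penalized objective: the agent's task reward minus a scaled measure of how far its action moves the world away from a "do nothing" baseline. The sketch below is illustrative only, not the paper's exact formalism; the feature-vector states, the per-feature absolute-difference distance, and the penalty weight `mu` are all assumptions chosen for the example.

```python
# Illustrative low-impact objective (assumed formalization, not the
# paper's own): value = task_reward - mu * distance(world_after_action,
# world_after_noop). A large mu makes the agent prefer actions that
# barely perturb the world, even at some cost in task reward.

def impact_penalty(state_after_action, state_after_noop):
    """Distance between the world given the agent's action and the world
    had the agent done nothing (here: per-feature absolute difference)."""
    return sum(abs(a - b) for a, b in zip(state_after_action, state_after_noop))

def low_impact_value(task_reward, state_after_action, state_after_noop, mu=10.0):
    """Task reward minus the scaled impact penalty."""
    return task_reward - mu * impact_penalty(state_after_action, state_after_noop)

# Two candidate actions: one achieves the goal with a small footprint,
# one achieves it slightly better but disturbs many world features.
baseline = [0.0, 0.0, 0.0]   # world if the agent does nothing
careful  = [0.1, 0.0, 0.0]   # small change to the world
drastic  = [0.9, 0.7, 0.5]   # large change to the world

print(low_impact_value(1.0, careful, baseline))  # careful action scores higher
print(low_impact_value(1.2, drastic, baseline))  # heavy penalty despite higher reward
```

The choice of baseline (what "the AI had done nothing" means) and of the distance measure is exactly where the hard grounding questions the paper discusses arise; this sketch fixes both by fiat.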


