Value Engineering for Autonomous Agents

02/17/2023
by Nieves Montes, et al.

Machine Ethics (ME) is concerned with the design of Artificial Moral Agents (AMAs), i.e., autonomous agents capable of reasoning and behaving according to moral values. Previous approaches have treated values as labels attached to certain actions or states of the world, rather than as integral components of agent reasoning. It is also common to disregard that a value-guided agent operates alongside other value-guided agents in an environment governed by norms, thus omitting the social dimension of AMAs. In this blue sky paper, we propose a new AMA paradigm grounded in moral and social psychology, where values are instilled into agents as context-dependent goals. These goals connect values at the individual level to norms at the collective level by evaluating the outcomes that the norms in place most incentivize. We argue that this type of normative reasoning, where agents are endowed with an understanding of norms' moral implications, leads to value-awareness in autonomous agents. This capability also paves the way for agents to align the norms enforced in their societies with the human values instilled in them: value-based reasoning about norms is complemented with agreement mechanisms that let agents collectively settle on the set of norms that best suits their values. Overall, our agent model goes beyond the treatment of values as inert labels by connecting them to normative reasoning and to the social functionalities needed to integrate value-aware agents into our modern hybrid human-computer societies.
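The core idea of the abstract, agents that score candidate norms by how well the outcomes those norms incentivize promote their values, and then agree collectively on a norm, can be illustrated with a toy sketch. All names and data structures below are hypothetical and purely illustrative; the paper itself does not prescribe this representation, and plurality voting stands in for whatever agreement mechanism is actually used.

```python
# Toy sketch (all names hypothetical): each agent scores candidate norms by
# how well the outcome a norm incentivizes aligns with the agent's own value
# weights, then the agents adopt a norm by plurality vote.
from dataclasses import dataclass


@dataclass
class Norm:
    name: str
    # Outcome the norm most incentivizes, as per-value scores in [-1, 1],
    # e.g. {"fairness": 0.9} means the norm strongly promotes fairness.
    incentivized_outcome: dict


@dataclass
class Agent:
    name: str
    # Relative importance this agent attaches to each value dimension.
    value_weights: dict

    def alignment(self, norm: Norm) -> float:
        """Value-awareness: weighted sum of how much the norm's
        incentivized outcome promotes each of this agent's values."""
        return sum(self.value_weights.get(value, 0.0) * score
                   for value, score in norm.incentivized_outcome.items())


def agree_on_norm(agents, norms):
    """Minimal agreement mechanism: each agent votes for the norm it
    considers most value-aligned; the plurality winner is adopted."""
    votes = {}
    for agent in agents:
        best = max(norms, key=agent.alignment)
        votes[best.name] = votes.get(best.name, 0) + 1
    return max(votes, key=votes.get)


norms = [
    Norm("share-equally", {"fairness": 0.9, "wealth": 0.1}),
    Norm("winner-takes-all", {"fairness": -0.5, "wealth": 0.9}),
]
agents = [
    Agent("a1", {"fairness": 1.0, "wealth": 0.2}),
    Agent("a2", {"fairness": 0.6, "wealth": 0.4}),
    Agent("a3", {"fairness": 0.1, "wealth": 1.0}),
]
print(agree_on_norm(agents, norms))  # two of three agents prefer "share-equally"
```

The point of the sketch is the direction of the connection the paper argues for: values live inside individual agents (the weights), norms live at the collective level, and the bridge between them is an evaluation of the outcomes the norms incentivize.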


Related research

- Value alignment: a formal approach (10/18/2021)
  principles that should govern autonomous AI systems. It essentially stat...
- Practical Reasoning with Norms for Autonomous Software Agents (Full Edition) (01/28/2017)
  Autonomous software agents operating in dynamic environments need to con...
- Improving Confidence in the Estimation of Values and Norms (04/02/2020)
  Autonomous agents (AA) will increasingly be interacting with us in our d...
- Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences (12/31/2020)
  In social settings, much of human behavior is governed by unspoken rules...
- Morality, Machines and the Interpretation Problem: A value-based, Wittgensteinian approach to building Moral Agents (03/03/2021)
  We argue that the attempt to build morality into machines is subject to ...
- In conversation with Artificial Intelligence: aligning language models with human values (09/01/2022)
  Large-scale language technologies are increasingly used in various forms...
- Solving social dilemmas by reasoning about expectations (05/08/2021)
  It has been argued that one role of social constructs, such as instituti...
