
Be Considerate: Objectives, Side Effects, and Deciding How to Act

by Parand Alizadeh Alamdari, et al.

Recent work in AI safety has highlighted that in sequential decision making, objectives are often underspecified or incomplete, giving the acting agent discretion to realize the stated objective in ways that may produce undesirable outcomes. We contend that to learn to act safely, a reinforcement learning (RL) agent should contemplate the impact of its actions on the wellbeing and agency of others in the environment, including other acting agents and reactive processes. We endow RL agents with this capacity by augmenting their reward with the expected future return of others in the environment, and we provide several criteria for characterizing that impact. We further allow these agents to weight this impact differently in their decision making, yielding behavior that ranges from self-centred to selfless, as demonstrated by experiments in gridworld environments.
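The reward augmentation described above can be sketched as a convex combination of the agent's own reward and an estimate of others' expected future return. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the single scalar estimate for "others," and the blending weight `alpha` are all assumptions made here for clarity.

```python
def considerate_reward(own_reward: float,
                       others_return_estimate: float,
                       alpha: float) -> float:
    """Blend an agent's own reward with an estimate of the expected
    future return of others in the environment.

    alpha = 0.0 -> purely self-centred (only own reward counts);
    alpha = 1.0 -> purely selfless (only others' return counts).

    Hypothetical formulation; the paper's actual augmentation and
    impact criteria may differ.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * own_reward + alpha * others_return_estimate
```

A self-centred agent (`alpha = 0`) optimizes its stated objective alone, while intermediate values trade off its own return against the estimated impact of its actions on others, producing the spectrum of behaviors the abstract describes.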




Backdoor Detection in Reinforcement Learning

While the real world application of reinforcement learning (RL) is becom...

Reinforcement Learning-based Autoscaling of Workflows in the Cloud: A Survey

Reinforcement Learning (RL) has demonstrated a great potential for autom...

Lazy-MDPs: Towards Interpretable Reinforcement Learning by Learning When to Act

Traditionally, Reinforcement Learning (RL) aims at deciding how to act o...

The Sandbox Environment for Generalizable Agent Research (SEGAR)

A broad challenge of research on generalization for sequential decision-...

There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning

We propose to learn to distinguish reversible from irreversible actions ...

Reactive Reinforcement Learning in Asynchronous Environments

The relationship between a reinforcement learning (RL) agent and an asyn...

The Value of Information When Deciding What to Learn

All sequential decision-making agents explore so as to acquire knowledge...