
Mitigating Negative Side Effects via Environment Shaping

by Sandhya Saisubramanian et al.

Agents operating in unstructured environments often produce negative side effects (NSE), which are difficult to identify at design time. While the agent can learn to mitigate the side effects from human feedback, such feedback is often expensive and the rate of learning is sensitive to the agent's state representation. We examine how humans can assist an agent, beyond providing feedback, and exploit their broader scope of knowledge to mitigate the impacts of NSE. We formulate this problem as a human-agent team with decoupled objectives. The agent optimizes its assigned task, during which its actions may produce NSE. The human shapes the environment through minor reconfiguration actions so as to mitigate the impacts of the agent's side effects, without affecting the agent's ability to complete its assigned task. We present an algorithm to solve this problem and analyze its theoretical properties. Through experiments with human subjects, we assess the willingness of users to perform minor environment modifications to mitigate the impacts of NSE. Empirical evaluation of our approach shows that the proposed framework can successfully mitigate NSE, without affecting the agent's ability to complete its assigned task.
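The decoupled-objective idea in the abstract can be illustrated with a toy sketch. The scenario, constants, and function names below (`agent_path`, `best_modification`, the rug/corridor setup, the penalty values) are all hypothetical illustrations, not the paper's actual formulation: the agent follows its task-optimal policy unchanged, while the human searches over a small budget of environment reconfigurations to minimize the NSE penalty that policy incurs.

```python
import itertools

# Hypothetical toy instance of environment shaping (not the paper's model):
# a 1-D corridor the agent crosses left to right. Some cells contain a rug,
# and driving over a rug is a negative side effect (NSE). The human may apply
# a small number of reconfigurations ("move the rug out of cell c"), chosen
# to minimize total NSE penalty. No modification changes the agent's path,
# so its task cost (number of steps) is unaffected.

CELLS = 5
RUGS = {1, 3}          # cells whose rugs incur an NSE penalty when crossed
NSE_PENALTY = 1.0

def agent_path():
    # The agent's task-optimal policy: walk straight across the corridor.
    # The human's modifications never alter this path.
    return list(range(CELLS))

def nse_cost(rugs, path):
    # Total side-effect penalty the agent's fixed policy incurs.
    return sum(NSE_PENALTY for c in path if c in rugs)

def best_modification(rugs, budget=1):
    # Human's decoupled objective: choose up to `budget` rugs to move so the
    # agent's (unchanged) path incurs minimal NSE penalty.
    path = agent_path()
    best_cost, best_moved = nse_cost(rugs, path), frozenset()
    for removed in itertools.combinations(sorted(rugs), budget):
        cost = nse_cost(rugs - set(removed), path)
        if cost < best_cost:
            best_cost, best_moved = cost, frozenset(removed)
    return best_cost, best_moved

cost, moved = best_modification(RUGS, budget=1)
print(cost, sorted(moved))  # with this toy setup: 1.0 [1]
```

With a budget of one reconfiguration, the human can remove one of the two rugs on the agent's path, halving the NSE penalty while leaving the agent's task cost untouched; this mirrors the abstract's claim that minor environment modifications mitigate NSE without affecting task completion.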

