Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings

by Jesse Zhang, et al.

Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment. To overcome this difficulty, we propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments, such as a simulator, before adapting to the target environment where failures carry heavy costs. We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk, which in turn enables relative safety through risk-averse, cautious adaptation. CARL first employs model-based RL to train a probabilistic model to capture uncertainty about transition dynamics and catastrophic states across varied source environments. Then, when exploring a new safety-critical environment with unknown dynamics, the CARL agent plans to avoid actions that could lead to catastrophic states. In experiments on car driving, cartpole balancing, half-cheetah locomotion, and robotic object manipulation, CARL successfully acquires cautious exploration behaviors, yielding higher rewards with fewer failures than strong RL adaptation baselines.
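The core planning idea in the abstract can be illustrated with a toy sketch: an ensemble of learned dynamics models provides an uncertainty estimate, and the planner penalizes candidate actions by the fraction of ensemble members that predict a catastrophic outcome. Everything below (the 1-D state, the linear "models", the threshold, the `risk_weight` penalty, and all function names) is a hypothetical simplification for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical toy setup: a scalar state, scalar actions, and an "ensemble"
# of simple dynamics models (linear maps with slightly different gains,
# standing in for bootstrapped neural-network dynamics models).
ensemble = [lambda s, a, k=k: s + (0.9 + 0.05 * k) * a for k in range(5)]

CATASTROPHE_THRESHOLD = 1.0  # states with |s| beyond this count as catastrophic


def catastrophe_prob(state, action):
    """Fraction of ensemble members predicting a catastrophic next state."""
    preds = [f(state, action) for f in ensemble]
    return float(np.mean([abs(p) > CATASTROPHE_THRESHOLD for p in preds]))


def cautious_plan(state, candidate_actions, reward_fn, risk_weight=10.0):
    """Pick the action maximizing reward minus a scaled risk penalty.

    The penalty is the ensemble's estimated catastrophe probability,
    a crude stand-in for a risk-averse planning objective.
    """
    scores = [
        reward_fn(state, a) - risk_weight * catastrophe_prob(state, a)
        for a in candidate_actions
    ]
    return candidate_actions[int(np.argmax(scores))]


# Usage: reward favors large actions, but large actions risk catastrophe,
# so the cautious planner settles on a moderate action.
actions = np.linspace(-1.5, 1.5, 31)
best = cautious_plan(0.5, actions, reward_fn=lambda s, a: a)
```

Because the penalty is driven by disagreement-weighted catastrophe predictions rather than a single model, actions whose outcomes are uncertain across the ensemble are avoided, which is the cautious behavior the abstract describes.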







Related Research

Automatic Risk Adaptation in Distributional Reinforcement Learning
The use of Reinforcement Learning (RL) agents in practical applications ...

Why? Why not? When? Visual Explanations of Agent Behavior in Reinforcement Learning
Reinforcement Learning (RL) is a widely-used technique in many domains, ...

From self-tuning regulators to reinforcement learning and back again
Machine and reinforcement learning (RL) are being applied to plan and co...

Context-Aware Safe Reinforcement Learning for Non-Stationary Environments
Safety is a critical concern when deploying reinforcement learning agent...

Conservative Safety Critics for Exploration
Safe exploration presents a major challenge in reinforcement learning (R...

Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method
Long-tail and rare event problems become crucial when autonomous driving...

Quantifying Multimodality in World Models
Model-based Deep Reinforcement Learning (RL) assumes the availability of...

Code Repositories


Github Repo for CARL: Cautious Adaptation for RL in Safety Critical Settings
