Search-Based Testing Approach for Deep Reinforcement Learning Agents

06/15/2022
by Amirhossein Zolfagharian, et al.

Deep Reinforcement Learning (DRL) algorithms have been increasingly employed over the last decade to solve decision-making problems such as autonomous driving and robotics. However, these algorithms face great challenges when deployed in safety-critical environments, since they often exhibit erroneous behaviors that can lead to critical failures. One way to assess the safety of DRL agents is to test them in order to detect faults that could lead to such failures during execution. This raises the question of how we can efficiently test DRL policies to ensure their correctness and adherence to safety requirements. Most existing work on testing DRL agents relies on adversarial attacks that perturb the agent's states or actions. However, such attacks often lead to unrealistic environment states, and their main goal is to test the robustness of DRL agents rather than the compliance of their policies with requirements. Due to the huge state space of DRL environments, the high cost of test execution, and the black-box nature of DRL algorithms, exhaustive testing of DRL agents is impossible. In this paper, we propose a Search-based Testing Approach of Reinforcement Learning Agents (STARLA) to test the policy of a DRL agent by effectively searching for failing executions of the agent within a limited testing budget. We use machine learning models and a dedicated genetic algorithm to narrow the search towards faulty episodes. We apply STARLA to a Deep Q-Learning agent, which is widely used as a benchmark, and show that it significantly outperforms Random Testing by detecting more faults related to the agent's policy. We also investigate how to extract rules that characterize the faulty episodes of the DRL agent from our search results. Such rules can be used to understand the conditions under which the agent fails and thus to assess its deployment risks.
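The core idea of the abstract, searching episode space with a genetic algorithm whose fitness is informed by a learned fault predictor, can be illustrated with a minimal sketch. Everything below is illustrative: the episode encoding (a fixed-length action sequence), the stand-in `predicted_fault_probability` surrogate, and all operator choices are assumptions, not STARLA's actual design.

```python
import random

EP_LEN = 10     # hypothetical fixed episode length
N_ACTIONS = 4   # hypothetical discrete action space size

def predicted_fault_probability(episode):
    """Stand-in for an ML surrogate trained on past executions.

    Purely illustrative rule: episodes with many consecutive repeated
    actions are scored as more likely to be faulty.
    """
    repeats = sum(1 for a, b in zip(episode, episode[1:]) if a == b)
    return repeats / (EP_LEN - 1)

def random_episode():
    return [random.randrange(N_ACTIONS) for _ in range(EP_LEN)]

def crossover(p1, p2):
    # Single-point crossover of two parent action sequences.
    cut = random.randrange(1, EP_LEN)
    return p1[:cut] + p2[cut:]

def mutate(episode, rate=0.1):
    # Replace each action with a random one at the given rate.
    return [random.randrange(N_ACTIONS) if random.random() < rate else a
            for a in episode]

def genetic_search(pop_size=30, generations=40, seed=0):
    """Evolve episodes towards high predicted fault probability."""
    random.seed(seed)
    population = [random_episode() for _ in range(pop_size)]
    for _ in range(generations):
        # Elitism: keep the fittest half, breed the rest from it.
        population.sort(key=predicted_fault_probability, reverse=True)
        elite = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    best = max(population, key=predicted_fault_probability)
    return best, predicted_fault_probability(best)
```

In a real setting, candidate episodes flagged by the surrogate would still be executed against the actual agent and environment to confirm that a genuine failure occurs, since the ML model only guides the search.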


