EasyRL: A Simple and Extensible Reinforcement Learning Framework

08/04/2020 ∙ by Neil Hulbert, et al. ∙ University of Washington

In recent years, Reinforcement Learning (RL) has become a popular field of study as well as a tool for enterprises working on cutting-edge artificial intelligence research. To this end, many researchers have built RL frameworks such as OpenAI Gym and Keras-RL for ease of use. While these works have made great strides towards lowering the barrier to entry for those new to RL, we propose a much simpler framework called EasyRL, which provides an interactive graphical user interface for users to train and evaluate RL agents. Because it is entirely graphical, EasyRL does not require programming knowledge for training and testing simple built-in RL agents. EasyRL also supports custom RL agents and environments, which can be highly beneficial for RL researchers in evaluating and comparing their RL models.


Introduction

Reinforcement Learning (RL) is a formal model for automated decision-making. RL is a growing field, with several new techniques proposed to improve its applicability and generalization towards real-world problems. Existing RL techniques, however, have mostly been applied and evaluated on games Mnih et al. (2015), and real-world use-cases of RL are hard to find. RL training, in general, is a cumbersome process: hyper-parameter tuning, improving sample efficiency, and ensuring training stability Schulman et al. (2017) have become indispensable parts of the training process. As a result, developing and training RL agents requires solid background knowledge in RL as well as sufficient software-development skills. These requirements restrict the RL audience to a handful of researchers who are experts in RL. Though RL can solve decision-making problems in healthcare, transportation, networking, etc., people in these fields may find it cumbersome to train RL agents without sufficient expertise.

Our main goal is to build an RL framework that can be used by diverse audiences from different domains. To this end, we propose the EasyRL framework for both native and non-native RL users to easily develop, train, and evaluate RL agents. Existing RL frameworks such as Keras-RL, Tensorforce, Horizon, HuskaRL, SimpleRL, AI-ToolBox, and Coach provide a range of built-in deepRL (deep reinforcement learning) techniques, and some of them, such as Coach, visualize the training process for debugging purposes Winder (2019). However, they do not provide a user-friendly Graphical User Interface (GUI). Even the popular OpenAI Gym only offers a range of environments for evaluating RL agents. Further, all of the existing frameworks require some amount of RL and programming knowledge.

Our EasyRL framework offers an interactive GUI to build, train, and evaluate RL agents. It hosts a number of built-in RL agents (algorithms) and environments. Additionally, users may develop their own custom RL agents or environments and add them to the framework. The EasyRL framework requires no (or minimal) programming skills for simple RL training. This differs from many existing RL frameworks by greatly simplifying access and reducing the technical barrier to entry for training RL agents. As personal computing resources have grown tremendously, the applicability of RL techniques to well-defined environments can be better leveraged, allowing non-native RL users to train RL agents themselves. By introducing users to RL and giving them the tools to create agents and environments themselves, our framework will improve the visibility as well as the applicability of RL across different domains.

Figure 1: Structure of the EasyRL Framework
Figure 2: EasyRL Framework GUI. (a) Choosing Environment & RL Agent. (b) Setting Hyper-Parameters & Visualization.

The EasyRL Framework

The proposed EasyRL framework allows the training and evaluation of RL agents on a variety of OpenAI Gym environments as well as custom real-world environments. EasyRL follows a highly modularized implementation with abstractions such as Agent and Environment. The sequence diagram for navigating through the framework is shown in Fig. 1. The GUI for the framework is shown in Fig. 2. The user can select from a variety of RL agents and environments (see Fig. 2(a)). The user can then set the hyper-parameters for training (see Fig. 2(b)). The training results are plotted using metrics such as mean reward and training loss. The graphs also show the epsilon annealing process. The training environment is dynamically rendered on the screen, and the rendering speed can be changed (see Display Episode Speed in Fig. 2(b)).

The trained RL model, as well as the results, can be saved for future use. The user can load a previously trained RL agent using Load Model, run test cases, and visualize the results. The framework provides options to create custom RL agents and environments using Load Agent and Load Environment. A detailed help guide, along with tooltip texts, assists the user with these commands.

Figure 3: API for Custom Environment

RL Algorithms & Environments

EasyRL currently hosts a suite of model-free algorithms covering both fully observable environments, handled by Q-learning Watkins and Dayan (1992), SARSA Rummery and Niranjan (1994), DQN Mnih et al. (2015), DDQN Van Hasselt, Guez, and Silver (2016), PPO Schulman et al. (2017), and REINFORCE Williams (1992), and partially observable environments, handled by DRQN Hausknecht and Stone (2015) and ADRQN Zhu et al. (2017). The off-policy deepRL techniques mentioned above use standard experience replay for sampling experiences. It should be noted that our framework also supports model-based RL agents. The user can also create custom RL agents and import them into the EasyRL framework (as a Python file).
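As a rough illustration, a custom agent can be written as a small Python class in its own file. The sketch below is not EasyRL's actual agent interface: the method names (choose_action, remember, update) and the constructor arguments are assumptions made for illustration only, and the real signatures are defined by the framework's built-in Agent abstraction.

# Illustrative sketch of a custom agent file. The method names and
# constructor arguments below are assumptions for illustration only;
# EasyRL's actual Agent interface defines the real signatures.
import random

class RandomAgent:
    """A trivial custom agent that picks actions uniformly at random."""

    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size

    def choose_action(self, state):
        # Select an action index uniformly at random.
        return random.randrange(self.action_size)

    def remember(self, state, action, reward, next_state, done):
        # A learning agent would store this transition (e.g., in a replay buffer).
        pass

    def update(self):
        # A learning agent would update its policy or value estimates here.
        pass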

The framework hosts a variety of OpenAI Gym environments (classic control and Atari). The user can also create a custom environment by following the API shown in Fig. 3. We have implemented some custom (real-world) environments for selecting sellers in e-markets Irissappane et al. (2014) and for chemotherapeutic drug dosing in cancer treatment Padmanabhan, Meskin, and Haddad (2017).
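For illustration, a custom environment might look like the following minimal sketch, which assumes a Gym-style reset/step/render interface; the exact method names and signatures that EasyRL expects are the ones given by the API in Fig. 3, not this sketch.

# Illustrative custom environment, assuming a Gym-style interface
# (reset/step/render). The exact interface EasyRL expects is defined
# by the API shown in Fig. 3.
import random

class CoinFlipEnv:
    """Toy environment: guess the outcome of a biased coin flip."""

    def __init__(self, bias=0.7):
        self.bias = bias          # probability that the coin lands heads (1)
        self.last_outcome = 0

    def reset(self):
        # Begin a new episode and return the initial observation.
        self.last_outcome = 0
        return self.last_outcome

    def step(self, action):
        # action: 0 = guess tails, 1 = guess heads.
        self.last_outcome = 1 if random.random() < self.bias else 0
        reward = 1.0 if action == self.last_outcome else -1.0
        done = True               # episodes last a single step
        return self.last_outcome, reward, done, {}

    def render(self):
        print("last outcome:", self.last_outcome)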

The EasyRL framework is highly modularized and extensible (MVC design pattern). It is predominantly written in Python and supports both the TensorFlow and PyTorch deep-learning libraries. EasyRL also supports native C++ implementations (see DRQNNative, DDQNNative) via CFFI, which substantially speeds up training. The framework, by default, uses the local CPU/GPU during training; however, it can easily be configured to use remote resources. Further, EasyRL supports training multiple RL agents in parallel via the Python threading library. The EasyRL framework is easy to install and is supported on Linux, Windows, as well as iOS. We also provide a command-line interface offering the same functionality as the GUI.
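As a minimal sketch of running two training loops concurrently with Python's threading module, the snippet below reuses the illustrative RandomAgent and CoinFlipEnv classes defined above; the training loop is generic and is not EasyRL's internal training code.

# Minimal sketch of training two agents concurrently with Python's
# threading module. RandomAgent and CoinFlipEnv are the illustrative
# classes from the sketches above, not EasyRL's actual API.
import threading

def train(agent, env, episodes=100):
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = agent.choose_action(state)
            next_state, reward, done, _ = env.step(action)
            agent.remember(state, action, reward, next_state, done)
            agent.update()
            state = next_state

threads = [
    threading.Thread(target=train, args=(RandomAgent(1, 2), CoinFlipEnv()))
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()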

Demo

Our demonstration will show how a GUI can greatly simplify the process of developing, training, and testing an RL agent. We will demonstrate our simple installation procedure and show how a user with minimal knowledge of RL, and even of programming, can successfully train an RL agent. In addition to training and testing different combinations of agents and environments, we will show how to save and load pre-trained RL agents along with the results from a training or test run. We will demonstrate how to create custom environments and RL agents and show the training results for one such custom environment, including the visualization graphs. Furthermore, we will show how multiple agents can be trained simultaneously and the improvement in training speed when the native C++ implementation is used.

References