There are several challenges in developing algorithms that can act with human-level performance in real-world environments, such as computer games. Researchers often use toy experiments when working with Reinforcement Learning (RL) because they are easier, cheaper, and faster to orchestrate. With numerous applications of RL in daily life, it has become an essential field of research [11, 3]. However, existing learning platforms for games have major limitations, such as few game environments and little control over the environment.
OpenAI is a non-profit company that is currently one of the leading contributors to RL research. OpenAI Universe is a software platform that offers several game environments aimed at artificial intelligence research. The problem with this software is that individual developers are not permitted to add new environments directly to the repository, and there is little documentation on how to contribute new environments. FlashRL changes this with our proposed architecture, as control is given back to each researcher.
Adobe Flash is a multimedia software platform used for the production of applications and animations. The Flash run-time was recently declared deprecated by Adobe and will no longer be supported by 2020. Flash is still frequently used in web applications, and several thousand games have been created for this platform. Several browsers have removed support for Flash, making it impossible to access the mentioned game environments. Games have proven to be an excellent area for machine learning benchmarking due to the size and diversity of their state-spaces. It is therefore essential to preserve Flash as an environment for reinforcement learning.
Automating Flash applications is a relatively untouched area. The technology has been succeeded by several better options for web development, for example, HTML5, which makes it hard for algorithms to control Flash environments programmatically. There are already reinforcement learning platforms that support Flash games as part of their game library, but these use browsers to execute the Flash run-time.
Figure 1 illustrates how interaction with the Flash environment is typically carried out through browser automation software such as Selenium. Selenium can automate most modern browsers. It does not directly support Flash automation but can easily be used for this purpose with minimal customisation. With the loss of browser support, the difficulty of controlling Flash applications increases, and there is a significant risk that excellent game environments for reinforcement learning will be lost.
FlashRL is unique among reinforcement learning platforms as it allows researchers to use any desired Flash environment. It gives full control of the game environment and does not rely on running Flash applications in the browser.
FlashRL is targeted at research in reinforcement learning but can also be used with other machine learning algorithms. It supports all kinds of Flash applications but is primarily used for agent-based gameplay. Several thousand game environments are included in the first release of the software. (The authors of this paper take no credit for any of the game environments.) Multitask 2 is a Flash game that is excellent for reinforcement learning, as it requires the agent to perform several tasks simultaneously. We show in this paper that our learning platform can be used to train novel reinforcement learning algorithms without any customisation.
In Section 2, we discuss related work on existing learning platforms in machine learning. We also argue why web browsers are no longer viable as a Flash run-time. Section 3 briefly outlines what reinforcement learning is and explains how Q-Learning works. Section 4 outlines the proposed platform and thoroughly describes its underlying architecture. In Section 5, we show initial results of utilizing the proposed learning platform for reinforcement learning. Section 6 summarises the work and argues why the proposed learning platform is suitable for reinforcement learning research. Section 7 outlines a road-map for further development of the platform.
2 Related Work
With the increasing popularity of RL, there is a need for flexible learning platforms. Several learning platforms exist that can run a limited number of games, but no platform features an open-source interface with the possibility to run any Flash game.
Bellemare et al. provided in 2012 the Arcade Learning Environment (ALE), a learning platform that enabled scientists to conduct cutting-edge research in general deep learning. The package provided hundreds of Atari 2600 environments that in 2013 allowed Mnih et al. to achieve a breakthrough with Deep Q-Learning and later A3C. The platform has been a key component in several breakthroughs in RL research [8, 9, 7].
In 2016, Brockman et al. from OpenAI released GYM, which they referred to as "a toolkit for developing and comparing reinforcement learning algorithms". GYM provides various types of environments based on the following technologies: algorithmic tasks, Atari 2600, board games, the Box2D physics engine, the MuJoCo physics engine, and text-based environments. OpenAI also hosts a website where researchers can submit their performance for comparison between algorithms. GYM is open-source and encourages researchers to add support for their environments.
OpenAI recently released a new learning platform called Universe. This platform adds support for environments running inside VNC. It also supports running Flash games and browser applications. However, despite OpenAI's open-source policy, they do not allow researchers to add new environments to the repository. This limits the possibilities of running any environment. Universe is, however, a significant learning platform, as it also has support for desktop games like Grand Theft Auto IV, which allows for research in autonomous driving.
3 Reinforcement Learning
Reinforcement learning can be considered a hybrid between supervised and unsupervised learning. We implement what we call an agent that acts in our environment. This agent is placed in an unknown environment where it tries to maximize the cumulative environmental reward.
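The agent-environment interaction described above can be sketched as a simple loop. The following is a minimal illustration with a hypothetical toy environment and a random (untrained) agent; it is not the FlashRL API:

```python
import random

class ToyEnv:
    """Hypothetical one-dimensional environment: the agent moves
    left (-1) or right (+1) and is rewarded for reaching position 3."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += action
        done = self.pos == 3
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

env = ToyEnv()
state = env.reset()
total_reward = 0.0
for _ in range(100):  # episode loop: the agent acts, the environment responds
    action = random.choice([-1, 1])  # a random agent; RL replaces this choice
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
```

A learning agent differs only in how `action` is chosen: instead of sampling randomly, it selects actions that maximize its estimate of future reward.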
Markov Decision Process (MDP) is a mathematical method of modeling decision-making within an environment. We often use this technique when utilizing model-based RL algorithms. In Q-Learning, we do not try to model the MDP. Instead, we try to learn the optimal policy by estimating the action-value function $Q^*(s, a)$, the maximum expected reward when executing action $a$ in state $s$. The optimal policy can then be found by

$$\pi^*(s) = \operatorname*{argmax}_{a} Q^*(s, a).$$

This is derived from Bellman's Equation, because we can consider $Q^*(s, a)$ to be the utility function. This gives us the ability to derive the following update-rule equation from Bellman's work:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right].$$

This is an iterative process of propagating back the estimated Q-value for each discrete time-step in the environment. It is guaranteed to converge towards the optimal action-value function as the number of iterations $i \rightarrow \infty$ [12, 8]. At the most basic level, Q-Learning utilizes a table for storing $(s, a)$ pairs. We can instead use a non-linear function approximation in order to approximate $Q(s, a; \theta)$, where $\theta$ describes the tunable parameters of the approximator. Artificial Neural Networks (ANN) are a popular function approximator, but training with an ANN is relatively unstable.
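As an illustration of the tabular update rule above, a minimal Q-Learning sketch on a toy chain environment (hypothetical, not part of FlashRL; the hyperparameter values are illustrative) could look like this:

```python
import random

# Toy chain MDP: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA = 0.1, 0.9

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else s + 1
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

random.seed(1)
for _ in range(500):
    s, done = 0, False
    while not done:
        # Purely random behavior policy; Q-Learning is off-policy,
        # so the greedy policy still converges to the optimum.
        a = random.choice(ACTIONS)
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, x)] for x in ACTIONS)
        # Update rule: Q(s,a) <- Q(s,a) + alpha[r + gamma max_a' Q(s',a') - Q(s,a)]
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)]
```

Replacing the dictionary `Q` with a neural network $Q(s, a; \theta)$ yields the function-approximation variant discussed above.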
4 Flash Reinforcement Learning (FlashRL)
The proposed platform is an interface that acts as a bridge between the Gnash Flash player and the reinforcement learning algorithms. Flash Reinforcement Learning (FlashRL) is a new platform that allows researchers to run algorithms on any Flash-based game efficiently.
The learning platform is developed primarily for the Linux operating system but is likely to run on Cygwin with few modifications. There are several key components that FlashRL uses to operate adequately, see Figure 2. It uses a Linux utility called Xvfb to create a virtual frame-buffer that is used for graphics rendering. Inside this frame-buffer, a Flash game chosen by the researcher is executed by a third-party Flash player, for example, Gnash. A VNC server serves the Xvfb frame-buffer and allows FlashRL to access it through a VNC client. The VNC client can then issue commands such as key presses and mouse movements. The VNC client pyVLC was made specifically for this learning platform; its code base originates from python-vnc-viewer. The last component of FlashRL is the reinforcement learning API, which gives the developer access to the input/output of the VNC client. This makes it easy to develop sequenced algorithms using the API callbacks, or manually via threading.
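The component stack above can be sketched as follows. This is a simplified illustration only: the display number, port, game path, and the choice of x11vnc as the VNC server are assumptions, and FlashRL's actual launcher differs.

```python
import subprocess  # would be used to spawn the processes in a real launcher

DISPLAY = ":99"          # hypothetical X display number
VNC_PORT = 5900          # conventional VNC port
GAME = "multitask2.swf"  # hypothetical path to the chosen Flash game

# 1. Xvfb renders graphics into a virtual (off-screen) frame-buffer.
xvfb_cmd = ["Xvfb", DISPLAY, "-screen", "0", "800x600x24"]

# 2. A third-party Flash player (e.g. Gnash) runs the game inside that display.
gnash_cmd = ["gnash", GAME]

# 3. A VNC server (here x11vnc, as one possible choice) serves the frame-buffer
#    so a VNC client such as pyVLC can read frames and send key presses back.
vnc_cmd = ["x11vnc", "-display", DISPLAY, "-rfbport", str(VNC_PORT)]

# A real launcher would spawn and supervise each process, e.g.:
# procs = [subprocess.Popen(c, env={"DISPLAY": DISPLAY})
#          for c in (xvfb_cmd, gnash_cmd, vnc_cmd)]
```

The reinforcement learning API then sits on top of the VNC client, exposing the captured frames as observations and the input events as actions.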
Figure 3 illustrates two methods of accessing the frame-buffer of the Flash game. Both approaches are sufficient for reinforcement learning, but each has its strengths and weaknesses. Method 1, seen in Figure 3, allows the developer to get frames served at a fixed rate, for example, 60 frames per second. Method 2 does not restrict the frequency at which the frame-buffer is captured. This is preferable for developers who do not require images at fixed time-steps, as it requires less processing power per frame. The framework was developed with deep learning in mind and is proven to work with Keras and Tensorflow.
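The difference between the two access methods can be sketched as follows. This is a simplified illustration with a dummy frame grabber; the real FlashRL callbacks read from the VNC client instead.

```python
import time

def grab_frame():
    """Dummy stand-in for capturing the VNC frame-buffer."""
    return object()

def method_1(on_frame, fps=60, duration=0.1):
    """Method 1: frames served at a fixed rate (e.g. 60 fps)."""
    interval = 1.0 / fps
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        on_frame(grab_frame())
        time.sleep(interval)  # fixed pacing caps CPU usage

def method_2(on_frame, n_frames=100):
    """Method 2: frames captured as fast as possible, no pacing."""
    for _ in range(n_frames):
        on_frame(grab_frame())  # no sleep -> consumes all available CPU

frames = []
method_1(frames.append, fps=60, duration=0.1)  # roughly 6 frames in 0.1 s
paced = len(frames)
frames.clear()
method_2(frames.append, n_frames=100)          # exactly 100 frames, unpaced
```

Method 1 trades throughput for predictable timing and low CPU load; Method 2 maximizes throughput at the cost of saturating a CPU core.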
Several thousand game environments are shipped with the initial version of FlashRL. These game environments were gathered from different sources on the web. FlashRL has a relatively small code-base, and to preserve this size, all of the Flash games are hosted remotely. The quality varies, and some of the games are not tested or labeled. Most games are, however, tested and can be played without issues, see Figure 4.
5 Experiments
This section presents experiments with reinforcement learning algorithms applied in FlashRL. We use the game Multitask 2 (http://multitaskgames.com/multitask-2.html) to test the learning platform. Multitask 2 was chosen because it challenges the algorithm to master four different mini-games simultaneously.
The experiments are grouped into two. The first experiment determines the hardware requirements of the platform and benchmarks the speed of critical operations. The second is an implementation of standard Deep Q-Learning trained on raw state images from Multitask 2 to perform game actions. The latter is meant as a proof of concept that RL algorithms can be applied in FlashRL.
All experiments were conducted on Ubuntu Linux 17.04 x64 running Python 3.5.3. The machine has 64GB memory, an Nvidia GeForce 1080TI GPU, and an Intel i7-7700k CPU.
5.1 Multitask 2
Figure 5 illustrates the game-play of Multitask 2. The game is split into four game phases. In the first phase (lower right corner in Figure 5), the player must balance a ball on a single paddle. In the second phase (lower left corner in Figure 5), the player must control a second paddle to avoid arrows traveling towards it. The third phase (upper right corner in Figure 5) consists of an arrow with mechanics comparable to the game Flappy Bird. In the final phase (upper left corner in Figure 5), the player must additionally jump over holes in the ground. To succeed at the game, the player must control eight actions simultaneously. The score is calculated by adding a single point for each second survived in the game.
5.2 Experiment 1: Hardware Requirements
Recall from Section 4 that there are two methods of accessing the frame-buffer. The first (Method 1) retrieves the frame-buffer at fixed time intervals. The second (Method 2) has no interval restriction. This makes Method 2 faster because it does not sleep between frames, but it causes the framework to consume all available CPU, which is not always preferable.
We can see from Figure 6 that using Method 1 with the rate set to 30 fps uses approximately 5% of the CPU. Increasing the rate to 300 fps increases CPU usage to 13%. We gradually increased the rate until the CPU ran at maximum. A single i7-7700k can retrieve approximately 6300 images per second from the frame-buffer before struggling to keep up.
The GPU registered no load during these tests because the Flash environment is software-rendered. Memory consumption was between 200MB and 500MB, depending on the speed. We believe the memory increase occurs because Python does not garbage-collect old frame-buffer snapshots between iterations.
5.3 Experiment 2: Reinforcement Learning
Deep Q-Network (DQN) is a novel algorithm architecture developed by Mnih et al. at Google DeepMind. It combines Q-Learning with a deep neural network that estimates the Q-values.
In our tests, we used Double Q-Learning from van Hasselt et al. We also used the Dueling architecture from Wang et al., which increases learning precision by using two estimators: a state-value function and an action-advantage function. We used a discount factor of 0.99, a learning rate of 0.001, and a mini-batch size of 16. For the exploration/exploitation strategy, we used ε-greedy, where ε started at 0.9 and was annealed to 0.1 over 10 000 steps. This is a relatively short annealing phase, but it seemed to work well in this environment.
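The exploration schedule and the dueling aggregation described above can be sketched in a few lines. This is a minimal illustration of the formulas only, not the full DQN implementation:

```python
EPS_START, EPS_END, ANNEAL_STEPS = 0.9, 0.1, 10_000

def epsilon(step):
    """Linear epsilon-greedy annealing from 0.9 to 0.1 over 10 000 steps."""
    frac = min(step / ANNEAL_STEPS, 1.0)
    return EPS_START + frac * (EPS_END - EPS_START)

def dueling_q(state_value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a)).
    Subtracting the mean advantage keeps the V/A decomposition identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [state_value + a - mean_adv for a in advantages]

# epsilon(0) -> 0.9, epsilon(5_000) -> 0.5, epsilon(10_000) -> 0.1
q = dueling_q(1.0, [0.5, -0.5])  # -> [1.5, 0.5]
```

In the dueling network, `state_value` and `advantages` come from two separate output heads of the same convolutional trunk; the aggregation above produces the final Q-values used for action selection.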
Figure 7 illustrates the training of the DQN, where the x-axis represents game episodes and the y-axis the score before reaching the terminal state. The agent had trouble adapting to the third phase (see Section 5.1). Phase 3 is relatively hard to master because it requires the agent to balance the arrow in the air. At around 230 episodes, we saw a drop in score because the network seems to prioritize the first phase of the game. It reached the second phase a few times but was not able to control the paddle successfully for longer periods of time, which is why the score stalls at approximately 400 episodes. We believe the network could have performed better with additional training time; it trained for a total of two days. Hopefully, it will be easier to train the network when FlashRL can speed-forward games, see Section 7. The results are overall acceptable, as we can see that FlashRL delivers quality states that a reinforcement learning agent can learn from.
6 Conclusion
FlashRL offers an easy-to-use architecture for performing RL in Flash-based games. It is demonstrated to work well for Multitask 2, one of the included environments. FlashRL fills the gap that emerged with the deprecation of Flash. Its main focus is RL, but it can also be used for other genres of machine learning. This paper shows that FlashRL can be used to train RL algorithms, in particular on Multitask 2. The work shows promising results, and continuing to expand the game repository may provide new insights into RL in the future.
FlashRL will be kept alive as long as Flash environments are an asset to the machine learning community. It is available to the public at https://github.com/UIA-CAIR/FlashRL and can easily be adapted to any research requirement.
7 Future Work
Several improvements are planned for FlashRL. This paper outlined the features of the initial version, which is already sufficient for simple reinforcement learning research. As seen in Section 5, a Deep Q-Learning based agent can successfully learn from the Multitask 2 environment and gradually perform better.
7.1 Speed-forward Option
Learning algorithms often require several thousand episodes to gain expert knowledge of the environment. FlashRL is currently limited to the speed at which the game loop is executed (usually 30 fps in real-time). An important improvement would be to lift this restriction and allow algorithms to train at an accelerated rate. This would considerably reduce the training time of feedback-based algorithms.
7.2 Game Repository Analysis
The game repository features many unlabeled, unrated, and untested games. Some games are potentially useless in a machine learning setting and require review. The review phase is time-consuming, and the authors of this paper did not have time to analyze each environment manually. The goal is to gradually label and categorize all games in the repository.
7.3 Web Interface and Gamification
A future goal is to allow execution of algorithms from a web interface and to add gamification aspects to the library. This could create competition between researchers, much like Kaggle and OpenAI Universe.
7.4 Cross-Platform Support
FlashRL is, in its initial version, only supported in Python 3 on the Linux platform. The goal is to extend it so that it can also run without modifications on Microsoft Windows operating systems.
-  Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. In IJCAI International Joint Conference on Artificial Intelligence, volume 2015-January, pages 4148–4152, 2015.
-  Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. jun 2016.
-  Michael E. Grost, Trent Jaeger, Ming C. Leu, Dinesh K. Pai, Qiuming Zhu, A. Kusiak, M. Chen, F M. Brown, S S. Park, Ganapathy S. Kumar, P H. Cohen, B. Bidanda, O B. Arinze, Fatma Mili, Dahuan Shi, Patricia Zajko, and Ali Noui-Mehidi. Applications of Artificial Intelligence. In Birendra Prasad, S N. Dwivedi, and K B Irani, editors, CAD/CAM Robotics and Factories of the Future: Volume II: Automation of Design, Analysis and Manufacturing, pages 165–229. Springer Berlin Heidelberg, Berlin, Heidelberg, 1989.
-  Guru99. Flash Testing with Selenium, 2017.
-  Harold L. Hunt II and Jon Turney. Cygwin/X Contributor’s Guide, 2004.
-  Yuxi Li. Deep Reinforcement Learning: An Overview. jan 2017.
-  Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. feb 2016.
-  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari With Deep Reinforcement Learning. In NIPS Deep Learning Workshop. 2013.
-  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
-  Matthew Piper. How to Beat Flappy Bird: A Mixed-Integer Model Predictive Control Approach. PhD thesis, The University of Texas at San Antonio, 2017.
-  Stuart J Russell and Peter Norvig. Artificial Intelligence: A Modern Approach, volume 9. 1995.
-  Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. IEEE Transactions on Neural Networks, 9(5):1054–1054, 1998.
-  Techtonik. python-vnc-viewer, 2015.
-  Hado van Hasselt, Arthur Guez, and David Silver. Deep Reinforcement Learning with Double Q-learning. sep 2015.
-  Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 1995–2003, 2016.