Since DeepMind’s 2015 Nature paper, the Arcade Learning Environment (ALE) has become the de facto deep RL benchmark for new training algorithms (Bellemare et al., 2013; Mnih et al., 2015; Machado et al., 2017). ALE has several appealing qualities: humans learn to play Atari and become more skilled with experience, it is a “real-world” environment that was not originally constructed to evaluate RL methods, and it has greater complexity than prior environments (e.g., GridWorld, mountain car).
ALE has been used in several ways to evaluate the performance of deep RL agents. The vast majority of evaluations follow a version of the approach described by Bellemare et al. (2012): first, researchers choose network architectures and tune hyperparameters on a small set of Atari games; then they train agents using those hyperparameters on new games, reporting the learning curves (or a statistic of a collection of those curves) (Mnih et al., 2015; Van Hasselt et al., 2016; Mnih et al., 2016; Hessel et al., 2018).
While ALE has enabled demonstration and evaluation of much more complex behaviors of deep RL agents, it presents challenges as a suite of evaluation environments for topics on the frontier of deep RL.
Challenge: Limited variation within games. Very little about individual games can be systematically altered, so ALE is poorly suited to testing how changes in the environment affect training and performance. New benchmarks such as OpenAI’s Sonic the Hedgehog emulator and CoinRun inject environmental variation into the training schedule while introducing train/test splits (Nichol et al., 2018; Cobbe et al., 2018). Similarly, Zhang et al. (2018) suggest benchmarks that incorporate the kind of non-random noise found in nature. Kansky et al. (2017) implemented Breakout variants to introduce the variation needed to evaluate generalization.
Challenge: No counterfactual evaluation. Meanwhile, assertions about intelligent agent behavior remain untestable in the face of black-box evaluation environments. For example, ALE does not enable testing the conjecture that agents trained on Breakout learn to build tunnels (Mnih et al., 2015) or that they enter a tunneling mode (Greydanus et al., 2018). No system currently permits experiments to answer counterfactual questions about agent behavior.
Contribution. We propose ToyBox, a suite of high-performance and highly parameterized Atari-like environments designed for the purpose of experimentation. We demonstrate that the ToyBox implementations of three Atari 2600 games achieve similar performance to their ALE counterparts across three deep RL algorithms. We demonstrate that ToyBox enables a range of post-training analyses not previously possible, and we show that ToyBox is orthogonal to concurrent efforts in deep RL to address issues of robustness, generalization, and reproducible evaluation.
Organization. The rest of the paper is organized as follows: Section 2 introduces the ToyBox design and its functional capabilities. Section 3 describes our evaluation, including performance and fidelity testing against the ALE. Section 4 describes four behavioral tests we present as case studies. Related work not otherwise addressed can be found in Section 5. Section 6 discusses ToyBox applications beyond the scope of this paper. We conclude in Section 7.
2 ToyBox: System Design
ToyBox is a high-performance, highly parameterized suite of Atari-like games implemented in Rust, with bindings to Python. The suite currently contains three games: Breakout, Amidar, and Space Invaders. We chose these games for diversity of genre (paddle-based, maze, and shooter, respectively) and likely familiarity to readers. (Although Amidar may not be familiar, it is very similar to Pac-Man, but with simpler rules.)
Software requirements. Atari 2600 games were designed for human players. For ToyBox, the primary user is a reinforcement learning algorithm, and we expect machine learning researchers to be able to customize gameplay. To that end, we developed ToyBox to meet the following software requirements:
1. ToyBox should only leverage the CPU, even for graphical tasks. Although modern games leverage the GPU for faster rendering, we expect the machine learning libraries to be using the GPU, and so we wish to create our screen images using the CPU only.
2. ToyBox should be at least as efficient as the Stella-emulated version of the game. Since reinforcement learning algorithms require millions of frames of training data, we must be able to simulate and render millions of frames in order to enable efficient use of computational resources for learning.
3. ToyBox should provide for data-driven user customization. Changing the bricks in Breakout, the board in Amidar, or the alien configuration in Space Invaders should not require re-compilation of the core game code, nor should it require the ability to write Rust.
4. ToyBox should be accessible through OpenAI Gym, which is a Python API. Furthermore, ToyBox should be usable as a drop-in replacement for the analogous ALE environment.
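The CPU-only rendering requirement can be illustrated with a minimal sketch: a flat, row-major RGBA framebuffer written with ordinary loops, no GPU involved. The screen size and buffer layout here are illustrative; ToyBox’s actual graphics library is written in Rust.

```python
WIDTH, HEIGHT = 240, 160  # illustrative screen size, not ToyBox's actual one

def new_framebuffer(width=WIDTH, height=HEIGHT):
    """Allocate a flat row-major width*height RGBA buffer, opaque black."""
    buf = bytearray(width * height * 4)
    for i in range(3, len(buf), 4):
        buf[i] = 255  # set every alpha byte
    return buf

def fill_rect(buf, x, y, w, h, rgba, width=WIDTH):
    """Rasterize an axis-aligned rectangle (e.g., a brick) with plain loops."""
    for row in range(y, y + h):
        base = (row * width + x) * 4
        for col in range(w):
            buf[base + col * 4 : base + col * 4 + 4] = bytes(rgba)

fb = new_framebuffer()
fill_rect(fb, 10, 5, 16, 4, (200, 72, 72, 255))  # one red brick
```

Because nothing here touches the GPU, rendering competes only for CPU time, leaving the GPU free for the learning library.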
Architecture. Figure 1 depicts the ToyBox architecture. The game logic is written in Rust. Every game implements two core structs: Config and State. The Config struct contains data that we would generally only expect to be initialized at the start of an episode (i.e., a game, which may include multiple lives). The State struct contains data that may change between frames. At any point during execution, a ToyBox game can be paused, state exported and modified, and resumed with the new state.
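The Config/State split can be sketched in a few lines of Python; the field names and methods below are illustrative stand-ins, not ToyBox’s actual API.

```python
from dataclasses import asdict, dataclass, field

@dataclass(frozen=True)
class Config:
    """Data initialized once at the start of an episode."""
    lives: int = 3
    brick_rows: int = 6

@dataclass
class State:
    """Data that may change between frames."""
    score: int = 0
    ball_x: float = 120.0
    ball_y: float = 80.0
    bricks_alive: list = field(default_factory=lambda: [True] * 18)

class Game:
    def __init__(self, config: Config):
        self.config = config
        self.state = State()

    def export_state(self) -> dict:
        """Pause: serialize the current state for inspection or editing."""
        return asdict(self.state)

    def import_state(self, data: dict):
        """Resume gameplay from a (possibly modified) state."""
        self.state = State(**data)

game = Game(Config())
snapshot = game.export_state()
snapshot["bricks_alive"] = [False] * 17 + [True]  # leave one brick standing
game.import_state(snapshot)
```

The pause/export/modify/resume cycle at the bottom is the core intervention pattern used throughout the case studies.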
Aside: Why Atari? Given the range of open problems in deep RL, creating a new ALE-like system may not seem like an effective way to facilitate cutting-edge deep RL research. However, there are still many poorly understood properties of Atari games and agent behavior on them.
There are many axes of complexity in the reinforcement learning environment: required planning horizon, reward function assignment, number of actions available, environment stochasticity, and state representation are all different components affecting the total complexity of the environment, each presenting unique challenges for policy learning:
State Representation. Operating over pixel representations, as in Atari games, means the agent experiences a huge state space, but the underlying objects defining those sensory representations can often be represented more compactly (Guestrin et al., 2003; Diuk et al., 2008; Kansky et al., 2017). Models trained on a compact representation of state can train more quickly (Keramati et al., 2018; Melnik et al., 2018), even though the underlying environment dynamics have not changed.
Action Space Complexity. Increasing the size of the action space can quickly lead to intractable computation for Q-value or policy function approximation, especially when that approximation is computationally expensive, as in deep RL (Dulac-Arnold et al., 2015). Atari games available in ALE allow 4–18 actions (Mnih et al., 2013). While this may seem small, even environments with as few as 10 actions can present challenges to efficient learning (Dulac-Arnold et al., 2012).
Environment Stochasticity. Previous work has shown that environments with higher stochasticity can be more difficult for some types of RL algorithms to learn (Henderson et al., 2017). Atari games are deterministic, but ToyBox enables Atari-like environments with parameterized stochasticity.
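As a sketch of what parameterized stochasticity can look like (the dynamics below are a toy stand-in, not any ToyBox game): with probability epsilon the environment overrides the agent’s chosen action, and epsilon = 0 recovers fully deterministic behavior.

```python
import random

def transition(state, action):
    """Toy deterministic dynamics standing in for a game's step function."""
    return state + action

def noisy_step(state, action, n_actions, epsilon, rng):
    """With probability epsilon, the environment overrides the chosen action."""
    if rng.random() < epsilon:
        action = rng.randrange(n_actions)
    return transition(state, action)

rng = random.Random(0)
s = 0
for _ in range(100):
    s = noisy_step(s, action=1, n_actions=4, epsilon=0.0, rng=rng)
# epsilon = 0.0 recovers a fully deterministic trajectory: s == 100
```

Sweeping epsilon then gives a one-parameter family of environments along the stochasticity axis.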
Despite these open challenges, it has been argued that Atari’s environments are not sufficiently complex to evaluate reinforcement learning agents because the source code is small (Zhang et al., 2018). However, source code size as minimum description length is a poor proxy for environment complexity. As Raghu et al. (2017) have shown, Erdős-Selfridge-Spencer games can be represented quite compactly, simply requiring the assignment of two parameters, but represent a large combinatorial space of potential games for evaluating RL algorithms.
Ultimately, each of these axes of complexity is observed through the agent’s interaction with the environment. Figure 2 depicts the traditional RL diagram, overlaid with the interventions a researcher may make, as well as the models that inform both the researcher’s and the agent’s decision-making. The solid arrows represent current avenues for experimentation used by deep RL researchers to evaluate models: sensory perception of state, action selection manipulations, and reward function definition.
Without intervention or introspection of the environment, researchers must use observational data of agent behavior to reason about experimental results. ToyBox enables new methodology for experimentation in deep RL. Ultimately, we mimic Atari games because there is a font of untapped research questions related to testing and explaining the behavior of deep RL agents. We chose our initial set of games to establish that there are surprising results on even seemingly “solved” games such as Breakout and Space Invaders.
Performance. We achieved requirement 1 by designing a simple CPU graphics library; we demonstrate ToyBox’s efficiency (requirement 2) in Table 1. Note that ToyBox permits researchers to process games entirely in grayscale and achieve substantial additional performance gains. However, since this is not a feature offered in ALE, we only compared against the ToyBox RGB(A) rendering.
| Game | System | Raw kFPS | Gym kFPS |
|---|---|---|---|
| Breakout | ALE | 52 (1.3) | 3.4 (0.065) |
| | ToyBox | 230 (5.4) | 7.2 (0.23) |
| Amidar | ALE | 61 (2.9) | 3.0 (0.083) |
| | ToyBox | 250 (2.3) | 6.0 (0.112) |
| Space Invaders | ALE | 55 (1.3) | 3.9 (0.072) |
| | ToyBox | 120 (3.4) | 5.2 (0.082) |

Table 1. Thousand frames per second (kFPS) on a MacBook Air (OSX version 10.13.6) with a 1.6 GHz Intel Core i5 processor having 4 logical cores. Rates are averaged over 30 trials of 10,000 steps and reported to two significant digits, with standard error in parentheses. We consistently observed an approximately 95% slowdown when interacting with both ALE (C++) and ToyBox (Rust) via OpenAI Gym. All benchmarks are run from CPython 3.5 and include FFI overhead (via atari-py for ALE).
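The benchmark protocol in the caption (30 trials of 10,000 steps, mean kFPS with standard error) can be sketched as follows; `step` stands in for one simulated-and-rendered frame, and the demo call uses a no-op frame rather than a real environment.

```python
import statistics
import time

def measure_kfps(step, n_steps=10_000, n_trials=30):
    """Mean thousand-frames-per-second over repeated trials, with std. error."""
    rates = []
    for _ in range(n_trials):
        start = time.perf_counter()
        for _ in range(n_steps):
            step()
        elapsed = time.perf_counter() - start
        rates.append(n_steps / elapsed / 1000.0)  # kFPS for this trial
    mean = statistics.mean(rates)
    sem = statistics.stdev(rates) / len(rates) ** 0.5  # standard error
    return mean, sem

# demo with a no-op "frame"; real use would pass an environment's step/render
mean_kfps, sem = measure_kfps(lambda: None, n_steps=1_000, n_trials=5)
```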
Fidelity. Figure 3 depicts frames at roughly equivalent points in the execution of a fixed action trace. If ToyBox perfectly reproduced the games in ALE, the frames would be exactly the same. Three factors prevent exact replication of games: (1) Atari 2600 game source code is not available; (2) there are no formal specifications and few informal specifications of games (and some informal specifications contain errors, e.g., the Atari manual for Breakout refers to a row of bricks that does not exist); and (3) inferring arbitrarily complex programs from data is extremely challenging (Raychev et al., 2016). Therefore, comparing frames of program traces is not a sufficient measure of how closely we have approximated ALE games.
A human study could help assess whether the two implementations are sufficiently alike. While we did solicit feedback from Atari aficionados during development (via a playable interface), we did not view human perception of equivalence as a sufficient measure of fidelity. Human players rely on unique problem-solving capabilities that deep RL agents have not yet achieved, while deep networks are undeterred by the kind of noise that can confuse humans (Szegedy et al., 2014; Dubey et al., 2018). Instead, we focused on tuning our environments to produce comparable post-training agent performance.
Methodology. We used three off-the-shelf implementations of training algorithms from OpenAI Baselines (Dhariwal et al., 2017), with default parameter settings, trained for 5e7 steps: a2c (Mnih et al., 2016), acktr (Wu et al., 2017), and ppo2 (Schulman et al., 2017). Due to issues with variability across agents and environments (Henderson et al., 2017; Clary et al., 2018; Jordan et al., 2018), we trained ten replicates for each of these training algorithms, differentiated by their random seed. Since there are various other uncontrolled sources of randomness, we evaluated each of these thirty agents per game over thirty games, each with a unique random seed. Figure 4 depicts our results: we find that agents achieve sufficiently similar performance in each analogous environment and have roughly equivalent rankings (idiosyncrasies are discussed in the Fig. 4 caption).
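The evaluation grid above amounts to 3 algorithms × 10 training seeds × 30 evaluation seeds per game, which can be enumerated directly:

```python
from itertools import product

ALGORITHMS = ["a2c", "acktr", "ppo2"]   # OpenAI Baselines implementations
TRAIN_SEEDS = range(10)                 # ten training replicates per algorithm
EVAL_SEEDS = range(30)                  # thirty evaluation seeds per agent

runs = list(product(ALGORITHMS, TRAIN_SEEDS, EVAL_SEEDS))
n_evaluations_per_game = len(runs)      # 3 * 10 * 30 = 900
```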
Findings. ToyBox is much faster than ALE and our ranking results in Fig. 4 show that our current implementations are comparable to their ALE counterparts. We will continue to strive for fidelity over the course of game development.
4 Case Studies
We demonstrate requirement 3 (customization) with four case studies, in which we test the post-training performance of agents according to some hypothesis about behavior. (The analyses in this paper focus on evaluations of post-training performance, but ToyBox interventions can be applied at any time, including during training.) This sort of testing is useful for evaluating a single agent prior to deployment (i.e., acceptance testing) or as existential proof of behavior under counterfactual conditions. None of these tests is currently possible in ALE because they all rely on resuming gameplay from an arbitrary modified state. Furthermore, all experiments in this section can be expressed in fewer than 200 lines of code, in addition to the code required for loading the OpenAI Baselines models. We include a code snippet from one of our experiments in Figure 7.
Breakout: Polar Angles.
An agent that has learned to play Breakout must have at least learned to hit the ball. Our first test manipulates the starting angle of the ball.
We modified the start state to change the initial launch angle of the ball in 5-degree increments (72 configurations). Figure 5
depicts the results. Note that the agent fails to achieve any score with horizontal ball angles: since Breakout has no gravity, balls simply bounce horizontally forever, never hitting any bricks or threatening the paddle. The agent also sometimes struggled with vertical angles. When we observed this behavior, the agent would keep the ball aligned perfectly in the center of the board, hitting it precisely in the center of the paddle, failing to make progress. This is an unexpected behavior that is entirely unlike human gameplay. In all, we found the agent to be resilient to starting angles, albeit with high variance. This suggests that an agent can be successful even with balls traveling at angles it may never have observed in training, a powerful recommendation for the training algorithms that produced such robust RL agents.
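The angle sweep above, and why the horizontal headings are degenerate, can be sketched in a few lines (the velocity convention here is illustrative, not ToyBox’s internal representation):

```python
import math

angles = [5 * k for k in range(72)]  # 0, 5, ..., 355 degrees

def velocity(angle_deg, speed=1.0):
    """Ball velocity vector for a given launch angle."""
    rad = math.radians(angle_deg)
    return speed * math.cos(rad), speed * math.sin(rad)

# Purely horizontal headings have (numerically) zero vertical velocity: with
# no gravity, such a ball bounces sideways forever and never hits a brick.
degenerate = [a for a in angles if abs(velocity(a)[1]) < 1e-9]
```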
Breakout: Tunneling. One of the most promising behaviors observed in deep RL has been the apparent ability to learn higher-level strategies. Perhaps no high-level strategy has been written about more than “tunneling” in the game of Breakout, which happens when the player clears a column of bricks, causing the ball to bounce through the hole and onto the ceiling, clearing many bricks rapidly (Mnih et al., 2015; Greydanus et al., 2018). One way to test whether an agent intentionally exploits tunneling is to give it a board with a nearly complete tunnel, save for a single brick, and test whether the agent can prioritize aiming at that single brick.
For every brick, we removed all other bricks in the column, creating a nearly-completed tunnel. Figure 5 depicts the results: the value for each brick is the reciprocal of the median number of time steps before that brick was removed.
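The per-brick metric above can be sketched as follows: for each brick, collect the time steps until removal across trials and score the brick as the reciprocal of the median, so higher values mean the brick is cleared sooner. The trial data below is made up for illustration.

```python
from statistics import median

def brick_scores(trials_per_brick):
    """Map each brick to 1 / median(steps until removal); higher = sooner."""
    return {brick: 1.0 / median(steps)
            for brick, steps in trials_per_brick.items()}

scores = brick_scores({
    "center": [40, 50, 60],     # cleared quickly across trials
    "corner": [400, 500, 600],  # cleared slowly
})
```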
If an agent were able to build tunnels, we would expect, for example, symmetry along one or both axes. Instead, we see that the agent clears one column in the center very quickly, the left adjacent column and some bricks in the upper left region a bit more slowly, while the remaining bricks all take about the same time. Observing agent gameplay, we saw the agents hit the ball to predictable locations, regardless of the board configuration.
Amidar: Enemy Protocol.
Suppose we would like to test whether an agent has learned to avoid adversarial elements of a game: e.g., the enemies in Amidar. To test this, we might drop the agent around the corner from an enemy, or position the enemies to “gang up” on the player, forcing the agent to move in a particular direction.
This kind of intervention is only meaningful if enemy position is a function of current location. Observation led us to conclude that enemies move in fixed loops, likely implemented as lookup tables. This contrasts with the “Amidar movement” protocol, which is widely believed to dictate enemy behavior. (Under “Amidar movement,” an enemy moves with a diagonal velocity, flipping its vertical direction when encountering the top or bottom of the board and its horizontal direction when encountering the left or right edge; see https://en.wikipedia.org/wiki/Amidar.) The protocol matters for intervention because, for a lookup table, moving enemies will have no effect: enemies will simply “teleport” to the next location in the lookup table.
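The “Amidar movement” rule described above can be sketched directly; the grid size and coordinate convention below are illustrative.

```python
def amidar_step(x, y, dx, dy, width, height):
    """One tick of 'Amidar movement': diagonal velocity, flipping the
    horizontal direction at the left/right edges and the vertical
    direction at the top/bottom edges."""
    nx, ny = x + dx, y + dy
    if nx < 0 or nx >= width:   # left/right edge: flip horizontal direction
        dx = -dx
        nx = x + dx
    if ny < 0 or ny >= height:  # top/bottom edge: flip vertical direction
        dy = -dy
        ny = y + dy
    return nx, ny, dx, dy

# trace an enemy on a tiny 4x3 board; it never leaves the board
x, y, dx, dy = 0, 0, 1, 1
trace = []
for _ in range(6):
    x, y, dx, dy = amidar_step(x, y, dx, dy, width=4, height=3)
    trace.append((x, y))
```

Unlike a lookup table, this protocol is a function of the enemy’s current position and velocity, so relocating an enemy meaningfully changes its future path.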
The upper left plot in Fig. 6 shows a baseline test for how an individual trained agent performs under each of four different enemy movement protocols: (1) a lookup table, on which the model was trained, (2) the “Amidar movement” protocol, (3) a random protocol, where at each junction the enemy chooses a random direction, and (4) an adversarial protocol, where enemies explore via random turns until the player is within line of sight, at which time they move toward the player’s location. Note that, since enemies start far away from the player, the agent can (and does) easily make progress at the start of the game, regardless of enemy position. However, as the game progresses, the enemies close in and there are fewer opportunities for rewards.
The upper right plot in Fig. 6 shows a test in which enemies “gang up” on the player: the enemies’ start positions are modified to be close to the player. We were at first surprised to see how well the agent did; however, upon examination, we found that the agent was using up the jump button, which allows the player to bypass enemies, at the beginning of the game. The lower half of Fig. 6 depicts the results of running the baseline and test with no jumps: while the baseline performs similarly, the player dies quickly for all non-lookup-table enemy protocols.
Space Invaders: Shield Usage.
In Space Invaders, the player can seek refuge under three shields from the rows of alien ships shooting down from above. We ran a test to see whether removing two of the three shields would cause an agent to use the remaining one more often. We also ran two baseline comparisons for a fixed amount of time: one where all shields were present (the default setting) and one where no shields were present.
Figure 8 shows the results under test. Since score provides an incomplete picture of agent behavior, we also tracked the agent’s location (a simple query in ToyBox). We observe that the player does not appear to change its preferred locations under any of the tests.
A Note on Negative Results: Space Invaders, as we have implemented it, has turned out to be a fairly uninteresting game. Randomly selecting from the trimmed action set that OpenAI Gym allows can lead to fairly good performance. Furthermore, in our implementation, which includes both random and adversarial enemy behavior, the agent’s performance was invariant to randomness in enemy behavior.
Findings. We have shown a range of interventions and queries possible with ToyBox, all of which would be impossible to conduct using ALE. The interventions we presented were designed to demonstrate the power of ToyBox’s design and implementation, rather than to satisfy any particular RL research agenda. We were able to iterate rapidly on all of our experiments due to ToyBox’s fast performance and its simple API for editing state. In addition to highlighting ToyBox’s capacity for evaluating a single agent, we have shown how ToyBox may be used to evaluate models, by comparing post-training performance rankings under test.
5 Related Work
We are hardly the first to suggest new or different benchmarks for deep RL (Kansky et al., 2017; Zhang et al., 2018; Wang et al., 2019). Four major qualities differentiate ToyBox from prior work: (1) it is based on a widely used and accepted community standard (ALE); (2) results on ALE can be replicated and compared in ToyBox, providing continuity to individual research trajectories; (3) a wide array of features of ToyBox environments are intervenable; furthermore, a particular configuration is easily exported and can be shared to further replication efforts; and (4) an individual game may be modified to produce a family of games, leading to a potentially infinite number of environments per game; for example, the injection of real-world images into the background of Breakout described by Zhang et al. (2018) would be trivial to implement in ToyBox.
Recall the available interventions in the traditional RL research environment shown in Figure 2. Most existing work manipulates the state input to the agent (i.e., the agent’s perception of state), the reward function, or the agent’s actions, e.g.:
Reward function: Hybrid reward structures decompose the reward function, making it easier for some agents to learn particularly difficult games in the Atari suite (Van Seijen et al., 2017).
These efforts can help combat overfitting, learning spurious correlations, or generally failing to make progress on a task. ToyBox is orthogonal to such efforts.
We have introduced relevant citations throughout the paper. Here we highlight critical work that was not otherwise mentioned.
Evaluation and Replication. Recent investigations into how the community handles the evaluation and replication of agent performance have exposed some serious challenges that the community needs to address (Henderson et al., 2017; Balduzzi et al., 2018; Clary et al., 2018; Jordan et al., 2018). Environments such as ToyBox, and evaluations of the style presented in Section 4, are one possible way to ameliorate issues surrounding replication, robustness, and variability.
Adversarial RL. Much work on adversarial RL focuses on exploiting decision boundaries (Mandlekar et al., 2017), adding nonrandom noise to state input for the purpose of altering or misdirecting policies (Huang et al., 2017), or introducing additional agents to apply adversarial force during training to produce agents with more robust policies in physics simulations (Pinto et al., 2017).
Saliency maps. Saliency maps were developed as an insight into model behavior (Simonyan et al., 2013), but more recently have been put forth as a tool for explainability (Greydanus et al., 2018). We show that in at least one case, saliency maps can be misleading, due in part to bias introduced by the researcher’s models of the agent and the environment, as shown in Fig. 2. Experiments enabled by ToyBox provide much more specific information and can disambiguate competing hypotheses about agent behavior.
This paper is a proof of concept for experimentation on the behavior of deep RL agents. However, there are many possible applications beyond this type of post-training testing:
Rejection sampling/dynamic analysis. One of the biggest strengths offered in ToyBox is the ability to answer arbitrary questions about the environment structures and code at any time. Agents may encounter local minima during training that are not representative of the target deployment distribution, due to factors such as random seeds (Irpan, 2018). Training replicates with many random seeds is a costly solution to this problem. Instead, researchers could use ToyBox to monitor environmental features to test whether an agent is spending too much time in an undesirable state. Similar types of model monitoring could be used to identify “detachment,” a condition of the agent-environment interaction that induces catastrophic forgetting (Kirkpatrick et al., 2017; Ecoffet et al., 2018).
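The monitoring idea above can be sketched as a predicate over exported state, flagging an agent that spends too large a fraction of recent frames in an undesirable region of state space. The state fields and thresholds below are hypothetical illustrations.

```python
from collections import deque

def make_monitor(predicate, window=1000, threshold=0.5):
    """Flag when `predicate` holds on more than `threshold` of the last
    `window` states the agent has visited."""
    recent = deque(maxlen=window)
    def check(state):
        recent.append(bool(predicate(state)))
        return len(recent) == window and sum(recent) / window > threshold
    return check

# e.g., flag a Breakout agent stuck rallying a near-vertical ball forever
stuck = make_monitor(lambda s: abs(s["ball_dx"]) < 0.05,
                     window=10, threshold=0.8)
flagged = False
for _ in range(20):
    flagged = stuck({"ball_dx": 0.0}) or flagged
```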
Datasets from game families. The ability to generate a family of games with similar but different mechanics provides a convenient dataset which can be used in a variety of ways. For example, with ToyBox, a researcher can define a family of Breakout-style games with slightly different movement physics (e.g., ball velocity and acceleration, paddle-bounce mechanics) sampled from some real-valued parameter domain. This can be used to create a train/test split over environments (Cobbe et al., 2018), to support transfer learning experiments from one game’s physics to another (Taylor & Stone, 2009), or to test generalization across multiple environments (Guestrin et al., 2003).
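Sampling such a family and splitting it into train and test environments can be sketched as follows; the config field names and parameter ranges are illustrative, not ToyBox’s actual schema.

```python
import random

def sample_family(n, seed=0):
    """Sample n Breakout-style configs from a real-valued parameter domain."""
    rng = random.Random(seed)
    return [{"ball_speed": rng.uniform(1.0, 3.0),
             "paddle_width": rng.uniform(16.0, 48.0)}
            for _ in range(n)]

family = sample_family(100)
train, test = family[:80], family[80:]  # a simple train/test split
```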
Adversarial testing. With total control over environment dynamics, trained agents can be stress-tested by running the agent on progressively more difficult versions of the game. Tests of this form can serve to disambiguate agent behavior that can be explained in multiple ways—much as ToyBox’s more advanced Amidar movement protocols revealed that agents had not necessarily learned to avoid enemies so much as to memorize their observed paths. Difficulty can be increased by increasing stochasticity in the environment, or by increasing the speed or accuracy of adversarial game elements. Similar methods could be used to create a suite of curriculum learning environments.
We have shown that ToyBox unlocks novel and important capabilities for evaluating deep reinforcement learning agents. We introduce a new paradigm for thinking about evaluating agents, in the style of acceptance testing. We demonstrate ToyBox capabilities with four case studies and outline a variety of other applications.
- Balduzzi et al. (2018) Balduzzi, D., Tuyls, K., Perolat, J., and Graepel, T. Re-evaluating evaluation. In Advances in Neural Information Processing Systems, 2018.
- Bellemare et al. (2012) Bellemare, M. G., Veness, J., and Bowling, M. Investigating contingency awareness using Atari 2600 games. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
- Bellemare et al. (2013) Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253–279, jun 2013.
- Braylan et al. (2015) Braylan, A., Hollenbeck, M., Meyerson, E., and Miikkulainen, R. Frame skip is a powerful parameter for learning to play atari. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
- Burda et al. (2019) Burda, Y., Edwards, H., Storkey, A., and Klimov, O. Exploration by random network distillation. In International Conference on Learning Representations, 2019.
- Clary et al. (2018) Clary, K., Tosch, E., Foley, J., and Jensen, D. Let’s Play Again: Variability of Deep Reinforcement Learning Agents in Atari Environments. In NeurIPS 2018 Workshop on Critiquing and Correcting Trends in Machine Learning, 2018.
- Cobbe et al. (2018) Cobbe, K., Klimov, O., Hesse, C., Kim, T., and Schulman, J. Quantifying generalization in reinforcement learning. arXiv preprint arXiv:1812.02341, 2018.
- Dhariwal et al. (2017) Dhariwal, P., Hesse, C., Klimov, O., Nichol, A., Plappert, M., Radford, A., Schulman, J., Sidor, S., and Wu, Y. OpenAI Baselines. https://github.com/openai/baselines, 2017.
- Diuk et al. (2008) Diuk, C., Cohen, A., and Littman, M. L. An object-oriented representation for efficient reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pp. 240–247, New York, NY, USA, 2008. ACM. ISBN 978-1-60558-205-4. doi: 10.1145/1390156.1390187.
- Dubey et al. (2018) Dubey, R., Agrawal, P., Pathak, D., Griffiths, T. L., and Efros, A. A. Investigating human priors for playing video games. arXiv preprint arXiv:1802.10217, 2018.
- Dulac-Arnold et al. (2012) Dulac-Arnold, G., Denoyer, L., Preux, P., and Gallinari, P. Fast reinforcement learning with large action sets using error-correcting output codes for mdp factorization. In Flach, P. A., De Bie, T., and Cristianini, N. (eds.), Machine Learning and Knowledge Discovery in Databases, pp. 180–194, Berlin, Heidelberg, 2012. Springer Berlin Heidelberg.
- Dulac-Arnold et al. (2015) Dulac-Arnold, G., Evans, R., van Hasselt, H., Sunehag, P., Lillicrap, T., Hunt, J., Mann, T., Weber, T., Degris, T., and Coppin, B. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679, 2015.
- Ecoffet et al. (2018) Ecoffet, A., Huizinga, J., Lehman, J., Stanley, K. O., and Clune, J. Montezuma’s Revenge Solved by Go-Explore, a New Algorithm for Hard-Exploration Problems (Sets Records on Pitfall, Too). https://eng.uber.com/go-explore/, Nov. 2018.
- Farquhar et al. (2018) Farquhar, G., Rocktaschel, T., Igl, M., and Whiteson, S. Treeqn and atreec: Differentiable tree-structured models for deep reinforcement learning. In ICLR 2018: Proceedings of the Sixth International Conference on Learning Representations, April 2018.
- Greydanus et al. (2018) Greydanus, S., Koul, A., Dodge, J., and Fern, A. Visualizing and understanding atari agents. arXiv preprint arXiv:1711.00138, 2018.
- Guestrin et al. (2003) Guestrin, C., Koller, D., Gearhart, C., and Kanodia, N. Generalizing plans to new environments in relational mdps. In Proceedings of the 18th international joint conference on Artificial intelligence, pp. 1003–1010. Morgan Kaufmann Publishers Inc., 2003.
- Henderson et al. (2017) Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., and Meger, D. Deep Reinforcement Learning that Matters. In AAAI Conference on Artificial Intelligence (AAAI). arXiv preprint 1709.06560, 2017.
- Hessel et al. (2018) Hessel, M., Modayil, J., Van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., and Silver, D. Rainbow: Combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Huang et al. (2017) Huang, S., Papernot, N., Goodfellow, I., Duan, Y., and Abbeel, P. Adversarial attacks on neural network policies. In ICLR Workshop, 2017.
- Irpan (2018) Irpan, A. Deep reinforcement learning doesn’t work yet. https://www.alexirpan.com/2018/02/14/rl-hard.html, 2018.
- Jiang et al. (2015) Jiang, N., Kulesza, A., Singh, S., and Lewis, R. The dependence of effective planning horizon on model accuracy. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 1181–1189. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
- Jordan et al. (2018) Jordan, S. M., Cohen, D., and Thomas, P. S. Using Cumulative Distribution Based Performance Analysis to Benchmark Models. In NeurIPS 2018 Workshop on Critiquing and Correcting Trends in Machine Learning, 2018.
- Kansky et al. (2017) Kansky, K., Silver, T., Mély, D. A., Eldawy, M., Lázaro-Gredilla, M., Lou, X., Dorfman, N., Sidor, S., Phoenix, S., and George, D. Schema networks: Zero-shot transfer with a generative causal model of intuitive physics. In Proceedings of the 34th International Conference on Machine Learning, 2017.
- Keramati et al. (2018) Keramati, R., Whang, J., Cho, P., and Brunskill, E. Strategic object oriented reinforcement learning. CoRR, abs/1806.00175, 2018.
- Kirkpatrick et al. (2017) Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, pp. 201611835, 2017.
- Lake et al. (2017) Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.
- Machado et al. (2017) Machado, M. C., Bellemare, M. G., Talvitie, E., Veness, J., Hausknecht, M. J., and Bowling, M. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. CoRR, abs/1709.06009, 2017.
- Mandlekar et al. (2017) Mandlekar, A., Zhu, Y., Garg, A., Fei-Fei, L., and Savarese, S. Adversarially robust policy learning: Active construction of physically-plausible perturbations. In Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on, pp. 3932–3939. IEEE, 2017.
- Melnik et al. (2018) Melnik, A., Fleer, S., Schilling, M., and Ritter, H. Modularization of end-to-end learning: Case study in arcade games. In NeurIPS 2018 Workshop on Causal Learning, 2018.
- Mnih et al. (2013) Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
- Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level Control through Deep Reinforcement Learning. Nature, 518(7540):529–533, 2015.
- Mnih et al. (2016) Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., and Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning. In International conference on Machine Learning, pp. 1928–1937, 2016.
- Nair et al. (2015) Nair, A., Srinivasan, P., Blackwell, S., Alcicek, C., Fearon, R., De Maria, A., Panneershelvam, V., Suleyman, M., Beattie, C., Petersen, S., et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
- Nichol et al. (2018) Nichol, A., Pfau, V., Hesse, C., Klimov, O., and Schulman, J. Gotta learn fast: A new benchmark for generalization in RL. arXiv preprint arXiv:1804.03720, 2018.
- Pinto et al. (2017) Pinto, L., Davidson, J., Sukthankar, R., and Gupta, A. Robust adversarial reinforcement learning. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 2817–2826, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
- Raghu et al. (2017) Raghu, M., Irpan, A., Andreas, J., Kleinberg, R., Le, Q. V., and Kleinberg, J. Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games? arXiv preprint arXiv:1711.02301, 2017.
- Raychev et al. (2016) Raychev, V., Bielik, P., Vechev, M., and Krause, A. Learning programs from noisy data. In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2016), pp. 761–774. ACM, 2016.
- Salimans & Chen (2018) Salimans, T. and Chen, R. Learning Montezuma’s Revenge from a Single Demonstration. In NeurIPS 2018 Deep Reinforcement Learning Workshop, 2018.
- Schulman et al. (2017) Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347, 2017.
- Seita (2016) Seita, D. Frame Skipping and Pre-Processing for Deep Q-Networks on Atari 2600 Games. http://tiny.cc/d7uh2y, 2016.
- Simonyan et al. (2013) Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps, 2013.
- Sutton & Barto (1998) Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT Press, 1998.
- Szegedy et al. (2014) Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
- Taylor & Stone (2009) Taylor, M. E. and Stone, P. Transfer learning for reinforcement learning domains: A survey. J. Mach. Learn. Res., 10:1633–1685, December 2009. ISSN 1532-4435.
- Van Hasselt et al. (2016) Van Hasselt, H., Guez, A., and Silver, D. Deep reinforcement learning with double q-learning. In AAAI, volume 2, pp. 5, 2016.
- Van Seijen et al. (2017) Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., and Tsang, J. Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5392–5402, 2017.
- Wang et al. (2019) Wang, R., Lehman, J., Clune, J., and Stanley, K. O. Paired open-ended trailblazer (POET): Endlessly generating increasingly complex and diverse learning environments and their solutions. arXiv preprint arXiv:1901.01753, 2019.
- Wu et al. (2017) Wu, Y., Mansimov, E., Grosse, R. B., Liao, S., and Ba, J. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In Advances in neural information processing systems, pp. 5279–5288, 2017.
- Zhang et al. (2018) Zhang, A., Wu, Y., and Pineau, J. Natural environment benchmarks for reinforcement learning. arXiv preprint arXiv:1811.06032, 2018.