This work presents an application of Reinforcement Learning (RL) for the complete control of real soccer robots of the Very Small Size Soccer (VSSS) league, a traditional league in the Latin American Robotics Competition (LARC). In the VSSS league, two teams of three small robots play against each other. We propose a simulated environment in which continuous or discrete control policies can be trained, and a Sim-to-Real method that allows using the obtained policies to control a robot in the real world. The results show that the learned policies display a broad repertoire of behaviors that are difficult to specify by hand. This approach, called VSSS-RL, was able to beat the human-designed policy for the striker of the team ranked 3rd place in the 2018 LARC, in 1-vs-1 matches.
2 Research Problem
The VSSS robots are usually programmed to behave adequately in every situation identified by the programmers, employing path planning, collision avoidance, and PID control methods Kim et al. (2004). However, it is extremely hard to foresee and handle every possible situation in a dynamic game such as soccer. Therefore, the need for data-driven approaches such as RL is clear.
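To make the classic pipeline concrete, the snippet below sketches the kind of discrete PID controller commonly used to track a wheel-speed setpoint. The gains, timestep, and class name are illustrative assumptions, not values from the paper.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error
        self.prev_error = 0.0    # error at the previous step

    def step(self, setpoint, measurement):
        # Standard PID law: proportional + integral + derivative terms.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Controllers like this work well for tracking a given reference, but the reference itself (where to go, when to intercept) still has to be produced by hand-coded planning logic, which motivates the learning-based approach.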
However, several barriers exist to applying RL successfully in the real world Dulac-Arnold et al. (2019), as the large amounts of interaction required by the agents to achieve adequate performance are impractical due to hardware degradation, energy consumption, and the time required. Thus, the research problem considered in this work is the application of the Sim-to-Real approach, in which the agents are trained in simulation and the learned policies are transferred to the real robots.
Deep RL is a suitable approach for learning control and complex behaviors by interacting with the environment, since it requires only the specification of a reward function that expresses the desired goals. In the robot soccer literature, RL has been applied to learn specific behaviors, such as kicking Riedmiller and Gabel (2007) and scoring penalty goals Hester et al. (2010).
Recently, two RL soccer simulation environments have been proposed: MuJoCo Soccer Todorov et al. (2012) and Google Research Football Kurach et al. (2019). However, they are not suitable for the study of Sim-to-Real, because they either do not consider important physical and dynamical aspects or represent a very complex scenario that is not achievable with current robotics technology. Therefore, the main motivation behind this work is the need for an adequate environment that allows studying the combination of RL with Sim-to-Real in dynamic, multi-agent, competitive, and cooperative situations.
4 Technical Contribution
We propose a simulated environment called VSSS-RL (source code will be available soon at: https://github.com/robocin/vss-environment), which supports both continuous and discrete control policies. It includes a customized version of the VSS SDK simulator (VSS SDK, 2019) and a set of wrapper modules compatible with the OpenAI Gym standard Brockman et al. (2016). It consists of two main independent processes: the experimental process and the training process. In the former, an OpenAI Gym environment parser was developed, and wrapper classes were implemented to communicate with the agents. In the latter, the collected experiences are stored in an experience buffer that is used to update the policies, as illustrated in Fig. 2.
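A wrapper following the Gym interface could look like the sketch below. The class name, observation layout, and dimensions are hypothetical assumptions (the real wrapper would subclass `gym.Env` and query the VSS SDK simulator); only the `reset`/`step` API shape follows the Gym standard the text refers to.

```python
import numpy as np


class VSSSEnvSketch:
    """Hypothetical environment wrapper following the OpenAI Gym
    reset/step API. Dimensions are illustrative: e.g. ball
    (x, y, vx, vy) plus 6 robots * (x, y, theta) = 22 observation
    values; the action is a (linear, angular) speed command."""

    OBS_DIM = 22
    ACT_DIM = 2

    def reset(self):
        # A real wrapper would reset the simulator and read the field state.
        return np.zeros(self.OBS_DIM, dtype=np.float32)

    def step(self, action):
        # A real wrapper would send the action to the simulator,
        # advance one tick, and compute the reward from the new state.
        obs = np.zeros(self.OBS_DIM, dtype=np.float32)
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info
```

Keeping the wrapper Gym-compatible means off-the-shelf agent implementations can collect experience from it without modification, which matches the paper's two-process design of environment interaction feeding an experience buffer.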
We also propose a Sim-to-Real method to transfer the obtained policies to a robot in the real world. It is a Domain Adaptation method Andrychowicz et al. (2018), consisting of a feed-forward neural network that learns to map the desired high-level actions (linear and angular speeds) to low-level control commands for the left and right wheel speeds (Fig. 2).
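For reference, the ideal kinematic version of this mapping, which the learned network would refine to absorb real-robot dynamics, is the standard differential-drive relation. The robot dimensions below are illustrative assumptions, not measurements from the paper.

```python
def wheel_speeds(v, w, wheel_base=0.075, wheel_radius=0.025):
    """Ideal differential-drive mapping from a desired linear speed
    v (m/s) and angular speed w (rad/s) to left/right wheel angular
    speeds (rad/s). Dimensions are illustrative placeholders."""
    v_left = (v - w * wheel_base / 2.0) / wheel_radius
    v_right = (v + w * wheel_base / 2.0) / wheel_radius
    return v_left, v_right
```

A learned mapping is useful precisely where this closed form breaks down: wheel slip, motor saturation, and latency make the real command-to-motion relation deviate from the ideal kinematics.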
4.1 Experimental Results
The results, submitted to ICRA 2020, show that the two baseline RL methods evaluated, DDPG Lillicrap et al. (2015) and DQN Mnih et al. (2013), were able to learn suitable policies in simulation when reward shaping Sutton and Barto (1998) was applied. The learned policies display rich and complex behaviors (see the video available at: https://youtu.be/a9dTMtanh-U) that are extremely difficult to specify by hand, as is identifying the correct moments at which they should be applied. Moreover, the proposed Sim-to-Real method allowed us to achieve similar results in terms of the average number of steps to score a goal in simulation and in the real world.
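One common way to implement reward shaping in this setting is potential-based shaping, which adds a dense term without changing the optimal policy. The potential below (negative ball-to-goal distance) and the field coordinates are illustrative assumptions, not the exact reward used in the paper.

```python
import math


def potential(ball_xy, goal=(0.75, 0.0)):
    """Illustrative potential: negative distance from the ball to the
    opponent goal (goal position is an assumed field coordinate)."""
    bx, by = ball_xy
    return -math.hypot(goal[0] - bx, goal[1] - by)


def shaped_reward(r_env, s, s_next, gamma=0.99):
    """Potential-based shaping: the sparse environment reward plus
    gamma * phi(s') - phi(s), which preserves the optimal policy."""
    return r_env + gamma * potential(s_next) - potential(s)
```

With shaping of this form, moving the ball toward the goal yields an immediate positive signal, which eases learning compared to the sparse goal-only reward.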
Finally, the complete approach was evaluated in 1-vs-1 matches against the striker of the RoboCIn VSSS team, which ranked 3rd place in LARC 2018. The final scores were 19 for VSSS-RL and 13 for RoboCIn in the first game, and 22 for VSSS-RL and 17 for RoboCIn in the second. These wins highlight the capabilities of the proposed approach.
- Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177. Cited by: §4.
- OpenAI gym. arXiv preprint arXiv:1606.01540. Cited by: §4.
- Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901. Cited by: §2.
- Generalized model learning for reinforcement learning on a humanoid robot. In 2010 IEEE International Conference on Robotics and Automation, pp. 2369–2374. Cited by: §3.
- Soccer robotics. Vol. 11, Springer Science & Business Media. Cited by: §2.
- Google research football: a novel reinforcement learning environment. arXiv preprint arXiv:1907.11180. Cited by: §3.
- Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Cited by: §4.1.
- On experiences in a complex and competitive gaming domain: reinforcement learning meets robocup. In 2007 IEEE Symposium on Computational Intelligence and Games, pp. 17–23. Cited by: §3.
- Introduction to reinforcement learning. Vol. 2, MIT press Cambridge. Cited by: §4.1.
- Mujoco: a physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. Cited by: §3.
- Very Small Size Soccer Rules (2019). Note: http://www.cbrobotica.org/wp-content/uploads/2014/03/VerySmall2008_en.pdf [Online; accessed 26-May-2019]. Cited by: §1.
- Playing atari with deep reinforcement learning. NIPS Deep Learning Workshop. Cited by: §4.1.
- VSS SDK (2019). Note: https://vss-sdk.github.io/book/general.html [Online; accessed 5-June-2019]. Cited by: Figure 1, §4.