1 Bold Hearts
The team Bold Hearts was founded as part of the Adaptive Systems Research Group at the University of Hertfordshire. The team started participating in RoboCup in 2003 in the simulation leagues and transitioned to the Humanoid KidSize League in 2013. We hope to participate in that league in 2019 for the seventh year in a row.
The following are the main achievements of team Bold Hearts in the Humanoid League over the last few years.
Quarter-finalist RoboCup World Championship 2017 (1st in group)
2nd round RoboCup World Championship 2016 (1st in group)
1st Iran Open 2016
2nd round RoboCup World Championship 2015 (1st in group)
3rd German Open 2015
2nd RoboCup World Championship 2014
2 Introducing New Members Gradually
As for most RoboCup teams, recruiting new members is a crucial task. It is important for the team's overall success to continuously recruit new members and transfer knowledge to new generations.
We have always kept a well-maintained wiki for new members to read up on the given infrastructure. Additionally, we have made it easier to set up the code by using tools such as Ansible and Docker (https://www.ansible.com/ and https://www.docker.com/). However, there has been a lack of students with C++ knowledge. This is the language of choice for our custom framework, which is presented in previous team description papers [tdp-18]. Additionally, robotics itself is very complex, so working with real robots without prior knowledge is a challenge in itself.
We decided to tackle this issue at several levels. Firstly, we plan to move to a framework that allows modular development in both Python and C++; see section LABEL:sec:middleware for more details.
Secondly, students need to be introduced to the complexity of robotic tasks gradually. Several well-known contributors to the RoboCup community took this route themselves, first participating in, e.g., a simulation league and only later entering a hardware league. We want to emulate this locally by offering students a very simple and accessible introduction to RoboCup: a simple, standalone 2D simulator called PythoCup. It is written in Python and was first developed by our team for the Humanoid soccer school in 2013. It has since been adapted to use PyGame and is published on GitLab (https://gitlab.com/boldhearts/pythocup).
Fig. 1: Two scenes of a PythoCup game. The left screenshot shows the moment before the game starts; the right screenshot shows the blue player attacking the goal of the red player.
We expect several benefits for new students/participants and existing team members. Setting up PythoCup is simple and manipulating the behaviour of the robots can be achieved in a few steps, yet it offers the possibility of building quite sophisticated agents. We hope that this will ease the process of joining the team for new members. Additionally, we expect that important robotics problems will arise naturally, so new members get a glimpse of RoboCup-related problems quickly. Another benefit is that those who have learnt these skills can then help introduce new members, creating a pyramid of experience.
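To illustrate how little code a first behaviour of this kind needs, the sketch below shows a toy 2D striker in plain Python. The function and API names here are invented for illustration and do not match the actual PythoCup API:

```python
import math

# Toy striker behaviour: walk towards the ball, and kick towards the
# opponent goal once the ball is in range. Positions are (x, y) tuples
# on a 2D pitch. (Names are illustrative, not the PythoCup API.)

def step_towards(pos, target, speed=1.0):
    """Move `pos` one step of length `speed` towards `target`."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def simple_striker(robot, ball, goal, kick_range=1.5):
    """Return the next action: walk to the ball, or kick at the goal."""
    if math.dist(robot, ball) > kick_range:
        return ("walk", step_towards(robot, ball))
    return ("kick", goal)

action, target = simple_striker(robot=(0.0, 0.0), ball=(5.0, 0.0), goal=(10.0, 0.0))
print(action, target)  # far from the ball, so the robot walks one step
```

Even a behaviour this small already raises genuine robotics questions (what if the ball moves? what if an opponent is in the way?), which is exactly the kind of discussion we want new members to have early on.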
After mastering PythoCup, the next step will be for students to set up our code for the humanoid robots. They will be given an isolated, modular problem to solve. Testing will be done in a simulator (e.g. Gazebo), where even small changes yield visible differences in output.
3 Robotic Hardware and Design
As described in last year's team description paper, we have started incrementally developing a new robot platform based on the Darwin-OP [tdp-18].
The main processing unit has been replaced with an Odroid-XU4. Shins, legs, foot plates, head bracket and arms have been redesigned and 3D printed. Figure 2 shows one of our robots with its newly designed parts. At the RoboCup 2018 competition, we participated with four robots in that configuration. The robots were equipped with four different webcam models: Logitech C910, C920, C920c and C930e. For this year's configuration, we decided to use the Logitech C920 Pro HD webcam for all robots.
For the self-printed limbs, we mainly used PLA and ABS on a range of different printers. PLA proved the sturdiest and best suited to our needs. One of our biggest challenges during the RoboCup 2018 competition was mounting the printed parts to the servos: the plastic parts had to resist a lot of stress concentrated on the small contact area of the screws. When the plastic parts broke, they usually did so around the horn mounting area.
To help address this, we use the outer horn disc as additional support for the motors, as seen in Fig. 3. This gives greater support in 180 degrees (towards the model) where the parts are typically stressed. Despite being thinner, the parts are stronger as the force is spread more evenly across the model.
In the near future, we plan to investigate the use of metal inserts to further increase the strength of printed parts, in three forms: a small washer per screw, a larger washer inserted into the model, or an embedded nut per screw.
For all our designs we utilise OpenSCAD, a tool which allows for parametric designs, enabling us to adapt the length of a limb without a full redesign. Like the Darwin, all BoldBot servos are Dynamixel MX-28. It turned out that, with the increased robot size, these servos are too weak for the robot to stand up or locomote. We therefore also parametrised the Dynamixel servo models MX-28, MX-64 and MX-106 in OpenSCAD (open-sourced at https://gitlab.com/boldhearts/dynamixel-scad). This makes it easier to redesign limbs for whichever servo model is used.
4 Vision
In previous years, our object recognition methods were based on a lookup-table (LUT) approach. The LUT was created from thresholds in HSV colour space, which were manually tweaked for each separate competition and/or field. Besides making setup time consuming, the method is no longer well suited to the modern non-colour-coded RoboCup scenario.
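For illustration, the LUT approach can be sketched as follows. The threshold values and class labels below are invented for the example, not our actual calibration:

```python
import numpy as np

# Sketch of a colour-LUT classifier: every pixel whose HSV value falls
# inside a manually tuned box gets that box's label; everything else is
# label 0 (unknown). Threshold boxes here are illustrative only.

# (h_min, h_max, s_min, s_max, v_min, v_max) per label
CLASSES = {
    1: (20, 40, 100, 255, 100, 255),   # label 1: e.g. a yellow-ish ball
    2: (45, 75, 80, 255, 60, 255),     # label 2: e.g. green field
}

def build_lut(classes):
    """Precompute a 256x256x256 label table indexed by (H, S, V)."""
    lut = np.zeros((256, 256, 256), dtype=np.uint8)
    for label, (h0, h1, s0, s1, v0, v1) in classes.items():
        lut[h0:h1 + 1, s0:s1 + 1, v0:v1 + 1] = label
    return lut

def classify(hsv_image, lut):
    """Per-pixel labelling: one table lookup per pixel."""
    h, s, v = hsv_image[..., 0], hsv_image[..., 1], hsv_image[..., 2]
    return lut[h, s, v]
```

Classification itself is a single indexing operation per frame; the cost of the method lies entirely in hand-tuning the boxes for every venue, which is what made it impractical.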
The hardware upgrade that our robots received allows the application of more advanced computer vision methods; however, it is not yet feasible to run some of the latest large-scale deep learning models. We managed to scale such models down so they run fast enough on our mobile hardware with sufficient accuracy, the full details of which were presented at the RoboCup 2018 symposium [DijkScheunemann-18]. Here we summarise this work.
Rather than a direct object detection approach, such as the popular YOLO and R-CNN families of CNNs, which are too complex to run on our hardware or require a highly optimised, domain-specific candidate selection process, we use the more general method of semantic segmentation. Besides being able to process a full-resolution frame faster without requiring specific domain knowledge, this method has the additional benefit that a single network can handle multiple image resolutions without retraining. Finally, the output is a per-pixel labelling of the image, equivalent to the output of traditional LUT-based methods, so it fits seamlessly into our existing pipeline.
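The equivalence with the LUT output comes down to how the network's per-pixel class scores are reduced to labels. A sketch, using random numbers as stand-ins for real decoder output:

```python
import numpy as np

# A segmentation network emits per-pixel class scores (H x W x C);
# an argmax over the class axis yields the same kind of per-pixel
# label image that a LUT produces, so downstream code is unchanged.

rng = np.random.default_rng(0)
H, W, C = 4, 6, 3                # C classes, e.g. background/ball/field
scores = rng.random((H, W, C))   # stand-in for the decoder's output

labels = scores.argmax(axis=-1)  # per-pixel labelling, LUT-equivalent
print(labels.shape)              # (4, 6) -- one label per pixel
```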
The neural networks that we use have an encoder-decoder structure similar to other, large-scale, segmentation networks in the literature, such as U-Net [ronneberger2015u] and SegNet [badrinarayanan2017segnet]. In such networks, a first series of convolution layers encodes the input into successively lower-resolution but higher-dimensional feature maps, after which a second series of layers decodes these maps into a full-resolution pixelwise classification. This architecture is shown in Fig. LABEL:fig:cnn.
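The resolution/channel pattern of such an encoder-decoder can be illustrated with a toy numpy sketch, where average pooling and channel duplication stand in for learned convolutions (this is not our actual network):

```python
import numpy as np

# Each "encoder" stage halves the spatial resolution and doubles the
# channel count; each "decoder" stage does the reverse, ending back at
# full resolution with per-pixel feature/class maps.

def encode(x):
    """Halve H and W via 2x2 average pooling, double channels by duplication."""
    h, w, c = x.shape
    pooled = x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
    return np.concatenate([pooled, pooled], axis=-1)  # stand-in for learned features

def decode(x):
    """Double H and W via nearest-neighbour upsampling, halve channels."""
    up = x.repeat(2, axis=0).repeat(2, axis=1)
    return up[..., : x.shape[-1] // 2]

x = np.zeros((32, 32, 3))   # input image
z = encode(encode(x))       # bottleneck: 8 x 8 x 12 feature map
y = decode(decode(z))       # back to 32 x 32 x 3, one vector per pixel
print(x.shape, z.shape, y.shape)
```

In the real networks the encode/decode stages are learned convolutions (with pooling or strides), but the shape bookkeeping is exactly this pattern.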