Bold Hearts Team Description for RoboCup 2019 (Humanoid Kid Size League)

04/22/2019 · Marcus M. Scheunemann et al.

We participated in the RoboCup 2018 competition in Montreal with our newly developed BoldBot, based on the Darwin-OP with mostly self-printed custom parts. This paper covers the lessons learnt from that competition and further developments for the RoboCup 2019 competition. Firstly, we briefly introduce the team along with an overview of past achievements. We then present a simple, standalone 2D simulator we use to simplify the entry for new members by making basic RoboCup concepts quickly accessible. We describe the semantic-segmentation approach used by our vision system in the 2018 competition, which replaced our earlier lookup-table (LUT) implementation. We also discuss the extra structural support we plan to add to the printed parts of the BoldBot, and our transition to ROS 2 as our new middleware. Lastly, we present a collection of our team's open-source contributions.


1 Bold Hearts

The team Bold Hearts was founded as part of the Adaptive Systems Research Group at the University of Hertfordshire. The team started participating in RoboCup in 2003 in the simulation leagues and transitioned to the Humanoid KidSize League in 2013. In 2019 we hope to participate in that league for the seventh year in a row.

The following are the main achievements of team Bold Hearts in the Humanoid League over the last few years.

  • Quarter-finalist RoboCup World Championship 2017 (1st in group)

  • 2nd round RoboCup World Championship 2016 (1st in group)

  • 1st Iran Open 2016

  • 2nd round RoboCup World Championship 2015 (1st in group)

  • 3rd German Open 2015

  • 2nd RoboCup World Championship 2014

2 Introducing New Members Gradually

As with most RoboCup teams, recruiting new members is a crucial task: the team's overall success depends on continuously attracting new members and transferring knowledge to the next generation.

We have always kept a well-maintained wiki for new members to read up on the existing infrastructure. Additionally, we have made it easier to set up the code by using tools such as Ansible and Docker (https://www.ansible.com/, https://www.docker.com/). However, there has been a lack of students with C++ knowledge, the language of choice for our custom framework presented in previous team description papers [tdp-18]. Moreover, robotics itself is very complex, so working with real robots without prior knowledge is a challenge in itself.

We decided to tackle this issue on several levels. Firstly, we plan to move to a framework which allows modular development in both Python and C++; see the section on our new middleware for more details.

Students then need to understand the level of complexity of robotic tasks. Some well-known contributors to the RoboCup community approached this gradually themselves, by first participating in, e.g., a simulation league and only later entering a hardware league. We want to emulate this locally by offering students an accessible introduction to RoboCup through a simple, standalone 2D simulator called PythoCup. It is written in Python and was first developed by our team for the Humanoid soccer school 2013. It has since been ported to PyGame and is published on GitLab (https://gitlab.com/boldhearts/pythocup).

Figure 1: Two scenes of a PythoCup game. The left screenshot shows the moment before the game starts; the right screenshot shows the blue player attacking the goal of the red player.

We expect several benefits for new students and existing team members alike. Setting up PythoCup is simple and the behaviour of the robots can be manipulated in a few steps, yet it allows for quite sophisticated agents. We hope this will ease the process of joining the team. Additionally, we expect that important robotics problems will arise naturally, giving new members an early glimpse of RoboCup-related challenges. Another benefit is that those who have learnt these skills can then help introduce new members, creating a pyramid of experience.

After mastering PythoCup, the next step is for students to set up our code for the humanoid robots. They will be given an isolated, modular problem to solve. Testing will be done in a simulator (e.g. Gazebo), where even small changes yield visibly different output. A sketch of what a first PythoCup-style experiment might look like is shown below.
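To give an impression of the low barrier to entry, the following is a minimal sketch of a PythoCup-style agent loop in PyGame. All names, dimensions and the trivial chase-the-ball behaviour here are hypothetical illustrations; the actual PythoCup code base is structured differently.

```python
import pygame

# Minimal PythoCup-style loop: one player chasing a ball.
# Field size, colours and the behaviour are placeholders.
pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

player = pygame.math.Vector2(100, 240)
ball = pygame.math.Vector2(400, 200)
SPEED = 2.0  # pixels per frame

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Trivial "behaviour": walk straight towards the ball.
    to_ball = ball - player
    if to_ball.length() > SPEED:
        player += to_ball.normalize() * SPEED

    screen.fill((0, 120, 0))  # green pitch
    pygame.draw.circle(screen, (255, 140, 0), (int(ball.x), int(ball.y)), 8)
    pygame.draw.circle(screen, (0, 0, 255), (int(player.x), int(player.y)), 12)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```

Changing the behaviour only means editing a handful of lines in the update step, which is exactly the kind of low-friction experimentation we want new members to start with.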

3 Robotic Hardware and Design

Figure 2: The BoldBot robot in its second version. It has been developed incrementally, with the Darwin-OP as its base. The torso has been scaled up to fit an Odroid-XU4, the new main processing unit. The shin, thigh, arm, head bracket and foot plate have been redesigned, scaled up and 3D printed.

As described in last year's team description paper, we started incrementally developing a new robot platform based on the Darwin-OP [tdp-18].

The main processing unit has been replaced with an Odroid-XU4. Shins, thighs, foot plates, head brackets and arms have been redesigned and 3D printed. Figure 2 shows one of our robots with its newly designed parts. At the RoboCup 2018 competition, we participated with four robots of that configuration. The robots were equipped with four different webcam models: Logitech C910, C920, C920c and C930e. For this year's configuration, we decided on using the Logitech C920 Pro HD webcam for all robots.

For the self-printed limbs, we mainly used PLA and ABS on a range of different printers. PLA seems to be the sturdiest and best suited to our needs. One of our biggest challenges during the RoboCup 2018 competition was mounting the printed parts to the servos: the plastic parts had to resist a lot of stress on the small contact area of the screws. When parts broke, they usually did so around the horn mounting area.

To help address this, we now use the outer horn disc as additional support for the motors, as seen in Fig. 3. This provides support across 180 degrees (towards the model), where the parts are typically stressed. Despite being thinner, the parts are stronger because the force is spread more evenly across the model.

In the near future, we plan to investigate metal inserts to further increase the strength of the printed parts, in the form of a small washer per screw, a larger washer inserted into the model, or an embedded nut per screw.

Figure 3: The design of the thigh of the BoldBot model used in RoboCup 2018 (left). We redesigned this part with additional support structure (middle); we will investigate whether the support for the horn and bearing helps to reduce the stress on the screws. The picture on the right shows our designs of the Dynamixel servos, which can be used in OpenSCAD when designing limbs.

For all our designs we utilise OpenSCAD, a tool which allows for parametric designs, enabling us to adapt the length of a limb without a redesign. Like the Darwin, all BoldBot servos are Dynamixel MX-28s. It turned out that, with the increased robot size, these servos are too weak for the robot to stand up or locomote. We therefore also parametrised the Dynamixel servo models MX-28, MX-64 and MX-106 in OpenSCAD (open-sourced at https://gitlab.com/boldhearts/dynamixel-scad). This enables us to more easily redesign limbs with reference to the servo model used.

4 Vision

In previous years, our object recognition methods were based on a lookup-table (LUT) approach. The LUT was created from thresholds in HSV colour space that were manually tweaked for each separate competition and/or field. Besides being time consuming to set up, the method is no longer well suited to the modern, non-colour-coded RoboCup scenario.
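For illustration, the core of such a colour-thresholding approach can be expressed in a few lines of OpenCV. The threshold values below are placeholders, not the values we used; they would have to be re-tweaked per field, which is exactly the drawback described above.

```python
import cv2
import numpy as np

# Illustrative HSV-threshold labelling in the spirit of a LUT
# approach. Assumes an example camera frame "frame.png" exists.
frame = cv2.imread("frame.png")               # BGR camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# One binary mask per colour class, e.g. a (formerly orange) ball
# and the green field; bounds are placeholder values.
ball_mask = cv2.inRange(hsv, (5, 150, 150), (15, 255, 255))
field_mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))

# Combine into a single per-pixel label image:
# 0 = unknown, 1 = field, 2 = ball.
labels = np.zeros(hsv.shape[:2], dtype=np.uint8)
labels[field_mask > 0] = 1
labels[ball_mask > 0] = 2
```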

The hardware upgrade that our robots received allows the application of more advanced computer vision methods; however, it is not yet feasible to run some of the latest large-scale deep learning models. We managed to scale such models down to run fast enough on our mobile hardware with sufficient accuracy; the full details were presented at the RoboCup 2018 symposium [DijkScheunemann-18]. Here we summarise this work.

Rather than using a direct object recognition approach, such as the popular YOLO and R-CNN families of CNNs, which are too complex to run on our hardware or require a highly optimised, domain-specific candidate selection process, we use the more general method of semantic segmentation. Besides being able to process a full-resolution frame faster, without requiring specific domain knowledge, this method has the additional benefit that a single network can handle multiple image resolutions without retraining. Finally, the output is a per-pixel labelling of the image, equivalent to the output of traditional LUT-based methods, so it fits seamlessly into our existing pipeline.
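Concretely, turning the network output into such a label image is a single argmax over the class dimension. The shapes and class names below are illustrative, not those of our actual pipeline:

```python
import numpy as np

# Hypothetical network output: per-pixel class scores of shape
# (height, width, n_classes), e.g. background / ball / field line.
scores = np.random.rand(120, 160, 3).astype(np.float32)

# Per-pixel label image, directly comparable to the output of the
# old LUT-based method, so the rest of the pipeline is unchanged.
labels = scores.argmax(axis=-1).astype(np.uint8)
```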

The neural networks that we use have an encoder-decoder structure similar to other, large-scale, segmentation networks in the literature, such as U-Net [ronneberger2015u] and SegNet [badrinarayanan2017segnet]. In such networks, a first series of convolution layers encodes the input into successively lower-resolution but higher-dimensional feature maps, after which a second series of layers decodes these maps into a full-resolution pixelwise classification. This architecture is shown in Fig. 4.

Figure 4: Schematic of the encoder-decoder architecture: encoder stages E1–E4 map the input to successively lower-resolution feature maps, decoder stages D1–D4 map these back to full resolution, and O is the final per-pixel classification output.
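As a rough illustration of this layout, the following PyTorch sketch mirrors the four encoder and four decoder stages of Fig. 4. Layer sizes, channel counts and the number of classes are placeholders; the actual network we run is the one described in [DijkScheunemann-18].

```python
import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """Toy encoder-decoder segmentation net; layer sizes are
    placeholders, not the network from [DijkScheunemann-18]."""

    def __init__(self, n_classes=3):
        super().__init__()
        chans = [3, 16, 32, 64, 128]
        # Encoder E1-E4: halve resolution, increase feature depth.
        self.encoder = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                nn.ReLU(inplace=True))
            for i in range(4)
        ])
        # Decoder D1-D4: upsample back to the input resolution.
        self.decoder = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(chans[i + 1], chans[i], 3, stride=2,
                                   padding=1, output_padding=1),
                nn.ReLU(inplace=True))
            for i in reversed(range(4))
        ])
        # O: 1x1 convolution producing per-pixel class scores.
        self.head = nn.Conv2d(chans[0], n_classes, 1)

    def forward(self, x):
        for enc in self.encoder:
            x = enc(x)
        for dec in self.decoder:
            x = dec(x)
        return self.head(x)

net = MiniSegNet()
scores = net(torch.rand(1, 3, 128, 160))  # any size divisible by 16
labels = scores.argmax(dim=1)             # per-pixel labelling
```

Because the network is fully convolutional, the same weights can be applied to frames of different resolutions, which is the property that lets a single trained network handle multiple image sizes without retraining.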