Hibikino-Musashi@Home 2020 Team Description Paper

May 29, 2020, by Tomohiro Ono, et al.

Our team, Hibikino-Musashi@Home (HMA), was founded in 2010. It is based in Japan in the Kitakyushu Science and Research Park. Since 2010, we have annually participated in the RoboCup@Home Japan Open competition in the open platform league (OPL). We participated as an open platform league team in the 2017 Nagoya RoboCup competition and as a domestic standard platform league (DSPL) team in the 2017 Nagoya, 2018 Montreal, and 2019 Sydney RoboCup competitions. We also participated in the World Robot Challenge (WRC) 2018 in the service-robotics category of the partner-robot challenge (real space) and won first place. Currently, we have 20 members from eight different laboratories within the Kyushu Institute of Technology. In this paper, we introduce the activities that have been performed by our team and the technologies that we use.


1 Introduction

Our team, Hibikino-Musashi@Home (HMA), was founded in 2010, and we have been competing annually in the RoboCup@Home Japan Open competition in the open platform league (OPL). Our team is developing a home-service robot, and we intend to demonstrate our robot in this event in 2020 to present the outcome of our latest research.

In RoboCup 2017 Nagoya, we participated in both the OPL and the domestic standard platform league (DSPL), and in RoboCup 2018 Montreal and RoboCup 2019 Sydney we participated in the DSPL. Additionally, in the World Robot Challenge (WRC) 2018, we participated in the service-robotics category of the partner-robot challenge (real space).

In the RoboCup 2017, 2018, and 2019 competitions and in the WRC 2018, we used a Toyota human support robot (HSR) [toyota_hsr]. We were awarded first prize at the WRC 2018 and third prize at RoboCup 2019.

In this paper, we describe the technologies used in our robot. In particular, we outline our object recognition system, which uses deep learning [hinton2006fast], our work on improving the speed of the HSR, and a brain-inspired artificial intelligence model that we originally proposed and have installed in our HSR.

2 System overview

Figure 1 presents an overview of our HSR system. We have used an HSR since 2016. In this section, we will introduce the specifications of our HSR.

2.1 Hardware overview

We participated in RoboCup 2018 Montreal and RoboCup 2019 Sydney with this HSR. The computational resources built into the HSR are inadequate to support our intelligent systems and cannot extract the maximum performance from them. To overcome this limitation, teams have been permitted since RoboCup 2018 Montreal to use an Official Standard Laptop for DSPL that can fulfill the computational requirements of our intelligent systems. We use an ALIENWARE laptop (Intel Core i7-8700K CPU, 32 GB RAM, and GTX-1080 GPU) as the Official Standard Laptop for DSPL. Consequently, the computer built into the HSR can be dedicated to basic HSR software, such as sensor drivers, motion planning, and actuator drivers. This has increased the operational stability of the HSR.

2.2 Software overview

Figure 1: Block diagram overview of our HSR system. [HSR, human-support robot; ROS, robot operating system]

In this section, we introduce the software installed in our HSR. Figure 1 shows the system installed in our HSR. The system is based on the Robot Operating System (ROS) [ros]. In our HSR system, a laptop computer and, if a network connection is available, a cloud service are used for system processing. The laptop is connected to the HSR's built-in computer through the Hsrb interface. The built-in computer specializes in low-layer systems, such as HSR sensor drivers, motion planning, and actuator drivers, as shown in Fig. 1 (c) and (d). Furthermore, the built-in computer runs a sound localization system that uses HARK [hark], as shown in Fig. 1 (e).
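As a minimal illustration of how a laptop-side module connects to the HSR over ROS, the following sketch subscribes to a camera image topic and republishes a recognition result for downstream task nodes. The topic names, message types, and placeholder inference step are assumptions for illustration and are not taken from our actual configuration.

#!/usr/bin/env python
# Minimal sketch of a laptop-side ROS node in our architecture (hypothetical
# topic names): it subscribes to an HSR camera image and publishes recognition
# results for downstream task nodes.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String


class RecognitionNode(object):
    def __init__(self):
        # Assumed HSR camera topic; the actual topic depends on the robot setup.
        self.sub = rospy.Subscriber(
            '/hsrb/head_rgbd_sensor/rgb/image_rect_color',
            Image, self.image_callback, queue_size=1)
        self.pub = rospy.Publisher('/hma/recognition_result', String, queue_size=1)

    def image_callback(self, msg):
        # In the real system, this is where YOLO inference runs on the laptop GPU.
        result = 'detected: (placeholder)'
        self.pub.publish(String(data=result))


if __name__ == '__main__':
    rospy.init_node('hma_recognition_node')
    RecognitionNode()
    rospy.spin()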

3 Object recognition

In this section, we explain the object recognition system (shown in Fig. 1 (a)), which is based on you only look once (YOLO) [redmon2016you].

To train YOLO, a laborious annotation phase is required in which object labels and bounding boxes are annotated. In the RoboCup@Home competition, the predefined objects are typically announced during the setup days, right before the competition days start. Thus, the time available to train YOLO during the competition is limited, and the annotation phase impedes the use of a trained YOLO during the competition days.

We utilize an autonomous annotation system for YOLO using a three-dimensional (3D) scanner. Figure 2 shows an overview of the proposed system.

Figure 2: Overview of proposed autonomous annotation system for YOLO.

In this system, we use QLONE [qlone], a smartphone application capable of 3D scanning. QLONE makes it easy to create a 3D model by placing an object on a dedicated marker and shooting it. We place the marker and object on a turntable to create the 3D model. With this method, the bottom surface of the object cannot be captured; therefore, two 3D models are created for each object, one upright and one flipped upside down.

Figure 3 shows the processing flow to generate training images for YOLO.

Figure 3: Processing flow for generating training images for YOLO.

Multi-viewpoint images are automatically generated from the two created 3D models (Fig. 3 (a)). Then, we remove the image backgrounds (Fig. 3 (b)).

For the backgrounds of the training images, we shoot images of, for example, a table, a shelf, and other furniture. To adapt to various lighting conditions, we apply the automatic color equalization algorithm [RIZZI20031663] to the background images (Fig. 3 (c)). To incorporate the object images into the background images, we define 20-25 object locations on each background image (the number depends on the background image). Then, by placing the object images on the defined locations autonomously, the training images for YOLO are generated (Fig. 3 (d)). For example, with 15 object classes and 306 background images, 400,000 training images are generated. Additionally, the annotation data for the training images are generated autonomously because the object labels and positions are known.
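The composition step can be sketched as follows. This is a simplified illustration, assuming the object cutouts are RGBA images with transparent backgrounds and that object locations on each background are given as pixel coordinates; the file names, directory layout, and number of objects per image are hypothetical.

# Sketch of generating YOLO training images by pasting object cutouts onto
# background images at predefined locations (paths and location list are
# illustrative assumptions).
import random
from pathlib import Path
from PIL import Image

BACKGROUNDS = sorted(Path('backgrounds').glob('*.jpg'))
OBJECTS = {0: 'objects/cup.png', 1: 'objects/bottle.png'}  # class_id -> RGBA cutout
LOCATIONS = [(100, 200), (320, 180), (540, 260)]           # paste points (pixels)

def generate(out_dir='train', n_images=1000):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i in range(n_images):
        bg = Image.open(random.choice(BACKGROUNDS)).convert('RGB')
        labels = []
        for x, y in random.sample(LOCATIONS, k=2):
            cls = random.choice(list(OBJECTS))
            obj = Image.open(OBJECTS[cls]).convert('RGBA')
            bg.paste(obj, (x, y), obj)  # alpha channel used as the paste mask
            # YOLO annotation: class x_center y_center width height (normalized)
            cx = (x + obj.width / 2.0) / bg.width
            cy = (y + obj.height / 2.0) / bg.height
            labels.append('%d %.6f %.6f %.6f %.6f' %
                          (cls, cx, cy,
                           obj.width / float(bg.width),
                           obj.height / float(bg.height)))
        bg.save(out / ('%06d.jpg' % i))
        (out / ('%06d.txt' % i)).write_text('\n'.join(labels))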

Image generation requires approximately 15 min (using six CPU cores in parallel), and training YOLO requires approximately 6 h on the GTX-1080 GPU of the standard laptop. Even though the generated training data are artificial, the trained YOLO works in actual environments: after training for 10,000 epochs, it achieves a mean average precision (mAP) of 60.72%.

4 High-speed behavioral synthesis

We are working to improve the speed of the HSR from two viewpoints: behavioral synthesis and software processing speed.

Regarding behavioral synthesis, we reduce wasted motion by combining and synthesizing several behaviors. For instance, by moving the joints of the arm during navigation, the robot can begin its next action, such as grasping, as soon as it reaches an interim target location, without wasting any time.
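Because we use SMACH for state management (see Appendix 3), this kind of behavior synthesis can be expressed with a concurrence container that runs navigation and arm pre-positioning at the same time. The sketch below is a minimal illustration; the state bodies are placeholders for the actual navigation and arm-motion calls.

# Sketch of overlapping navigation with arm pre-positioning using SMACH
# (state bodies are placeholders for the real navigation / motion calls).
import rospy
import smach


class Navigate(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['succeeded'])

    def execute(self, userdata):
        rospy.loginfo('navigating to the interim target location')
        return 'succeeded'


class PreshapeArm(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['succeeded'])

    def execute(self, userdata):
        rospy.loginfo('moving arm joints to the grasp-ready posture')
        return 'succeeded'


def build_concurrent_move():
    # Both child states run in parallel; the container succeeds when both do.
    cc = smach.Concurrence(
        outcomes=['succeeded', 'failed'],
        default_outcome='failed',
        outcome_map={'succeeded': {'NAVIGATE': 'succeeded',
                                   'PRESHAPE_ARM': 'succeeded'}})
    with cc:
        smach.Concurrence.add('NAVIGATE', Navigate())
        smach.Concurrence.add('PRESHAPE_ARM', PreshapeArm())
    return cc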

Regarding processing speed, we aim to operate all software at 30 Hz or higher. To reduce the waiting time for software processing, which causes the robot to stop, the essential functions of a home service robot, such as object recognition and object grasping-point estimation, need to be executed in real time. We optimized these functions for the Tidy Up Here task.

We used these two optimization approaches for that task in the WRC 2018 (Fig. 4). In the WRC 2018, where we won first place, we achieved a speedup of approximately 2.6 times over our conventional system. Our robot can tidy up an object within 34 s; thus, we expect that it can tidy up approximately 20 objects in 15 min.

Figure 4: Speed comparison between a conventional system and the proposed high-speed system.

5 Brain-inspired artificial intelligence model

In this section, we explain a brain-inspired artificial intelligence model that consists of a visual cortex model, an amygdala model, a hippocampus model, and a prefrontal cortex model [tanaka2019biai].

Home service robots are expected to have local knowledge based on their own experiences. Such local knowledge must be learned during the robot's daily life. Thus, applying deep learning [hinton2006fast] alone is not effective, because the robot cannot prepare a large dataset of local knowledge. To acquire local knowledge, we propose an artificial intelligence model inspired by the structure of the brain, because the brain can acquire local knowledge from only a few data. We mainly focus on the amygdala, the hippocampus, and the prefrontal cortex, and the proposed model integrates functions of these parts of the brain. In addition, we integrate a deep neural network as a visual cortex model into the proposed model.

Figure 5 shows the proposed model.

Figure 5: A brain-inspired artificial intelligence model that consists of a visual cortex model, an amygdala model, a hippocampus model, and a prefrontal cortex model.

The visual cortex model consists of YOLO and the Point Cloud Library (PCL) [rusu2011pcl]. It recognizes the environment and outputs the label and position of each detected object. The object label is input into the amygdala model, which we use for value judgments. The amygdala model consists of a lateral nucleus (LA) model and a central nucleus (CE) model [tanaka2019amygdala]. The LA and the CE judge the value of the object. Only if the value of the object is sufficiently high are the object label and position input into the hippocampus model, which we use for event coding. The hippocampus model consists of cue cells, time cells, and social place cells as an internal model of detected events. The cue cells and time cells represent what events happen and when they happen, respectively, and the social place cells represent where events happen. The cue cells receive the object label, and the social place cells receive the object position. The hippocampus model then integrates the outputs of these cells and computes an event vector, which is input into the prefrontal cortex model. We use the prefrontal cortex model for event prediction; it is an echo state network (ESN) [jaeger2001echostate]. The ESN is trained on the time series of event vectors to predict future events. After training, the ESN predicts time-series events without input from the environment.
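A minimal echo state network of the kind used in the prefrontal cortex model can be sketched as follows. The reservoir size, spectral radius, ridge-regression readout, and the toy sequence in the example are generic choices for illustration, not the parameters or data of our actual model.

# Minimal echo state network (ESN) sketch: a fixed random reservoir with a
# ridge-regression readout trained to predict the next event vector.
import numpy as np


class EchoStateNetwork(object):
    def __init__(self, n_in, n_res=300, spectral_radius=0.9, ridge=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale recurrent weights so the reservoir has the echo state property.
        self.W = W * (spectral_radius / max(abs(np.linalg.eigvals(W))))
        self.ridge = ridge
        self.W_out = None

    def _run(self, inputs):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in inputs:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x)
        return np.array(states)

    def fit(self, inputs, targets):
        # Train only the linear readout (ridge regression over reservoir states).
        S = self._run(inputs)
        A = S.T @ S + self.ridge * np.eye(S.shape[1])
        self.W_out = np.linalg.solve(A, S.T @ targets).T

    def predict(self, inputs):
        return self._run(inputs) @ self.W_out.T


if __name__ == '__main__':
    # Toy example: learn to predict the next event vector in a periodic sequence.
    t = np.arange(400)
    events = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)], axis=1)
    esn = EchoStateNetwork(n_in=2)
    esn.fit(events[:-1], events[1:])
    print(esn.predict(events[:-1])[-1], events[-1])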

We evaluated the proposed model in the following experiment. A person walked across in front of the robot, as shown in Fig. 6. The robot detected the person and learned the person's trajectory, which is a type of local knowledge. Subsequently, the robot predicted the trajectory and added the predicted trajectory to the SLAM map as an imaginary potential. Figure 7 shows the imaginary potential of the predicted trajectory. By using this imaginary potential, the robot was able to avoid the person walking across in front of it. Therefore, the proposed model acquired local knowledge.

Figure 6: Trajectory of a person.
Figure 7: Imaginary potential of the predicted trajectory.
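Writing the predicted trajectory into the map as an imaginary potential can be sketched as follows. Here the potential is modeled as Gaussian bumps added to an occupancy-style cost grid, which is a simplified stand-in for our actual map integration; the grid resolution, potential shape, and example path are illustrative assumptions.

# Sketch of writing a predicted person trajectory into a cost grid as an
# "imaginary potential" (Gaussian bumps around each predicted point).
import numpy as np


def add_imaginary_potential(cost_grid, trajectory, resolution=0.05,
                            sigma=0.3, peak=80.0):
    """cost_grid: 2D array of costs; trajectory: list of (x, y) in meters."""
    h, w = cost_grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for px, py in trajectory:
        cx, cy = px / resolution, py / resolution  # meters -> grid cells
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        bump = peak * np.exp(-d2 * (resolution ** 2) / (2.0 * sigma ** 2))
        np.maximum(cost_grid, bump, out=cost_grid)  # keep the higher cost
    return cost_grid


# Example: a 10 m x 10 m grid with a straight predicted walking path.
grid = np.zeros((200, 200))
predicted_path = [(2.0 + 0.2 * i, 5.0) for i in range(30)]
grid = add_imaginary_potential(grid, predicted_path)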

6 Conclusions

In this paper, we summarized the available information about our HSR, which we entered in RoboCup 2019 Sydney. We also described the object recognition system and the speed improvements that we built into the robot. Currently, we are developing many different pieces of software for the HSR that we will enter in RoboCup 2020 Bordeaux.

Acknowledgment

This work was supported by the Ministry of Education, Culture, Sports, Science and Technology Joint Graduate School Intelligent Car & Robotics course (2012-2017); the Kitakyushu Foundation for the Advancement of Industry, Science and Technology (2013-2015); the Kyushu Institute of Technology 100th anniversary commemoration project: student project (2015, 2018-2019); the YASKAWA Electric Corporation project (2016-2017); JSPS KAKENHI grant numbers 17H01798 and 19J11524; and the New Energy and Industrial Technology Development Organization (NEDO).

References

Appendix 1: Competition results

Country    Competition                                  Result
Japan      RoboCup 2017 Nagoya                          @Home DSPL 1st; @Home OPL 5th
Japan      RoboCup Japan Open 2018 Ogaki                @Home DSPL 2nd; @Home OPL 1st; JSAI Award
Canada     RoboCup 2018 Montreal                        @Home DSPL 1st; P&G Dishwasher Challenge Award
Japan      World Robot Challenge 2018, Service Robotics Category, Partner Robot Challenge Real Space    1st; METI Minister's Award; RSJ Special Award
Australia  RoboCup 2019 Sydney                          @Home DSPL 3rd
Japan      RoboCup Japan Open 2019 Nagaoka              @Home DSPL 1st; @Home OPL 1st

Table 1: Results of recent competitions. [DSPL, domestic standard-platform league; JSAI, Japanese Society for Artificial Intelligence; METI, Ministry of Economy, Trade and Industry (Japan); OPL, open-platform league; RSJ, Robotics Society of Japan]

Table 1 shows the results achieved by our team in recent competitions. We have participated in the RoboCup and World Robot Challenge for several years, and as a result, our team has won prizes and academic awards.

Notably, we participated in RoboCup 2019 Sydney using the system described herein and were able to demonstrate the performance of the HSR and our technologies. As a result, we were awarded third prize in that competition.

Appendix 2: Link to Team Video, Team Website

Appendix 3: Robot’s Software Description

For our robot, we use the following software:

  • OS: Ubuntu 16.04.

  • Middleware: ROS Kinetic.

  • State management: SMACH (ROS).

  • Speech recognition (English):

    • rospeex [rospeex].

    • Web Speech API.

    • Kaldi.

  • Morphological analysis / dependency structure analysis (English): SyntaxNet.

  • Speech synthesis (English): Web Speech API.

  • Speech recognition (Japanese): Julius.

  • Morphological Analysis (Japanese): MeCab.

  • Dependency structure analysis (Japanese): CaboCha.

  • Speech synthesis (Japanese): Open JTalk.

  • Sound localization: HARK.

  • Object detection: point cloud library (PCL) and you only look once (YOLO) [redmon2016you].

  • Object recognition: YOLO.

  • Human detection / tracking:

    • Depth image + particle filter.

    • OpenPose [cao2017realtime].

  • SLAM: hector_slam (ROS).

  • Path planning: move_base (ROS).