An implementation of an imitation game with ASD children to learn nursery rhymes

04/10/2020, by Sao Mai Nguyen et al.

Previous studies have suggested that being imitated by an adult is an effective intervention for children with autism and developmental delay. The purpose of this study is to investigate whether an imitation game with a robot can arouse interest in children and constitute an effective tool for use in clinical activities. In this paper, we describe the design of our nursery rhyme imitation game, its implementation based on RGB image pose recognition, and the preliminary tests we performed.


1 Introduction

Several studies have shown that imitating ASD children can be effective in enhancing their social behaviour [14]. These studies were carried out with caregivers imitating the children's movements [16, 11]. Building on these results, numerous projects have used information and communication technologies, including robots, for ASD care [17, 9]. For instance, the project Gaming Open Library for Intervention in Autism at Home (GOLIAH) developed a series of 11 serious games on computers and tablets using imitation and joint attention [10]. Other projects have focused on facial expression with the robot Keepon [12] or with the expressive robot FACE [15]. Kaspar [6], for instance, can be used to imitate drum rhythms.

Figure 1: Diagram of the flow of the processes of the game.

Figure 2: Structure of the ROS nodes.

While most imitation studies have focused on facial expressions or sounds, few have focused on the imitation of movements. However, impairments in motor imitation abilities have commonly been described in children with ASD. How motricity in interpersonal coordination impacts imitation has been little investigated. Indeed, [18] has shown that both interpersonal synchronization and motor coordination subtly contribute to the imitation impairment in ASD children. A human partner was shown to have an impact on the learning of postures by a humanoid robot [8], and when interacting with participants, the robot was able to learn a social signature at the level of individual recognition [3]. These results are interesting, but they are limited to static postures and do not yet extend to the dynamic movements required for full motor control. The development of motor control requires forming an internal model of action, relying on the coupling between action (motor commands) and perception (sensory feedback). Such an internal model of action, which predicts the sensory consequences of motor commands, is critical to the development of social, communicative, and motor coordination behaviours [13]. This is why, in this study, we propose to examine how ASD children can learn to improve their motor control through a serious game. We propose to use a robot platform, as robots have been shown to arouse their interest.

To allow them to learn motor control and gestures, we examined the programme of clinical activities and identified that nursery rhymes were sung and mimicked with the therapists. We therefore propose to implement an imitation game based on a nursery rhyme on a robot. We describe our implementation and the test results.

2 General architecture of our system

We implemented software for the humanoid robot Pepper to play an imitation game with a nursery rhyme. The robot sings a line of the rhyme while making a gesture and displaying an image on its tablet, then checks whether the child imitates the gesture, encouraging the child and keeping the image of the nursery rhyme displayed. This workflow is illustrated by fig. 1.
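The control flow of this loop can be sketched as follows; all class and method names (robot.sing, recognizer.wait_for_pose, and so on) are hypothetical placeholders used for illustration, not our actual API.

```python
# Sketch of the game loop of fig. 1 (hypothetical API, for illustration only).

RHYME = [
    # (lyrics of the line, expected pose label, image shown on the tablet)
    ("first line of the rhyme", "pose_1", "img_1.png"),
    ("second line of the rhyme", "pose_2", "img_2.png"),
]

def play_game(robot, recognizer, timeout_s=10.0):
    for lyrics, pose, image in RHYME:
        robot.show_image(image)        # display the illustration on the tablet
        robot.sing(lyrics)             # sing / play the audio of the line
        robot.make_gesture(pose)       # perform the associated gesture
        # Wait for the child to imitate the gesture, encouraging him meanwhile.
        if recognizer.wait_for_pose(pose, timeout=timeout_s):
            robot.say("Well done!")    # positive feedback on success
        else:
            robot.make_gesture(pose)   # repeat the gesture as a reminder
```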

The system is composed of: (i) a pose recognition node; (ii) a display node using Pepper's tablet; (iii) a node to control the Pepper robot's movements using Naoqi [2] through the ROS module MoveIt [5]; (iv) an audio player node to play the music line by line; and (v) a vocalisation node for text-to-speech. These nodes communicate through ROS as shown in fig. 2. The role of the master node is to synchronise all nodes and to manage the behaviour of the robot.
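A minimal sketch of how such a master node could coordinate the other nodes over ROS topics is given below; the topic names and message types are illustrative assumptions, not our actual interfaces.

```python
#!/usr/bin/env python
# Minimal master-node sketch (rospy); topic names are illustrative assumptions.
import rospy
from std_msgs.msg import String

def on_pose(msg):
    # Callback fired when the pose recognition node publishes a detected pose.
    rospy.loginfo("recognised pose: %s", msg.data)

rospy.init_node("master")
rospy.Subscriber("/pose_recognition/pose", String, on_pose)
gesture_pub = rospy.Publisher("/pepper/gesture", String, queue_size=1)
audio_pub = rospy.Publisher("/audio_player/line", String, queue_size=1)

rospy.sleep(1.0)                  # give the connections time to establish
gesture_pub.publish("pose_1")     # trigger the gesture of the first line
audio_pub.publish("line_1.wav")   # play the matching audio line
rospy.spin()                      # keep processing pose callbacks
```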

3 Pose recognition

Our pose recognition system (fig. 3) is composed of:

  • the internal RGB camera of Pepper, which provides the input video stream;

  • the OpenPose library [4] to estimate the pose of the human body using a deep learning algorithm. For our application, we used the TensorFlow CPU implementation of openpose instead of the original GPU implementation. The openpose library can output two skeleton models, with either 25 or 18 joints, called body-25 and coco, as illustrated in fig. 4. In our case, as the nursery rhyme did not involve the lower body, we opted for the coco model. To avoid false positives, we set the confidence threshold c to 0.5. At each frame, the openpose library returns the skeleton features, i.e. the positions of the 8 upper-body joints (numbered 1 to 8), whenever its confidence on these joints is above c. These skeleton features are normalised as

        \tilde{p}_i = p_i - o,   i = 1, \dots, 8    (1)

    where p_i = (x_i, y_i) is the position of joint i and o is the intersection of the lines through joints 1-2 and 3-4.

  • a classifier: we used a Gaussian Mixture Model (GMM) classifier with a dataset of 20 samples for each of the 8 classes shown in fig. 5. In the work presented here, the GMM is used to classify static poses, but GMMs can also be used to classify dynamic movements and to evaluate the error in a movement so as to give advice on how to improve it, as described in [7] in a project on a robot coach for physical rehabilitation. A sketch of this classification step is given after this list.
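Below is a minimal sketch of the normalisation of eq. (1) and of the per-class GMM classification; the use of scikit-learn, the single-component mixtures, and all function names are assumptions made for illustration.

```python
# Sketch of pose normalisation (eq. 1) and GMM classification; the sklearn
# backend and all parameter choices here are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def line_intersection(p1, p2, p3, p4):
    # Intersection o of the line through p1-p2 with the line through p3-p4.
    d1, d2 = p2 - p1, p4 - p3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1

def normalise(joints):
    # joints: (8, 2) array of the (x, y) upper-body joints, numbered 1..8,
    # so joints 1, 2, 3, 4 sit at array indices 0, 1, 2, 3.
    o = line_intersection(joints[0], joints[1], joints[2], joints[3])
    return (joints - o).flatten()        # eq. (1): p~_i = p_i - o, 16-dim

class PoseGMMClassifier:
    """One Gaussian mixture per pose class; predict by highest likelihood."""
    def __init__(self, n_components=1):
        self.n_components = n_components
        self.models = {}

    def fit(self, samples_by_class):
        # samples_by_class: dict mapping label -> (n_samples, 16) features
        for label, X in samples_by_class.items():
            self.models[label] = GaussianMixture(self.n_components).fit(X)
        return self

    def predict(self, x):
        # Return the class whose mixture gives x the highest log-likelihood.
        x = np.asarray(x).reshape(1, -1)
        return max(self.models,
                   key=lambda label: self.models[label].score_samples(x)[0])
```

Fitting one mixture per class and predicting by maximum likelihood keeps the classifier simple enough for a small dataset of 20 samples per class.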

Figure 3: Architecture of the different modules to recognise poses and control the robot.

Figure 4: Openpose can track human pose "skeletons" using either the (a) body-25 or (b) coco model.

4 Evaluation

We report here the performance of the pose recognition system and describe our test with an autistic child.

4.1 Performance of the pose recognition system

We evaluated the performance of our pose recognition system on the 8 classes of poses in the rhyme, shown in fig. 5. The images of the database were captured by a webcam with a resolution of 640x480. For openpose, we used the CMU model as it gives the best skeleton detections. Each class had 30 images. The train accuracy is 94.79% while the test accuracy is 94.59%.
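A hypothetical version of this evaluation, reusing the PoseGMMClassifier sketched in section 3, is shown below; the 75/25 train/test split and the synthetic stand-in features are assumptions made for illustration.

```python
# Hypothetical evaluation of the pose classifier (assumed 75/25 split; the
# random features stand in for the real normalised skeletons of the dataset).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 16))      # stand-in for 30 images x 8 classes
y = np.repeat(np.arange(8), 30)     # class labels

def group_by_class(X, y):
    return {label: X[y == label] for label in np.unique(y)}

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = PoseGMMClassifier().fit(group_by_class(X_train, y_train))
train_acc = np.mean([clf.predict(x) == t for x, t in zip(X_train, y_train)])
test_acc = np.mean([clf.predict(x) == t for x, t in zip(X_test, y_test)])
print(f"train accuracy: {train_acc:.2%}, test accuracy: {test_acc:.2%}")
```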

Figure 5: Poses used for the nursery rhyme and the skeletons detected by openpose.

4.2 Test with a subject

We tested the idea of a nursery rhyme game with an autistic child before fully implementing our pose recognition system. For this test, we used a Wizard of Oz protocol: an engineer remotely controlled the robot. We tested two configurations: in the first, the robot is static; in the second, the robot can move around. In both configurations, the robot can sing the rhyme, make its gestures, and wait for the child to perform them. Our subject is a 4-year-old male who shows typical characteristics of infantile autism, featuring alterations of social communication. He also shows limited, repetitive and stereotypical interests and behaviours, conforming to the diagnostic criteria of the DSM-5 (APA). He shows an important developmental delay in language, can only say a few isolated words, mostly as an echo, and avoids eye contact.

We noticed a clear interest of the subject in the robot and a willingness to interact at the beginning of both sessions. However, his attention clearly improved when the robot could move and regain his attention, whereas with a static robot the child was easily distracted by other details of the room.

5 Discussion

These tests show the interest of subjects in our proposed imitation game. However, we still need to improve our system. First, we need to extend our algorithm to recognise not only poses but also dynamic movements, as in [7]. Then, we would need to test our pose recognition system on our subjects, and to implement not only a control system for the arms and voice for the nursery rhyme but also a control of the position of the robot, so as to attract the attention of the subject. Moreover, our system needs a good interaction policy so as to know when to wait for the child to mimic and sing along with the rhyme, when to repeat the line of the rhyme to remind the child of it, or when to continue to the next line to sustain his interest. The implementation of such an autonomous robot would require a fine understanding of the behaviour of the subject, to detect not only whether he has repeated the gestures, but also whether an absence of response is due to a lack of time to respond, a misunderstanding of the gesture, a lack of attention, or a lack of interest.

Figure 6: During the test session with the mobile robot: the subject is making the gesture "Spin the hands" in the imitation game.

These first tests support a use of a humanoid robot as a therapeutic tool to improve imitation skills in ASD children. Indeed, imitation constitutes a pivotal skill in children to enable the development of other cognitive and communication skills.

References

  • [2] NAOqi documentation. 2019. http://doc.aldebaran.com/2-1/naoqi/index.html
  • [3] Sofiane Boucenna, David Cohen, Andrew N. Meltzoff, Philippe Gaussier, and Mohamed Chetouani. 2016. Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars. Scientific Reports 6 (2016), 19908. https://doi.org/10.1038/srep19908
  • [4] Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2017. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. In CVPR.
  • [5] Sachin Chitta, Ioan Sucan, and Steve Cousins. 2012. MoveIt! [ROS Topics]. IEEE Robotics & Automation Magazine 19, 1 (2012), 18–19.
  • [6] Kerstin Dautenhahn, Chrystopher L Nehaniv, Michael L Walters, Ben Robins, Hatice Kose-Bagci, N Assif Mirza, and Mike Blow. 2009. KASPAR–a minimally expressive humanoid robot for human–robot interaction research. Applied Bionics and Biomechanics 6, 3-4 (2009), 369–397.
  • [7] Maxime Devanne and Sao Mai Nguyen. 2017. Multi-level Motion Analysis for Physical Exercises Assessment in Kinaesthetic Rehabilitation. In IEEE International Conference on Humanoid Robots (Humanoids).
  • [8] H. Guedjou, S. Boucenna, and M. Chetouani. 2016. Posture recognition analysis during human-robot imitation learning. In 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). 193–194.
  • [9] Claire AGJ Huijnen, Monique AS Lexis, Rianne Jansens, and Luc P de Witte. 2017. How to Implement Robots in Interventions for Children with Autism? A Co-creation Study Involving People with Autism, Parents and Professionals. Journal of autism and developmental disorders 47, 10 (2017), 3079–3096.
  • [10] Anne-Lise Jouen, Antonio Narzisi, Jean Xavier, Elodie Tilmont, Nicolas Bodeau, Valentina Bono, Nabila Ketem-Premel, Salvatore Anzalone, Koushik Maharatna, Mohamed Chetouani, Filippo Muratori, and David Cohen. 2017. GOLIAH (Gaming Open Library for Intervention in Autism at Home): a 6-month single blind matched controlled exploratory study. Child and Adolescent Psychiatry and Mental Health 11, 1 (22 Mar 2017), 17.
  • [11] Masatoshi Katagiri, Naoko Inada, and Yoko Kamio. 2010. Mirroring effect in 2- and 3-year-olds with autism spectrum disorder. Research in Autism Spectrum Disorders 4, 3 (2010), 474 – 478.
  • [12] Hideki Kozima, Marek P Michalowski, and Cocoro Nakagawa. 2009. Keepon. International Journal of Social Robotics 1, 1 (2009), 3–18.
  • [13] John W. Krakauer and Reza Shadmehr. 2007. Towards a computational neuropsychology of action. In Computational Neuroscience: Theoretical Insights into Brain Function, Paul Cisek, Trevor Drew, and John F. Kalaska (Eds.). Progress in Brain Research, Vol. 165. Elsevier, 383 – 394.
  • [14] Jacqueline Nadel. 2005. L'autisme. Chapter: Imitation et autisme.
  • [15] Giovanni Pioggia, ML Sica, Marcello Ferro, Roberta Igliozzi, Filippo Muratori, Arti Ahluwalia, and Danilo De Rossi. 2007. Human-robot interaction in autism: FACE, an android-based social therapy. In RO-MAN 2007-the 16th IEEE international symposium on robot and human interactive communication. IEEE, 605–612.
  • [16] Wakako Sanefuji, Hiroshi Yamashita, and Hidehiro Ohgami. 2009. Shared minds: Effects of a mother’s imitation of her child on the mother–child interaction. Infant Mental Health Journal: Official Publication of The World Association for Infant Mental Health 30, 2 (2009), 145–157.
  • [17] Felippe Sartorato, Leon Przybylowski, and Diana K. Sarko. 2017. Improving therapeutic outcomes in autism spectrum disorders: Enhancing social communication and sensory processing through the use of interactive robots. Journal of Psychiatric Research 90 (2017), 1 – 11.
  • [18] Jean Xavier, Soizic Gauthier, David Cohen, Mohamed Zahoui, Mohamed Chetouani, François Villa, Alain Berthoz, and Salvatore Anzalone. 2018. Interpersonal Synchronization, Motor Coordination, and Control Are Impaired During a Dynamic Imitation Task in Children With Autism Spectrum Disorder. Frontiers in Psychology 9 (Sep 2018).