A Socially Adaptable Framework for Human-Robot Interaction

03/25/2020 ∙ by Ana Tanevska, et al.

In our everyday lives we are accustomed to partaking in complex, personalized, adaptive interactions with our peers. For a social robot to be able to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behavior to them. One proposed solution to this problem would involve the robot learning how to select the behaviors that maximize the pleasantness of the interaction for its peers, guided by an internal motivation system that provides autonomy to its decision-making process. We are interested in studying how an adaptive robotic framework of this kind would function and personalize to different users. In addition, we explore whether including the element of adaptability and personalization in a cognitive framework will bring any additional richness to the human-robot interaction (HRI), or if it will instead bring uncertainty and unpredictability that would not be accepted by the robot's human peers. To this end, we designed a socially-adaptive framework for the humanoid robot iCub, which allows it to perceive and reuse the affective and interactive signals from the person as input for the adaptation based on internal social motivation. We propose a comparative interaction study with iCub where users act as the robot's caretaker, and iCub's social adaptation is guided by an internal comfort level that varies with the amount of stimuli iCub receives from its caretaker. We investigate and compare how the internal dynamics of the robot are perceived by people in a condition where the robot does not personalize its interaction, and in a condition where it is adaptive. Finally, we establish the potential benefits that an adaptive framework could bring to the context of repeated interactions with a humanoid robot.


1 Introduction

People have a natural predisposition to interact in an adaptive manner with others, instinctively changing their actions, tone and speech according to the perceived needs of their peers [1][2]. Moreover, we are not only capable of registering the affective and cognitive state of our partners, but over a prolonged period of interaction we also learn which behaviors are the most appropriate and well-suited for each one of them individually [3]. This universal trait that we share regardless of our different personalities is referred to as social adaptation (adaptability). Humans are very often capable of adapting to others, even though our personalities may influence the speed and efficacy of the adaptation. This means that in our everyday lives we are accustomed to partaking in complex and personalized interactions with our peers.

Carrying this ability to personalize over to HRI is highly desirable, since user-personalized interaction is a crucial element in many HRI scenarios - interactions with older adults [4][5][6], assistive or rehabilitative robotics [7][8][9], child-robot interaction [10][11], collaborative learning [12][13], and many others. For a social robot to be able to recreate this same kind of rich, human-like interaction, it should be aware of our needs and affective states and be capable of continuously adapting its behavior to them [14][15][16][17][18].

Equipping a robot with these functionalities, however, is not a straightforward task. One potentially robust approach consists of implementing a framework for the robot that supports social awareness and adaptation. In other words, the robot would need to be equipped with the basic cognitive functionalities allowing it to learn how to select the behaviors that maximize the pleasantness of the interaction for its peers, while being guided by an internal motivation system that provides autonomy to its decision-making process.

In this direction, the goal of our research was threefold: to design a cognitive architecture supporting social HRI and implement it on a robotic platform; to study how an adaptive framework of this kind would function when tested in HRI studies with users; and to explore how including the element of adaptability and personalization in a cognitive framework would actually affect the users - would it bring an additional richness to the human-robot interaction, as hypothesized, or would it instead only add uncertainty and unpredictability that would not be accepted by the robot's human peers?

In our past works, we have explored adaptation in child-robot interaction (CRI) in the context of switching between different game-based behaviors [19]. The architecture was affect-based [20], and the robot could express three basic emotions (a ”happy”, a ”sad”, and a ”neutral” state) in a simple way. These emotions were affected by the level of engagement the child felt towards the robot’s current behavior. The robot aimed to keep the child entertained for longer by learning how the child reacted to the switch between different game modalities. We have since expanded on the core concept of a robot’s internal state guiding the adaptation, advancing from the discrete emotional states and one-dimensional adaptation to a more robust framework. Starting from the work of Hiolle and Cañamero [21][22] on affective adaptability, we modified our architecture to use as motivation the robot’s level of comfort, which increases when the robot is interacting with a person and decreases when it is left on its own.

The robotic platform selected for our study was the humanoid robot iCub [23], and the scenario for testing the framework’s functionalities was inspired by a typical interaction between a toddler and its caregiver, where toddlers tend to seek the attention of their caretakers after being alone for a while, but as soon as their social need has been saturated they lose interest and turn their attention to something else [24]. The robot therefore acted as a young child, asking for the caretaker’s company or playing on its own, and the human partners could establish and maintain the interaction by touching the robot, showing their face and smiling, or showing toys to the robot. This scenario was deemed suitable for studying some fundamental aspects of interaction (such as initiation and withdrawal) with a fully autonomous robot behavior and very limited constraints on the human activities, in a seemingly naturalistic context. Furthermore, we verified these assumptions over the course of several validation and pilot studies [25][26].

In this paper we cover the work we did on developing a cognitive framework for human-robot interaction; we analyze the various challenges encountered during the implementation of the cognitive functionalities and the porting of the framework to a robotic platform; and finally we present the user studies performed with the iCub robot, focused on understanding how a cognitive framework behaves in a free-form HRI context and whether humans can be aware of and appreciate the adaptivity of the robot. The rest of the paper is organized as follows: Section 2 presents the adaptive framework for our architecture, followed by Section 3, which presents the experimental methods applied in our study with iCub. Finally, in Sections 4 and 5 we present the findings from our study and touch on our plans for future work.

2 Architecture

A cognitive agent (be it a natural or an artificial one) should be capable of autonomously predicting the future, by relying on memories of the past, perceptions of the present, and anticipation both of the behavior of the world around it and of its own actions [27]. Additionally, the cognitive agent needs to allow for the uncertainty of its predictions and learn by observing what actually happens after an action, assimilating that perceptive input into its knowledge about the world and adapting its behavior and manner of doing things along the way.

Following this, cognition can be defined as the process by which an autonomous agent perceives its environment, learns from experience, anticipates the outcome of events, acts to pursue goals and adapts to changing circumstances. Our work focused on developing a cognitive architecture for autonomous behavior, supporting all of these functionalities, for generalized applicability on any robotic platform for HRI.

It is important to note that there are several well-studied cognitive architectures in the literature designed for more context-free human-robot interaction, or even, in a broader sense, for general intelligence, such as ACT-R/E [28], SOAR [29] and Sigma [30], to name just a few; however, we opted for a simpler approach in order to have more freedom for future expansions of the architecture. Over its various iterations, our framework was tested on the iCub humanoid robot. The architecture relies on the robot evaluating the affective state of its human peers and their mode of interacting with the robot as factors which determine the robot’s own internal emotional condition, and its subsequent choice of behavior.

Starting from this foundation, our framework for the iCub consisted of the following modules and their functionalities (a high-level sketch of how they are wired together follows the list):

  • Perception module, processing tactile and visual stimuli;

  • Action module, tasked with moving iCub’s joint groups;

  • Adaptation module, active only in the adaptive profile for the robot and in charge of regulating iCub’s social need.
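To make the data flow concrete, below is a minimal sketch of how such a perception-action loop could tie the three modules together. The read/update/execute interface names are hypothetical, introduced only for illustration; the actual modules communicate over iCub's YARP middleware.

```python
# Illustrative sketch of the perception-action loop connecting the
# three modules (interface names are hypothetical; the real system
# runs on iCub's YARP middleware).
def control_loop(perception, adaptation, action, adaptive_profile=True):
    while True:
        face, touch, toys = perception.read()      # visual + tactile stimuli
        behavior = adaptation.update(face, touch,  # regulate iCub's comfort
                                     adapt=adaptive_profile)
        action.execute(behavior)                   # move iCub's joint groups
```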

2.1 Perception module

The perception module was tasked with processing stimuli from two sensor groups: tactile stimuli - the data from the skin sensor patches on iCub’s arms and torso, which carried information about the size of the touched area (expressed in number of taxels - tactile elements) and the average pressure of the touch [31]; and visual stimuli - the images coming from iCub’s eye camera, jointly analyzed for detecting the presence of a face and extracting the facial expression of the person, as well as for detecting the presence of some of iCub’s toys. The module was realized using iCub’s middleware libraries [23] for processing the data from the skin covers on its torso and arms, as well as the open-source library OpenFace [32] for extracting and analyzing the facial features of the caretaker, represented by their facial action units (AUs) [33].

The data from the OpenFace library were analyzed to obtain the most salient action units from the detected facial features. We considered as positive-associated AUs smiling and cheek raising with crinkling of the eyes, and as negative-associated AUs brow lowering, nose wrinkling and raising the upper lip in a snarl. The presence of all positive AUs was classified as ”smiling” (a mouth smile without the positive upper-face AUs signified a fake smile, and was not classified as ”smiling”); the presence of only the brow lowering, without additional negative AUs, was classified as ”contemplating”; whereas the presence of all negative AUs signified ”frowning”. If neither of these AU groups was present in the frame, the user’s affective expression was classified as ”neutral”.
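As an illustration, a minimal sketch of this classification logic is given below. The AU numbers follow the standard FACS convention (AU6 cheek raiser, AU12 lip corner puller, AU4 brow lowerer, AU9 nose wrinkler, AU10 upper lip raiser); the exact OpenFace interface and activation thresholds are simplified here.

```python
# Sketch of the AU-based expression classifier (AU numbering follows
# the FACS convention; thresholding and the OpenFace interface are
# simplified assumptions).
POSITIVE_AUS = {6, 12}     # a genuine smile needs the upper-face AU6 too
NEGATIVE_AUS = {4, 9, 10}  # frowning involves all three negative AUs

def classify_expression(active_aus: set) -> str:
    """active_aus: set of AU numbers detected by OpenFace in one frame."""
    if POSITIVE_AUS <= active_aus:
        return "smiling"        # a mouth smile alone (AU12) is not enough
    if NEGATIVE_AUS <= active_aus:
        return "frowning"
    if 4 in active_aus:
        return "contemplating"  # brow lowering without other negative AUs
    return "neutral"
```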

Figure 1: The two outputs from the perception module. (Informed consent of participants has been obtained for the use of their photo).

In addition to the affect detection functionality, the visual perception also included a color detection functionality, able to detect and track a set of predefined colors by looking for contours in the image of a certain size (fitting the size of the toys) and color. Figure 1.a shows the simultaneous detection and tracking of the participant’s face and a toy - the center of the face is indicated with a pink circle, the center of the object with a blue one, and the smaller purple circle indicates where iCub’s attention is at that moment, i.e. which stimulus it is tracking. Figure 1.b instead shows detected touch on the tactile covers of iCub’s torso. The skin data required some additional processing post-extraction: since during prolonged interaction the tactile sensors tended to overheat and give phantom signals, the data were filtered to register as touch only areas larger than 5 taxels with an average pressure greater than 12.0. These data were processed separately for the torso and both arms, and sent to the perception module.
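A minimal sketch of this phantom-touch filter, assuming the per-patch data have already been reduced to a taxel count and an average pressure (the example readings are illustrative):

```python
# Sketch of the phantom-touch filter (the 5-taxel and 12.0-pressure
# thresholds are as reported in the text; the data layout is assumed).
MIN_TAXELS = 5        # minimum contact area, in taxels
MIN_PRESSURE = 12.0   # minimum average pressure

def valid_touch(active_taxels: int, avg_pressure: float) -> bool:
    """Reject phantom signals from overheating skin sensors."""
    return active_taxels > MIN_TAXELS and avg_pressure > MIN_PRESSURE

# Applied separately to the torso and each arm before the result is
# forwarded to the perception module (illustrative readings):
for part, (taxels, pressure) in {"torso": (8, 14.2),
                                 "left_arm": (3, 9.1),
                                 "right_arm": (12, 15.0)}.items():
    print(part, valid_touch(taxels, pressure))
```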

2.2 Action module

The action module communicated with iCub’s middleware [34] and performed a finite set of actions by controlling the specific body part in the joint space. iCub held a box of its toys and could move its arms in an extending or flexing motion, bringing the box closer to the person or away from them. A looking action was performed every time iCub changed its gaze focus, utilizing motions of the neck and saccadic motions of the eyes. When iCub wanted to engage with the caretaker, it would straighten up and look for the person, and then during the interaction engage in gaze-cueing and looking at objects; when iCub was oversaturated and wanted to disengage, it would lean down to the table and away from the person, looking down at its toys and ignoring further attempts to engage.

2.3 Adaptation module

This module maintained iCub’s comfort and guided the adaptation process. The motivation in our architecture was represented by iCub’s striving to remain at an optimal level of comfort, achieved by continuously adapting and changing the parameters of the motivation functionality. iCub’s comfort grew when a person was interacting with it, and the stimuli were weighted accordingly - a multimodal interaction (receiving both visual and tactile stimuli) or a longer, steadier interaction rated higher and increased the comfort faster. Inversely, a lack of any stimuli caused the comfort value to decay. iCub’s social architecture was also equipped with a saturation and a critical threshold, reached when the interaction was getting too intense or too sparse, respectively.

At the beginning of the interaction with each user, iCub started with its comfort set at 50% of the maximum value it could have. The comfort level was then updated continuously at the beginning of each cycle of the control loop of the interaction (referring here to the perception-action control loop of iCub’s architecture), in the following manner:

if (F[t] || T[t]): C[t] = (F[t] + T[t] + β·C[t-1]) / (β + 0.1)
else: C[t] = τ·C[t-1]

where C[t] indicates the current comfort level and C[t-1] the previous comfort level; F[t] and T[t] are the input stimuli from the visual and tactile sensors respectively; and τ and β are the social variables dictating the decay and growth rate of the comfort value, with their initial values set at τ = 0.998 and β = 500.

When there was a human interacting with the robot (iCub was perceiving a face in front of it, or registering touch with its skin), the comfort C[t] at time t was updated using the first formula, which takes into consideration both modalities in which the user could interact with iCub, as well as the previous level of comfort C[t-1]; on the other hand if iCub was not currently engaged in interaction, its comfort was updated as depicted in the second formula, which calculated the decay of the comfort.

The variables β and τ were the growth and decay rates respectively, and were part of the internal variables that iCub could modify in its adaptation process. β modulated how much C[t-1] was taken into consideration: a smaller β brought a more rapid growth of the comfort when stimuli were detected, and a larger value a slower, steadier growth. τ indicated how quickly C[t] decayed without stimuli; the smaller the value of τ, the more drastic the decay of the comfort. (The manner of modifying the β and τ variables was carried over from the related research done in [21][22]. A previous simulation study [26] explored in more depth how the behavior of the architecture could be affected by varying the initial values of these rates, using different steps in the adaptation, and starting with different critical and saturation thresholds.)
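Putting the update rule and its two rates together, a minimal sketch of the comfort dynamics could look as follows, assuming a 0-100 comfort scale and per-cycle stimulus values (the class and method names are ours, for illustration):

```python
# Minimal sketch of the comfort dynamics, assuming a 0-100 comfort
# scale; tau and beta initial values are the ones reported above.
class ComfortModel:
    def __init__(self):
        self.comfort = 50.0   # interaction starts at 50% of the maximum
        self.tau = 0.998      # decay rate (no stimuli present)
        self.beta = 500.0     # growth rate (stimuli present)

    def update(self, f_t, t_t):
        """One control-loop cycle: f_t and t_t are the visual and
        tactile stimulus values for this frame (0 when absent)."""
        if f_t or t_t:
            # Growth: beta weights how much C[t-1] dominates the update,
            # so a larger beta gives a slower, steadier growth.
            self.comfort = (f_t + t_t + self.beta * self.comfort) / (self.beta + 0.1)
        else:
            # Decay: exponential, governed by tau.
            self.comfort = self.tau * self.comfort
        return self.comfort
```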

iCub’s architecture allowed for adaptation on two dimensions - the frequency of interaction initiation and the duration of the interaction. The first dimension affected τ, while adaptation on the second dimension instead modulated β. After each instance of iCub adapting on either dimension, it entered a suspension period of 20 seconds, during which it attempted to recover and was not open to interaction with the users. The adaptation process had the following pattern (sketched in code after the list):

  • If the comfort reached the saturation limit: increase the value of β by 500, and during the period of suspension ignore all stimuli. The resulting lack of sensitivity to stimulation leads to a decrease in the comfort value back to the optimal zone.

  • If the comfort dropped to the critical level: increase the value of τ by 0.005 and attempt to engage the caretaker; if ignored, enter the suspension period and simulate stimuli to itself so as to recover back to the optimal comfort level.
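Continuing the ComfortModel sketch from above, the two adaptation rules could be expressed as below; the increments are the ones reported in the text, while the suspension behavior is only summarized in comments:

```python
SUSPENSION_SECONDS = 20  # recovery period after each adaptation

def on_saturation(model):
    """Interaction too intense: slow down the comfort growth."""
    model.beta += 500.0
    # During the suspension period all stimuli are ignored, so the
    # comfort decays back toward the optimal zone on its own.

def on_critical(model):
    """Interaction too sparse: slow down the comfort decay."""
    model.tau += 0.005
    # iCub first attempts to engage the caretaker; if ignored, it
    # suspends and feeds itself simulated stimuli until the comfort
    # recovers to the optimal level.
```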

To give a sense of the framework’s dynamics - the initial values of the comfort variable, τ, and β provided for 1.5 minutes of extreme interaction before hitting a threshold (1.5 minutes of zero stimuli for a critical threshold, and 1.5 minutes of full multimodal interaction for saturation). The time limits increased after each architecture adaptation; e.g. after 2 adaptations prompted by critical triggers, iCub could be left by itself for 7.5 minutes before hitting the threshold again.

Originally, the architecture adapted by immediately resetting the comfort level back to the optimal level and continuing with the interaction. The suspension period was included only after the validation of the original architecture with participants, during which it was realized that the robot’s uninterrupted responsiveness did not allow participants to infer that they were doing something not ideal for the robot. For example, in the case of saturation, after the instantaneous withdrawal the robot was immediately ready to respond again, which induced participants to continue interacting in the same manner and trigger saturation once more.

3 Experimental methods

We had already established in a previous exploratory study that a game-based interaction scenario would not provide the desired amount of affective expressiveness in participants [35]. Since we had also seen the effectiveness of a caretaking scenario in prior pilot and validation studies with the iCub robot [25], we decided to continue in the same direction and expand the existing experimental setup. As before, the interaction scenario placed iCub in the role of a toddler exploring and playing with its toys, while the participants acted as iCub’s caretaker.

3.1 Participants

Twenty-six participants in total took part in the caretaker study. The youngest participant was aged 18 and the eldest 58, with an average age of 32.6 years (SD = 11.98). The gender ratio between the participants was 15:10:1 (M:F:NBGQ, where NBGQ denotes non-binary/genderqueer).

3.2 Experimental setup

Since we had already explored the preference of participants for an adaptive dynamic robotic profile over a static scripted one [25], we now placed the focus on a different task - evaluating in greater detail the effect of the adaptation modality in two otherwise equally dynamic and responsive behavior profiles. To that end, the two different ”personalities” of iCub were both equipped with the full cognitive architecture described in the previous section, the only difference being that one profile had the adaptation functionality disabled.

In both behavior profiles iCub’s behavior was guided by its social skills, and in both conditions iCub began the interaction with the optimal values of the growth and decay variables as selected after the simulation study [26]. The only variation between the profiles was that in the fixed profile (F) the values remained unchanged throughout the interaction (regardless of how many times the boundaries were hit), whereas in the adaptive profile (A) the architecture personalized to each participant by modifying the values after each threshold hit.

The interaction between iCub and the participants was mostly free-form: while iCub could try to initiate interaction during the session, and would actively ask for it after hitting a critical or saturation point, for the most part participants had the liberty of guiding the interaction. During the entire interaction iCub could receive and process stimuli from the participants, which could be tactile (contact with the skin patches on iCub’s arms and torso) or visual (either observing the participant’s face at an interacting distance and evaluating the facial expressions, or detecting toys by recognizing their color and shape).

In the laboratory iCub was positioned in front of a table (as shown in Figure 2), holding a box with toys, some of which were out of the box and spread across the table at the beginning of the interaction. The participants were offered a chair in front of the table facing iCub, but they also had the freedom to sit or walk anywhere in the room.

Figure 2: The layout of the laboratory setup. Informed consent of participants has been obtained for the use of their photo.

When iCub was in a state of interacting with its caretaker, it maintained mutual gaze and tracked the person’s face, or, if the person was playing with some of the toys, it would track the toy nearest to it. If the person was not showing any toys to iCub, it would occasionally break mutual gaze and try to indicate toys to the person by looking down at a toy and back up at the person (gaze-cueing), by saying the name of the toy, or by moving the box towards the participant. In order to avoid giving participants the impression that iCub could understand them, the verbal utterances (the names of the colours iCub could recognize, as well as some encouraging and protesting sounds to attract attention or to disengage) were recorded in Macedonian and then processed and low-pass filtered, so as to make them sound more robotic as well as unintelligible to the participants.

3.3 Secondary task

With the goal of further exploring the potential benefits of having critical and saturation thresholds in the architecture, we devised an approach to manipulate the behavior of the participants by introducing a timed secondary task at a certain point in the interaction. While in the pilot study any threshold hits were due to the participants’ own behavior and way of interacting, there was no possibility to observe what the behavior would look like if participants suddenly had a secondary task to fulfill while the robot was still asking for their attention.

For this, we needed a task that would place a cognitive load on the participants, while being neither too time-consuming (like sudoku) nor too attention-demanding or distracting (like a phone call during which participants would have to write down some information). The solution was to present participants with simple mathematical problems involving the basic arithmetic operations, which meant finding a set of numeric puzzles that would be both simple enough to solve in a short time interval and appealing and interesting. The final choice for the secondary task was the pollinator puzzle (https://mathpickle.com/project/pollinator-puzzles/).

The pollinator puzzle is a logic-based, combinatorial number-placement puzzle, where ten empty fields are arranged in a flower-like shape (see Figure 3). The digits 0-9 need to be placed in the empty fields, each digit appearing exactly once, in such a way that each pair of digits gives the specified result for the operation on the petals. Each puzzle has only one possible solution following these rules.

Figure 3: Sample pollinator puzzle
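To make the puzzle structure concrete, below is a brute-force sketch of how such a puzzle could be checked for its unique solution. The petal layout in CONSTRAINTS is hypothetical, since each real puzzle defines its own pairs of fields and operations:

```python
# Brute-force sketch for a pollinator-style puzzle; the constraint set
# below is hypothetical, but the search idea is the same for any layout.
from itertools import permutations
from operator import add, sub, mul

# Each constraint: (field_i, field_j, operation, required_result);
# for sub, the order of the fields matters.
CONSTRAINTS = [
    (0, 1, add, 9),
    (1, 2, mul, 14),
    (2, 3, sub, 5),
]

def solutions(constraints, n_fields=10):
    """Yield every placement of the digits 0-9 (each used exactly once)
    that satisfies all petal constraints."""
    for digits in permutations(range(10), n_fields):
        if all(op(digits[i], digits[j]) == res
               for i, j, op, res in constraints):
            yield digits

# A full, well-posed constraint set leaves exactly one solution.
```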

3.4 Protocol

Participants were evenly distributed into two groups of 13 people, where one group interacted first in the adaptive and then in the fixed dynamic setting, and the other vice versa. A session of interaction in either profile setting lasted 12 minutes, divided into three intervals of 4 minutes, during the middle of which participants were asked to work on the secondary task.

Between the two sessions of interaction, as well as at the beginning and end of the experiment, participants answered questionnaires (more details on the questionnaires in the following subsection), bringing the total time commitment for the participants to around 45-50 mins. There were two environments in which the participants were stationed during their visit - the office setting and the laboratory setting.

Upon their arrival at the institute, participants were first brought to the office setting, where they were presented with the consent form and given time to read through and sign it. Then, while still in the office, participants were informed that during the experiment there would be several moments at which they would be given different forms of questionnaires - related to their personality, their relationship to iCub, as well as creativity and problem solving. This was followed by the familiarization phase for the pollinator puzzle. The concept and rules of the puzzle were explained to the participants, and they were presented with the first pollinator puzzle (whose purpose was to obtain a baseline for each participant’s performance). The participants were timed for 4 minutes (the time allotted for the puzzle during the familiarization phase was the same as during the robot interaction). After the time ran out (or, if participants completed the puzzle sooner, after they were done), we escorted the participants from the office to the laboratory.

On the way to the laboratory we briefed participants on the experiment; more specifically, they were told that they would have roughly half an hour of free interaction with the robot iCub Reddy, which is equipped with a toddler-like personality. We informed them of the modalities they could use to interact with iCub, albeit in an informal way - ”iCub can see you, it can feel you when you pet it, it likes hearing you talk to it even though it does not understand you, it speaks its own language”. (Participants who spoke only Italian were briefed in Italian instead of English; as Italian has no gender-neutral pronoun, iCub was referred to as ”him” (lui) in Italian.) Participants were purposefully informed that iCub likes hearing them, because in our previous pilot study people who knew iCub was not capable of speech recognition did not talk to the robot at all.

Additionally, participants were reassured that any perceived lack of interest or reciprocity on iCub’s part would be due to the robot switching its attention to something else (in line with its toddler personality), and not due to them interacting ”in a wrong way”. This was deemed necessary to include in the protocol due to a similar realization from the previous pilot, where some people grew worried when iCub switched its attention, thinking they ”did something wrong”.

3.5 Data Analysis

The data collected during the study consisted of four main sources - the data collected from the questionnaires filled by the users; the evaluations from the filled pollinator puzzles; the video and audio recordings from the external camera; and the data collected by the robot during the interaction phases from the tactile sensors, internal camera and state machine output.

3.5.1 Questionnaires

Participants responded to questionnaires at three points during the interaction study: the first set of questionnaires was administered after they entered the lab with the robot but before beginning the interaction, the second set halfway through the interaction (which in reality was the moment after which the robot switched personalities, unbeknownst to the participants), and the last set at the end of the interaction. (These three points in the interaction are labelled as PRE, BETWEEN and POST.)

All three sets of questionnaires collected the IOS rating of closeness between the participant and the robot [36], as well as the Godspeed questionnaires on animacy and likeability [37]. Additionally, the second and third sets included a qualitative question asking participants to describe the interaction using three adjectives, as well as a set of questions related to how they perceived the interaction with the robot. Finally, the third set also included two descriptive questions related to the different sessions, and the TIPI questionnaire.

3.5.2 Pollinator puzzle

Participants did three rounds of the pollinator puzzle in total - one as a baseline before starting their interaction with the robot, one during the first interaction session and one during the second interaction session. There were two evaluation metrics for the puzzles - the percentage of filled fields (out of the 10 empty fields) and the percentage of accurately filled fields.


A combination metric was then designed to obtain a single evaluation value: if X was the percentage of completeness and Y the percentage of accuracy, the final metric Z was obtained as Z = 0.4*X + 0.6*Y. The combination metric was designed to take both accuracy and completeness into account as factors, while rewarding accuracy more.
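For instance, assuming both percentages are computed over all ten fields, a puzzle with eight fields filled (X = 80) of which six are correct (Y = 60) would score Z = 0.4·80 + 0.6·60 = 68.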

3.5.3 Internal data from iCub

From the iCub itself we recorded the tactile and visual data, as well as all of the values of the architecture - the fluctuations of the comfort value and the changes to the decay and growth rates. The data from the architecture were annotated, for each frame received by the robot, with a timestamp and the state (of the state machine) iCub was in.

4 Results

In this study we were interested in exploring whether adaptation is a necessary functionality for human-robot interaction, particularly in the context of free-form social HRI. If there is no clear task for the human to perform with the robot, does the adaptive functionality bring anything additional to the interaction? To address this, three related questions were formulated:

  • How much would the adaptive architecture change for each participant during the interaction, and how would people react to such personalization? (answered in subsection 4.1)

  • What would be the participants’ subjective evaluation of the interaction, and would it depend on the adaptivity of the robot? (answered in subsection 4.2)

  • Would participants change their way of interacting across modalities or with the robot’s adaptivity level? (answered in subsection 4.3)

4.1 Architecture dynamics

The cognitive framework developed for iCub was a continuously-changing one, learning by modifying its social variables and adapting to the person’s frequency and intensity of interaction. This means that interacting with the robot provoked changes in the internal states of iCub and its comfort level. Every time one of the robot’s thresholds was hit, iCub adapted the appropriate comfort variable and its behavior changed accordingly.

If the critical threshold was hit, signifying a lack of stable interaction with the person, iCub modified its decay rate and as a result could remain in an idle state for longer periods of time before needing to interact with the person again. On the other hand, hitting the saturation threshold meant iCub was engaged with a person who was more intense in the way they behaved and interacted with iCub (using multiple modalities and interacting for a long, stable period of time), so iCub modified its growth rate, which enabled it to stay in interaction for a longer time.

Figure 4 shows the behavior of the architecture and the flow of iCub’s comfort value for two different participants in different sessions of interaction. Figure 4.a illustrates the behavior of the architecture for a participant who had their first interaction with the robot in the Fixed session. Here the critical threshold was hit twice while the participant was performing the secondary task and ignored the robot’s attempts to engage, and an additional three times in the last phase of the session, after the timer for the secondary task ran out; in these three instances the participant was no longer distracted and answered iCub’s calls.

There are two reasons why the three responded calls follow so closely one after the other. The first is that iCub was in the Fixed personality, so it did not adapt to the person’s reduced interaction during the secondary task; this explains why the first three (out of the five in total) threshold hits happened at an identical, regular period. The last two threshold hits instead happened so close to each other because the participant responded unstably to iCub’s calls, giving brief stimuli and then turning their attention to something else, which did not provide iCub with enough stability to be comforted. Only in the final instance of the critical threshold being hit was the participant’s response a more stable one, interacting on several modalities, and as a result iCub’s comfort resumed growing.

Figure 4: Architecture dynamics: upper graphs depict the variations in iCub’s comfort value over the course of an interaction session, lower graphs depict the occurrence of stimuli. Critical hits where participants responded to the robot’s call for engagement are shown as yellow stars, ignored critical hits as red dots. (a) FA participant interacting in the F session, 5 critical-threshold hits. (b) AF participant interacting in the A session, 3 critical-threshold hits.

Figure 4.b instead shows the interaction between iCub and another participant interacting with it for the first time, but in the Adaptive session. This participant was less interactive than the participant in Figure 4.a, but even so the total number of threshold hits was three, of which only one was not answered. This demonstrates the effectiveness of the adaptivity of the architecture, which can also be observed in the decay slope during the secondary task: after two adaptations of the architecture the decay slope is much slower, allowing iCub not to hit another critical point until very near the end of the interaction.

Independently of the order of the interaction sessions (AF or FA) or the phase of interaction, people hit a threshold on average 1.42 times per session overall: 1.79 times on average during the first session and 1.04 during the second one.

The absolute number of threshold hits summed over all participants was 68, of which only 2 (3%) were saturation hits, with all remaining ones (97%) being critical. The first two participants were excluded from these calculations due to technical reasons rendering their number of threshold hits unusable.

Figure 5 illustrates the effect of the order of the sessions on people’s first interaction with iCub. Overall, the participants in the FA group had noticeably more threshold hits in the Fixed session than in the Adaptive one, whereas the participants in the AF group had a roughly similar number of total threshold hits in the Fixed and Adaptive sessions.

Figure 5: Comparison of average amount of threshold hits per session and order group

This was additionally confirmed by a mixed-model 2-factor ANOVA, with SESSION (levels: adaptive and fixed) and ORDER (levels: AF, FA; signifying the groups of participants) as the within and between factors respectively. A difference was considered significant for p < 0.05.

A significant difference was found both over SESSION (F(1,22) = 7.87, p = 0.01) and for the interaction between the two factors (F(1,22) = 5.27, p = 0.03), with the difference between the two sessions for the FA group confirmed by a Bonferroni test.
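For reference, a 2-factor mixed-model ANOVA of this kind could be reproduced as sketched below; pingouin is one library offering such a test, and the data frame here is purely illustrative, not the study data.

```python
# Sketch of the 2-factor mixed-model ANOVA (illustrative data, not the
# study data; pingouin is one library offering this test).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "session": ["adaptive", "fixed"] * 6,          # within factor
    "order": ["AF"] * 6 + ["FA"] * 6,              # between factor
    "hits": [2, 3, 1, 2, 0, 1, 1, 4, 2, 5, 1, 3],  # threshold hits
})

aov = pg.mixed_anova(data=df, dv="hits", within="session",
                     subject="participant", between="order")
post = pg.pairwise_tests(data=df, dv="hits", within="session",
                         subject="participant", between="order",
                         padjust="bonf")  # Bonferroni-corrected post-hocs
print(aov.round(3))
```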

A deeper analysis of the individual modes of behavior is presented in Figure 6. This analysis consisted of measuring the changes in the architecture for each participant, comparing across the two session orders how many times the thresholds of the architecture were hit, as well as how many times people responded to the calls for interaction at critical hits.

Figure 6: Number of occurred and responded threshold hits per session for FA and AF participants

While a large variety in the number of threshold hits (ranging from 0 to 6) can be seen in both conditions and across both session orders, the majority of people showed a tendency to respond to the robot’s calls. Some participants never hit a critical or saturation threshold (indicated at the end of both figures), but there were only two participants who did not respond to the robot’s calls for engagement, suggesting that while iCub was adaptive only in some cases, participants always adapted to the robot.

The analysis of the architecture dynamics highlighted the effect of which session participants started with, as shown in Figure 5: FA participants had a more challenging first session, since it was both their first session of interaction with the robot and the session in which the architecture did not adapt to their interaction particularities. The AF participants’ first session of interaction with the robot, on the other hand, was the one where iCub adapted its comfort variables to their interaction profiles, which contributed to them having fewer threshold hits in their Fixed session compared to their FA counterparts.

4.2 Subjective evaluation

The subjective evaluation included exploring the explicitly-expressed preference of participants for interacting with iCub in the A or F session, their ability to differentiate between the two profiles of the robot, and whether their IOS/Godspeed ratings changed as a function of the time spent with the robot or of the robot’s adaptivity.

Figure 7: Participants in interaction with iCub. (a) Participant working on the pollinator puzzle. (b) Participant interacting with iCub. Informed consent of participants has been obtained for the use of their photo.

In our work we wanted to explore the comparison between two similarly dynamic and responsive architectures, where the only difference between them was the inclusion of the adaptive component.

We were curious to investigate the effect of iCub’s adaptivity level on the participants’ self-rated feelings of closeness with the robot (the IOS rating) and on the participants’ evaluation of the robot’s animacy and likeability (the Godspeed ratings). Figures 8 and 9 show the distributions of the participants’ IOS and Godspeed evaluations before interacting, between the two interaction sessions, and at the end of the interaction.

Statistical analysis was performed on all IOS and Godspeed ratings. To better assess the noted differences between the ratings, a mixed-model 2-factor ANOVA was run, with PHASE (levels: pre, between and post; signifying the rating pre-experiment, between-sessions and post-experiment) and ORDER (levels: AF, FA; signifying the groups of participants) as the within and between factors respectively. The ANOVA results follow below:

  • The IOS rating of closeness increased significantly over PHASE (F(2,48) = 19.88, p < 0.001), while the factor ORDER (F(1,24) = 0.128, p = 0.723) was not significant, nor was the interaction (F(2,48) = 0.46, p = 0.636); a Bonferroni test found significant differences between the 1st and 2nd phase and between the 1st and 3rd phase, but no statistically significant increase between the 2nd and 3rd phase;

  • The Godspeed rating of Animacy increased significantly over PHASE (F(2,48) = 5.65, p = 0.006), while the factor ORDER (F(1,24) = 0.798, p = 0.38) was not significant, nor was the interaction (F(2,48) = 0.03, p = 0.967); a Bonferroni test found a significant difference only between the 1st and 3rd phase;

  • The Godspeed rating of Likeability increased significantly over PHASE (F(2,48) = 6.28, p = 0.003), while the factor ORDER (F(1,24) = 2.642, p = 0.117) was not significant, nor was the interaction (F(2,48) = 0.33, p = 0.72); a Bonferroni test found a significant difference only between the 1st and 3rd phase.

From this analysis we observed that participants’ rating of their perceived closeness with iCub changed significantly as a result of spending more time in interaction with it, and not as a function of the adaptivity of the robot, which could signify that participants did not perceive any structural difference between the two sessions.

Figure 8: IOS ratings distributions across order and phases
Figure 9: Godspeed ratings distributions across order and phases

This may depend on the fact that the two sessions were not particularly different for participants who did not push the adaptivity of iCub extensively. Alternatively, notwithstanding the differences experienced by the participants, both sessions could have been equally ”likable” to them.

While people seemed more consistent in their Godspeed ratings across all sessions, their IOS ratings tended to be more variable, with bigger differences (usually of 1, but also reaching 2 and 3) between the different sessions. However, the same conclusion was evident here as well - the IOS closeness rating increased for most people as a consequence of the prolonged time spent with the robot, and not as a result of the adaptability. It would seem that although there were differences between the two sessions, people did not change their rating because of them.

This was also confirmed by the free-form questions participants had to answer after the second session:

- Which session did you prefer and why?

- What was the difference (if any) you noticed between the two sessions of interaction?

77% of participants answered that they preferred the second session because they felt iCub was more animated or interactive towards them, 19% replied that they enjoyed both sessions equally, and only one participant said he did not enjoy either of the two. Additionally, 27% answered that they did not perceive any difference between the sessions, 46% perceived the robot as being more interactive in the second session (however, of those 46%, half were FA and half AF, suggesting chance-level responses), and 23% said they learned how to interact better in the second session.

4.3 Behavioural evaluation

After analyzing the subjective evaluation, the final step was processing the behavioral results, which measured how the interaction between iCub and the participants actually unfolded. The behavioral evaluation analyzed whether people actually interacted differently with the robot across different phases and different modalities, considered again as a function of the time spent with the robot or the session order. An additional analysis looked into how the participants’ behavior changed during the dual task.

This section covers the results across the different modalities of interaction - i.e. how people interacted with iCub on the three modalities of visual-face, visual-objects and tactile; the distribution of iCub’s states during the interaction and across all three phases of each session; and finally how the secondary task impacted the interaction. Figure 10 shows the distribution of the states the robot was in during the interaction.

During the interaction sessions, iCub’s behaviour was guided by a state machine. The three main states were idle, when the robot was left without stimuli from the user and interacted by itself; interact, when engagement had happened by either party; and suspend, which iCub entered after hitting a threshold and, for critical hits, having its call for engagement go unanswered. There were also minor, transitional states signaling a change in behavior or an occurrence of the architecture adapting; however, since these lasted only a few frames (less than three seconds of real interaction time), they were not taken into account.
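A minimal sketch of these three major states and their transitions, with the transition logic simplified from the description above:

```python
# Sketch of the three major interaction states (transition conditions
# are simplified from the textual description).
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no stimuli from the user; iCub plays by itself
    INTERACT = auto()  # engagement initiated by either party
    SUSPEND = auto()   # unanswered critical call / saturation recovery

def next_state(state, stimuli_present, threshold_hit, call_answered):
    if state is State.SUSPEND:
        return State.IDLE  # once the 20 s suspension period elapses
    if threshold_hit and not call_answered:
        return State.SUSPEND
    return State.INTERACT if stimuli_present else State.IDLE
```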

From these results several conclusions can be drawn:

  • Participants who interacted with iCub for the first time in the Fixed condition spent less time in interaction in the very first phase of the first session compared to participants who had their first interaction in the Adaptive session. This effect is not present in the second session, due to the loss of the novelty effect, since both groups had already interacted with the robot;

  • In the third phase (the interaction after the secondary task) there seems to be compensation for having previously ignored the robot, in the form of increased interaction. This can be seen especially in the Fixed session (regardless of order group), potentially because the robot asked for more attention without adapting to the users ignoring it;

  • The distribution pattern of the states in the last phase of the first session carries over to the first phase of the second session, indicating a training effect, or learning how to interact. This effect was particularly unexpected for the participants of the AF group, since in their second session of interaction the architecture values of the robot were reset, so the AF participants essentially interacted first with a robot that adapted to them, and then with one that had lost its adapted specifics;

  • The interactive behavior during the dual task changes significantly for the FA participants between the two sessions. Having ignored the robot during the secondary task in the first session (F), where it did not adapt to them, they seem to overcompensate in the secondary task of the second session (A), with a large jump in interactivity. This may be a combined effect of overcompensation and the added adaptivity of the robot;

  • The interactive behavior during the dual task stays nearly identical for the AF participants between the two sessions. With the robot adaptive in the first session (A), it adjusted to them and spent significantly more time in interaction than in the first session of the FA participants; however, since they did not perceive the robot as particularly annoying or demanding of attention in their first session, there is no compensation in the second session.

Figure 10: Distribution of the major states during the interaction, shown across phases and sessions

To evaluate the difference between the interaction patterns in the different phases, a 3-factor mixed-model ANOVA was run, with SESSION (levels: adaptive or fixed) and PHASE (levels: 1, 2 and 3; signifying the 1st, 2nd and 3rd phase of an interaction session) as the two within factors, and ORDER (levels: AF, FA) as the between factor. The percentage of time the robot spent in interaction varied significantly both over SESSION (F(1,22) = 6.08, p = 0.02) and PHASE (F(2,44) = 20.14, p < 0.001), while the factor ORDER and the interaction were not significant. The Bonferroni test showed significant differences between the 1st and 2nd, and the 2nd and 3rd phases, but no significant difference between the 1st and 3rd phase.

Since there was a noticeable difference in how people behaved with the robot while they were tasked with the pollinator puzzle, the next analysis focused on the pollinator puzzle scores. Figure 11 shows the averaged pollinator scores for both groups (AF and FA) over the three times they filled in the puzzle (baseline, first session, second session); it can be noticed that there is no significant difference in the average score, signifying that even in the phases when the robot was non-adaptive, on average participants could complete the task to some extent.

Figure 11: Average pollinator scores for the three times participants did the puzzle

With this analysis it was established that the behavior of people during the secondary task (interacting with the robot or ignoring it in order to focus on the task) did not strongly impact their pollinator score. In other words, how good people were at the task was subjective to each person, and did not depend on whether they interacted a lot with the robot or ignored it. The last step of the analysis looked into the modalities participants used when interacting with iCub.

What can be observed from the modality graphs in Figure 12 is that during the secondary task there is, understandably, the biggest drop in the face as input, with compensation through touch, which stays similar and does not show such a significant drop. The patterns in the last phase of session 1 tend to be nearly identical to those in the first phase of session 2, possibly because the mode of interaction carries over between the two sessions; a similar pattern can also be observed in the analysis of the state distributions. To evaluate the difference between the interaction patterns in the different phases, three 3-factor mixed-model ANOVAs were run, with SESSION (levels: adaptive or fixed) and PHASE (levels: 1, 2 and 3; signifying the 1st, 2nd and 3rd phase of an interaction session) as the two within factors, and ORDER (levels: AF, FA) as the between factor. A difference was considered significant for p < 0.05.

  • Touch: the percentage of time tactile stimuli were perceived varied significantly only over the interaction (session*phase*order) (F(2,44) = 6.22, p = 0.004), while SESSION (F(1,22) = 1.92, p = 0.18) and PHASE (F(2,44) = 0.69, p = 0.5) were not significant, and neither was ORDER. The Bonferroni test showed significant differences for the FA participants in phase 2 between sessions 1 and 2 (i.e. between the two sessions for the FA group during the secondary task);

  • Objects: the percentage of time toy stimuli were perceived varied significantly over PHASE (F(2,44) = 24.46, p < 0.001), while both SESSION (F(1,22) = 0.08, p = 0.78) and the interaction (F(2,44) = 0.52, p = 0.93) were not significant, and neither was ORDER. The Bonferroni test for phase showed differences between the 1st and 2nd phase, and between the 2nd and 3rd phase;

  • Face: the percentage of time face stimuli were perceived varied significantly both over SESSION (F(1,22) = 5.3, p = 0.03) and PHASE (F(2,44) = 46.28, p < 0.001), while the interaction (F(2,44) = 0.28, p = 0.76) was not significant, and neither was ORDER. The Bonferroni test for phase showed differences between the 1st and 2nd phase, and between the 2nd and 3rd phase.

Figure 12: Distribution of the perceived stimuli in different modalities during the interaction, shown across phases and sessions

Even though the participants’ subjective evaluation expressed neither a correlation between the adaptiveness of the robot and its likability, nor an awareness of there being a difference between the profiles at all, the implicit results pointed to the opposite. The manner of interacting with the robot, both in terms of frequency and of modalities used, changed noticeably, particularly when participants were occupied with the secondary task. More precisely, when the robot was in its adaptive profile, even when people were given another task to complete, they still managed to interact with the robot in parallel.

5 Discussion

Different individuals have different inclinations to interact with others, which can also be seen in their approach to interaction with robots. At the same time, different tasks might require different levels of human intervention (or robot requests for help). Creating a unique robot behavior (or personality) able to fit the task constraints and at the same time individual desires is an impossible challenge. Endowing the robot with the possibility to adapt to its partners’ preferences is therefore important to grant a certain degree of compliance with individual inclinations.

Our study tackled this issue by developing a personalized adaptive robot architecture. This architecture enabled the robot to adjust its behavior to suit different interaction profiles, using its internal motivation, which guided the robot to engage in and disengage from interaction accordingly, while also taking into account the behavior of the person interacting with it.

The caretaker study brought to light two different and opposing, but equally valuable, findings. Participants were not consciously, or at least not on an affective level, aware of experiencing two different robotic profiles. When asked explicitly about a difference between the two sessions of interaction, the majority of participants did not report one, or reported feeling that the second session had the more interactive robot profile. This, however, was strongly influenced by nearly all participants having reported that they preferred the second session of interaction, signifying that it was not the profile of the robot that influenced their feeling, but rather the knowledge gained on how to better interact with it and the prolonged time spent in interaction. Their manner of interacting with the robot, however, showed noticeable changes depending on the phase and session they were in, as well as on the robot’s behavior during the secondary task.

This has several implications, especially when designing different HRI scenarios. While this study addressed free-form interaction and how an adaptive robot would personalize to its caretaker, if one imagines porting this architecture to an HRI study where the robot would need to learn by processing information from visual or tactile stimuli, the findings of this study show that the robot would still be capable of receiving and processing the necessary information from the person, even if the person were not highly responsive or present at all times.

Additionally, the element of adaptability and personalization in the cognitive framework was not shown to bring any uncertainty or unpredictability. While on a conscious level participants remained unaware of it, the adaptability of the robot still impacted the efficacy of their interaction. Moreover, the presence of the critical and saturation thresholds promises another level of richness that could be added to the interaction.

A robot with a critical boundary can actively try to initiate interaction with the person, which could be useful not only in scenarios where a person might lose track of the robot or get distracted, but also in scenarios where a person might be very interested in interacting with the robot but too shy to attempt the first engagement.

Complementary to that, a saturation boundary is not only useful for evaluating how much a person is interested in restarting an interrupted interaction, but can also be a crucial element in multi-person HRI scenarios, or when the robot needs to accomplish some other task in addition to interacting with people. The saturation threshold in particular was not used to its full potential in our study, probably because the above-mentioned effects do not carry over to a 1-on-1 HRI scenario. A limitation of our study is that, even though the interaction was designed to be as free-form as possible, it was still a very simplified interaction scenario. This was also due to the limitations of the current state of the art: artificial cognitive agents (such as robots) are not yet at the level of replicating human cognitive abilities, and the aspect where this was felt the most was the absence of verbal interaction.


However, adaptivity is a very important building block of cognitive interaction, and endowing a humanoid robot like iCub with it, even in a scenario with behavior of lower cognitive intelligence, is already a first step towards personalized and cognitive human-robot interaction. Indeed, this effect can be seen even in children - their capabilities are limited (e.g. before 2 years of age they are not speech-proficient), yet they are cognitive agents that are very efficient at establishing adaptive interaction as a function of their partner, be it a peer or a caregiver. The hope and future direction of this research is that, by investigating other cognitive functionalities to implement and other scenarios of interaction, the adaptive framework will reach the point of a more individualized, long-term, generalized interaction between humans and robots.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ethics Statement

Participants signed an informed consent form approved by the ethical committee of the Liguria region (Comitato Etico Regione Liguria-Sezione 1), which informed them that their performance could be recorded with cameras and microphones and requested their consent to the use of the data for scientific purposes. All but three participants received a compensation of 10 euros, and all participants followed the same experimental procedure.

Author Contributions

All authors contributed to the design of the experiment. AT oversaw the data collection. AS and AT carried out the data analysis. All authors contributed to the writing and revision of the manuscript.

Funding

This work has been supported by a Starting Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (G.A. No. 804388, wHiSPER).

Acknowledgment

The authors would like to thank those who participated in the study, as well as Matthew Lewis and Imran Khan from the University of Hertfordshire for their help and availability.

References

  • Lindblom [1990] Lindblom B (1990) Explaining phonetic variation: A sketch of the H&H theory. In: Speech production and speech modelling, Springer, pp 403–439
  • Savidis and Stephanidis [2009] Savidis A, Stephanidis C (2009) Unified design for user interface adaptation.
  • Mehrabian and Epstein [1972] Mehrabian A, Epstein N (1972) A measure of emotional empathy. Journal of personality 40(4):525–543
  • Kidd et al [2006] Kidd CD, Taggart W, Turkle S (2006) A sociable robot to encourage social interaction among the elderly. In: Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006., IEEE, pp 3972–3976
  • Broadbent et al [2011] Broadbent E, Jayawardena C, Kerse N, Stafford RQ, MacDonald BA (2011) Human-robot interaction research to improve quality of life in elder care—an approach and issues. In: Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence
  • Sharkey [2014] Sharkey A (2014) Robots and human dignity: a consideration of the effects of robot care on the dignity of older people. Ethics and Information Technology 16(1):63–75
  • Wood et al [2017] Wood LJ, Zaraki A, Walters ML, Novanda O, Robins B, Dautenhahn K (2017) The iterative development of the humanoid robot kaspar: An assistive robot for children with autism. In: International Conference on Social Robotics, Springer, pp 53–63
  • Plaisant et al [2000] Plaisant C, Druin A, Lathan C, Dakhane K, Edwards K, Vice JM, Montemayor J (2000) A storytelling robot for pediatric rehabilitation. In: Proceedings of the fourth international ACM conference on Assistive technologies, ACM, pp 50–55
  • Admoni and Scassellati [2014] Admoni H, Scassellati B (2014) Data-driven model of nonverbal behavior for socially assistive human-robot interactions. In: Proceedings of the 16th International Conference on Multimodal Interaction, ACM, pp 196–199
  • Paiva et al [2014] Paiva A, Leite I, Ribeiro T (2014) Emotion modeling for social robots. The Oxford handbook of affective computing pp 296–308
  • Tanaka and Matsuzoe [2012] Tanaka F, Matsuzoe S (2012) Children teach a care-receiving robot to promote their learning: Field experiments in a classroom for vocabulary learning. Journal of Human-Robot Interaction 1(1)
  • Ramachandran et al [2016] Ramachandran A, Litoiu A, Scassellati B (2016) Shaping productive help-seeking behavior during robot-child tutoring interactions. In: The Eleventh ACM/IEEE International Conference on Human Robot Interaction, IEEE Press, pp 247–254
  • Jimenez et al [2015] Jimenez F, Yoshikawa T, Furuhashi T, Kanoh M (2015) An emotional expression model for educational-support robots. Journal of Artificial Intelligence and Soft Computing Research 5(1):51–57
  • Ahmad et al [2019] Ahmad MI, Mubin O, Shahid S, Orlando J (2019) Robot’s adaptive emotional feedback sustains children’s social engagement and promotes their vocabulary learning: a long-term child–robot interaction study. Adaptive Behavior 27(4):243–266
  • Vaufreydaz et al [2016] Vaufreydaz D, Johal W, Combe C (2016) Starting engagement detection towards a companion robot using multimodal features. Robotics and Autonomous Systems 75:4–16
  • Breazeal and Scassellati [1999] Breazeal C, Scassellati B (1999) How to build robots that make friends and influence people. In: 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, vol 2, pp 858–863
  • Cañamero et al [2006] Cañamero L, Blanchard AJ, Nadel J (2006) Attachment bonds for human-like robots. International Journal of Humanoid Robotics 3(03):301–320
  • Kishi et al [2014] Kishi T, Endo N, Nozawa T, Otani T, Cosentino S, Zecca M, Hashimoto K, Takanishi A (2014) Bipedal humanoid robot that makes humans laugh with use of the method of comedy and affects their psychological state actively. In: Robotics and Automation (ICRA), 2014 IEEE International Conference on, IEEE, pp 1965–1970
  • Tanevska [2016] Tanevska A (2016) Evaluation with emotions in a self-learning robot for interaction with children. Master’s thesis, FCSE, Skopje, Macedonia
  • Tanevska et al [2018] Tanevska A, Rea F, Sandini G, Sciutti A (2018) Designing an affective cognitive architecture for human-humanoid interaction. In: 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM, pp 253–254
  • Hiolle et al [2012] Hiolle A, Cañamero L, Davila-Ross M, Bard KA (2012) Eliciting caregiving behavior in dyadic human-robot attachment-like interactions. ACM Transactions on Interactive Intelligent Systems (TiiS) 2(1):3
  • Hiolle et al [2014] Hiolle A, Lewis M, Cañamero L (2014) Arousal regulation and affective adaptation to human responsiveness by a robot that explores and learns a novel environment. Frontiers in neurorobotics 8:17
  • Metta et al [2008] Metta G, Sandini G, Vernon D, Natale L, Nori F (2008) The iCub humanoid robot: an open platform for research in embodied cognition. In: 8th workshop on performance metrics for intelligent systems, ACM, pp 50–56
  • Feldman [2003] Feldman R (2003) Infant–mother and infant–father synchrony: The coregulation of positive arousal. Infant Mental Health Journal: Official Publication of The World Association for Infant Mental Health 24(1):1–23
  • Tanevska et al [in press] Tanevska A, Rea F, Sandini G, Cañamero L, Sciutti A (in press) A cognitive architecture for socially adaptable robots. In: 2019 9th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob)
  • Tanevska et al [2019] Tanevska A, Rea F, Sandini G, Cañamero L, Sciutti A (2019) Eager to learn vs. quick to complain? How a socially adaptive robot architecture performs with different robot personalities. In: IEEE SMC’19 special session on Adaptation and Personalization in Human-Robot Interaction
  • Vernon [2014] Vernon D (2014) Artificial cognitive systems: A primer. MIT Press
  • Trafton et al [2013] Trafton JG, Hiatt LM, Harrison AM, Tamborello II FP, Khemlani SS, Schultz AC (2013) ACT-R/E: An embodied cognitive architecture for human-robot interaction. Journal of Human-Robot Interaction 2(1):30–55
  • Laird et al [1987] Laird JE, Newell A, Rosenbloom PS (1987) Soar: An architecture for general intelligence. Artificial intelligence 33(1):1–64
  • Rosenbloom et al [2016] Rosenbloom PS, Demski A, Ustun V (2016) The Sigma cognitive architecture and system: Towards functionally elegant grand unification. Journal of Artificial General Intelligence 7(1):1–103
  • Cannata et al [2008] Cannata G, Maggiali M, Metta G, Sandini G (2008) An embedded artificial skin for humanoid robots. In: 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, IEEE, pp 434–438
  • Baltrušaitis et al [2016] Baltrušaitis T, Robinson P, Morency LP (2016) OpenFace: an open source facial behavior analysis toolkit. In: Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, IEEE, pp 1–10
  • Ekman et al [1978] Ekman P, Friesen WV, Hager JC (1978) Facial Action Coding System (FACS): A technique for the measurement of facial action. Consulting Psychologists Press, Palo Alto
  • Parmiggiani et al [2012] Parmiggiani A, Maggiali M, Natale L, Nori F, Schmitz A, Tsagarakis N, Victor JS, Becchi F, Sandini G, Metta G (2012) The design of the iCub humanoid robot. International journal of humanoid robotics 9(04):1250027
  • Tanevska et al [2018] Tanevska A, Rea F, Sandini G, Sciutti A (2018) Are adults sufficiently emotionally expressive to engage in adaptive interaction with an affective robot? In: 2018 Social cognition in humans and robots, socSMCs-EUCognition workshop, FET Proactive H2020 project "Socializing Sensorimotor Contingencies - socSMCs"
  • Aron et al [1992] Aron A, Aron EN, Smollan D (1992) Inclusion of other in the self scale and the structure of interpersonal closeness. Journal of personality and social psychology 63(4):596
  • Bartneck et al [2009] Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International journal of social robotics 1(1):71–81