Robots have progressively migrated from purely industrial environments to more social settings where they interact with humans in quotidian activities such as education [Brown2013], companionship [Belpaeme2013, Hoffman2013], or health care and therapy [Cabibihan2013, Kozima2009]. In these scenarios, on top of performing tasks related to the specific application, there may be a need for the robots to effectively interact with people in an entertaining, engaging, or anthropomorphic manner [Breazeal2003].
The need for enticing interactions between social robots and humans becomes especially pronounced in artistic applications. Robots have been progressively intertwined with different forms of artistic expression, where they are used, among others, to interactively create music [Hoffman2010], dance [Bi2018, LaViers2018, Nakazawa2002, Shinozaki2008], act in plays [Lee2014, Perkowski2005, Sunardi2018], support performances [Ackerman2014], or be the object of art exhibits by themselves [Dean2008]. As in the traditional expressions of these performing arts, where human artists instill expressive and emotional content [Camurri2004, Juslin2005], robots are required to convey artistic expression and emotion through their actions.
While expressive interactions have been extensively studied in the context of performing arts, the focus has been primarily on anthropomorphic robots, especially humanoids [Lee2014, Or2009, Perkowski2013]. However, for faceless robots or robots with limited degrees of freedom, for which mimicking human movement is not an option, creating expressive behaviors can pose increased difficulty [Bretan2015, Hoffman2008, Schoellig2014]. We are interested in exploring the expressive capabilities of a swarm of miniature mobile robots, as opposed to robots with some kind of anthropomorphism, for which there is already a preconceived understanding of emotive expressiveness. This choice is driven in part by the increased prevalence of multi-robot applications and the large-scale human-robot teams envisioned to result from them [Goodrich2007HRIsurvey, Kolling2016, Sheridan2016], and in part by the expressive possibilities of the swarm as a collective in contrast to the robots as individuals. While using teams of mobile robots to create artistic effects in performances is not new [Ackerman2014, Alonso-Mora2014], our aim is to provide a framework to use these types of robotic teams in performances without the need for a choreographer to specify the parameters of the robots' movements, as in [Schoellig2014].
| Emotion | Shape Features | Movement Features | Size |
|---|---|---|---|
| Happiness | roundness, curvilinearity [Collier1996] | smoothness [Lee2007] | big [DeRooij2013] |
| Surprise | roundness [Collier1996] | | very big [DeRooij2013] |
| Sadness | roundness [Collier1996] | small, slow [Pollick2001, Rime1985] | small [DeRooij2013] |
| Anger | | large, fast, angular [Pollick2001] | |
| Fear | downward pointing triangles [Aronoff2006] | small, slow [Pollick2001, Rime1985] | |
Social psychology has extensively studied which motion and shape descriptors are associated with different fundamental emotions, e.g. [Collier1996, Lee2007, Pollick2001, Rime1985, Ekman1993]. In this paper, we study how such attributes can be incorporated into the movements of a swarm of mobile robots to represent different emotions. The paper is organized as follows: In Section II, we outline the motion and shape characteristics psychologically linked to the different fundamental emotions. The behaviors included in the user study, implemented on the swarm according to the features described in the social psychology literature, are characterized in Section III. The procedure and results of the study conducted with human subjects are presented in Section IV, along with the discussion. Section V concludes the paper.
II Emotionally Expressive Movement
For robotic swarms to participate in artistic expositions and effectively convey emotional content, the swarm’s behavior when depicting a particular emotion should be recognizable by the audience, thus producing the effect intended by the artist. However, the lack of anthropomorphism in a robotic swarm can pose a challenge when creating expressive motions for human spectators. In this section, we present a summary of motion and shape features that have been linked to different emotions in the social psychology literature, which will serve as inspiration to create expressive behaviors for swarms of mobile robots.
In this study, we focus on the so-called fundamental emotions [Ekman1993, Izard2009]—i.e. happiness, sadness, anger, fear, surprise and disgust—to produce a tractable set of emotion behaviors to be executed by the robotic swarm. An emotion is considered fundamental or basic if it is inherent to human mentality and adaptive behavior, and remains recognizable across cultures [Izard1977]. In addition, fundamental emotions provide a basis for a wider range of human emotions, which appear at the intersection of the basic emotions with varying intensities [Plutchik2001].
The robotic system considered for this study is a swarm of miniature differential-drive robots, the GRITSBots [Pickem15]. As shown in Fig. 1, the GRITSBots are faceless robots that do not possess any anthropomorphic features. For this reason, we draw inspiration from abstract shape and motion descriptors associated with different fundamental emotions [DeRooij2013] to create different swarm behaviors. To this end, Table I presents a summary of shape, movement and size attributes of abstract objects associated with some of the fundamental emotions, as compiled in different studies [Collier1996, Lee2007, DeRooij2013, Pollick2001, Rime1985, Aronoff2006].
While the summary in Table I provides a good starting point for generating swarm behaviors for most fundamental emotions, motion characterizations of disgust remain scarce in the literature. In order to get some intuition about which traits the swarm behavior should portray when embodying this emotion, we direct our attention towards characterizations associated with emotion valence. In this context, the term valence designates the intrinsic attractiveness (positive valence) or aversiveness (negative valence) of an event, object, or situation [Frijda1986]. The valence of an emotion thus characterizes its positive or negative connotation. Among the fundamental emotions, happiness and surprise have positive valence, while the remaining four—sadness, fear, disgust and anger—are classified under negative valence [Russell1980]. The shape and motion characterizations of positive and negative emotion valences in Table II serve as a basis to design the swarm behavior associated with disgust.
| Valence | Shape Features | Movement Features |
|---|---|---|
| Positive | roundness | rounded movement trace |
| Negative | angularity | angular movement trace |
The behavior of a robotic swarm depends on how the interactions are established between members of the swarm and what control commands are executed by the individuals based on the information exchanged in those interactions, as illustrated in Fig. 2. While the GRITSBots as individuals cannot change their shape, the collective behavior of the swarm may embody the shape and size attributes included in Tables I and II. On the other hand, the movement features in Tables I and II can be depicted through the movement trace—interpreted as the trajectory taken by the robot over time—that each individual robot executes as it progresses towards the collective shape. In the next section, we describe how all these attributes are implemented in the controller of the robots to produce the behaviors that embody the different fundamental emotions.
III Swarm Behavior Design
For our swarm of robots to be expressive, we need to decide which interactions a robot should establish with the robots in its vicinity and its environment, and which control law the robot should execute with the information obtained through those interactions to produce an appropriate swarm behavior. In this paper, we draw inspiration from standard algorithms for multi-robot teams, namely cyclic pursuit [Justh2003, Marshall2004, Ramirez2009] and coverage control [Cortes04, DiazMercado2015], to design the interactions and the control laws for the swarm. This section describes how the shape and movement features described in Section II are incorporated into the control laws of a swarm of 15 GRITSBots in order to create expressive behaviors.
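To make the interaction structure concrete, the following minimal sketch illustrates a cyclic pursuit rule of the kind cited above. It is not the controller deployed on the GRITSBots; the gain `k` and time step `dt` are illustrative choices.

```python
import numpy as np

def cyclic_pursuit_step(x, k=1.0, dt=0.02):
    """One Euler step of cyclic pursuit: robot i chases robot i+1 (mod N).

    x : (N, 2) array of planar robot positions.
    """
    u = k * (np.roll(x, -1, axis=0) - x)  # u_i = k (x_{i+1} - x_i)
    return x + dt * u
```

A convenient property of this rule is that the swarm centroid is invariant: the pairwise pursuit terms cancel when summed around the cycle, so the collective circles in place rather than drifting.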
III-A Collective Behavior
The attributes presented in Section II characterize how the motion and shape of an abstract object can convey emotion. Here we treat the GRITSBots as objects capable of reconfiguring themselves on a stage in order to generate an expressive behavior.
Among the attributes presented in Tables I and II, it seems natural for those related to shape and size to be depicted by the collective behavior of the swarm, given that the individual robots can move within the planar environment but cannot change their individual shape. To this end, the feature of roundness is incorporated into the behaviors of happiness, surprise and sadness. These behaviors are thus based on the robots following some kind of circular contour, as illustrated in Figs. 3, 4 and 5, respectively. In the case of the happiness behavior, a sinusoid is superimposed on the base shape of a circle, producing ripples on the circle contour to embody the curvilinearity feature, and the corresponding size attribute—big—is incorporated through the circle dimensions with respect to the domain. As for the surprise emotion, the very big size attribute is included in the behavior by making the radius of the circle grow with time, thus producing a sensation of increasing size. Finally, the circular path dimension is reduced (small attribute) in the case of the sadness behavior, which also incorporates the slow attribute by making the robots follow the contour at a very low speed.
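The three circular contours described above can be sketched as follows; the radii, amplitude, frequency and growth horizon are illustrative placeholders rather than the values used in the actual behaviors.

```python
import numpy as np

def happiness_contour(n=200, R=1.0, A=0.1, freq=8):
    """Circle with a superimposed sinusoid: the ripples embody curvilinearity."""
    g = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = R + A * np.sin(freq * g)
    return np.column_stack((r * np.cos(g), r * np.sin(g)))

def surprise_contour(t, n=200, R0=0.3, Rf=1.2, T=10.0):
    """Circle whose radius grows with time, from R0 at t=0 to Rf at t=T."""
    g = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = R0 + (Rf - R0) * min(t, T) / T
    return np.column_stack((r * np.cos(g), r * np.sin(g)))

def sadness_contour(n=200, R=0.3):
    """Small plain circle; slowness is imposed by the robot-level controller."""
    g = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack((R * np.cos(g), R * np.sin(g)))
```

Each function returns waypoints on the respective contour, which the robots can then track.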
The scarcity of shape characterizations for the other three emotions—fear, disgust and anger—motivates a different approach for the design of the collective behavior of the swarm. For these emotions, we specify which areas of the domain the robots should concentrate around by defining a density function, φ, that characterizes the areas of the domain where we want the robots to group. In all three behaviors, the robots are initially distributed at random positions within the domain and then spread according to the particular density function selected. In the case of fear, the density function is uniform across the domain, which makes the robots scatter as far as possible from their neighbors, as shown in Fig. 6. For the disgust behavior, Fig. 7, the density is chosen to be high around the boundaries, making the robots move from the center towards the exterior of the domain (the stage), giving the sensation of animosity between robots. Finally, in order to show anger, the robots are made to stay close to the center of the domain. This strategy, combined with the individual robot control that will be explained in Section III-B, is intended to give the sensation of a heated environment, a riot.
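The three densities could be realized as below on a square stage [-1, 1]²; the paper specifies only their qualitative shape (uniform, boundary-peaked, center-peaked), so the functional forms and scale parameters here are our own assumptions.

```python
import numpy as np

def density_fear(q):
    """Uniform density: the robots spread as far from each other as possible."""
    return np.ones(q.shape[0])

def density_disgust(q, sharpness=8.0):
    """High near the stage boundary: the robots retreat toward the exterior."""
    d_boundary = 1.0 - np.max(np.abs(q), axis=1)  # distance to nearest edge
    return np.exp(-sharpness * d_boundary)

def density_anger(q, sigma=0.3):
    """Gaussian peak at the stage center: the robots crowd together."""
    return np.exp(-np.sum(q**2, axis=1) / (2.0 * sigma**2))
```

Each function maps an (M, 2) array of sample points to M positive weights, ready to be plugged into a coverage controller.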
The control laws needed to achieve these behaviors are explained in detail in Appendix A. In each of those laws, a robot in the swarm is treated as a point that can move omnidirectionally. However, the GRITSBots (see Fig. 1) are differential drive robots and, thus, are unable to move perpendicularly to the direction of their wheels. This movement restriction is used to our advantage in the individual control strategies described in Section III-B, where we exploit the limitations on the planar movement of the differential drive robots to implement the movement features in Tables I and II.
III-B Individual Robot Control
The swarm behavior strategies and corresponding control laws introduced in Section III-A and detailed in Appendix A treat each robot in the swarm as if it could move omnidirectionally. That is, if we denote by x_i the position of Robot i, then its movement could be expressed using single integrator dynamics,

  ẋ_i = u_i,    (1)

with u_i denoting the control action given by the chosen behavior. However, the differential drive configuration of the GRITSBot implies that it cannot execute single integrator dynamics directly. Instead, the motion of a differential drive robot is described by the so-called unicycle dynamics,

  ẋ_i = v_i cos θ_i,   ẏ_i = v_i sin θ_i,   θ̇_i = ω_i,

with (x_i, y_i) being the robot's Cartesian position and θ_i its orientation in the plane. The control inputs, v_i and ω_i, correspond to the linear and angular velocities of the robot, respectively, as shown in Fig. 2.

In order to convert the input in (1) into executable unicycle control commands, we use the near-identity diffeomorphism in [Olfati-Saber2002], described in detail in Appendix B. Using this transformation between the single integrator and the unicycle dynamics, we get to tune two scalar parameters, ℓ and κ, that regulate how smooth the movement trace of each robot is and how fast it travels when executing a certain control input, respectively. Figure 9 illustrates the differences between directly executing the single integrator dynamics in (1) and performing two different diffeomorphisms on the single integrator control value, u_i. We can observe how choosing a small value of ℓ results in an angular movement trace, while a smooth trajectory is observed when selecting a bigger value for this parameter.
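A minimal sketch of such a single-integrator-to-unicycle conversion is given below, following the standard near-identity diffeomorphism construction; the parameter names `l` (trace smoothness) and `k` (speed) mirror the two scalar parameters discussed above and are our own labels.

```python
import numpy as np

def si_to_uni(u, theta, l=0.05, k=1.0):
    """Map a single-integrator velocity u = (ux, uy) to unicycle commands (v, w).

    The input is interpreted as the desired velocity of a point a distance l
    ahead of the robot; small l yields sharp, angular traces, large l smooth
    ones, while k scales the overall speed.
    """
    ux, uy = u
    v = k * (np.cos(theta) * ux + np.sin(theta) * uy)         # linear velocity
    w = (k / l) * (-np.sin(theta) * ux + np.cos(theta) * uy)  # angular velocity
    return v, w
```

For example, a robot facing along the x-axis (theta = 0) given u = (1, 0) drives straight ahead with zero turn rate, while u = (0, 1) produces pure rotation toward the commanded direction.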
Given the ability to regulate the angularity and the speed of the movement trace of a robot, we are in a position to implement the movement features included in Tables I and II. The smoothness feature of the happiness emotion in Table I is translated into a smooth and fast individual control. Analogous diffeomorphism parameters are chosen to show surprise, given the roundness and very big size attributes associated with this emotion. As for sadness, even though it is a negative emotion and Table II associates an angular movement trace with such valence, we focus on the more specific characterizations provided in Table I and make the motion slow and smooth. We can observe how, indeed, the trajectories depicted in Figs. 3, 4 and 5 are smooth, given the choice of a large value of ℓ in the diffeomorphism. The speed of the robots is illustrated by the total distance covered in time: while significant distances are traveled within 4 seconds for the behaviors of happiness and surprise, the robots in the sadness behavior move very little in 8 seconds.
| Emotion | Swarm Behavior | Robot Control |
|---|---|---|
| Happiness | sinusoid over circle | fast, smooth |
| Surprise | expanding circle | fast, smooth |
| Sadness | small circle | slow, smooth |
| Fear | coverage: uniform φ | slow, angular |
| Disgust | coverage: φ high at boundaries | slow, angular |
| Anger | coverage: Gaussian φ at center | fast, angular |
Table II associates an angular movement trace with the emotions with negative valence. Consequently, a controller that produces an angular movement trace, corresponding to a small value of ℓ in the diffeomorphism, is selected for the remaining emotions—fear, disgust and anger. The movement features presented in Table I for anger and fear are translated into fast and slow control, respectively. Given the lack of characterization for the speed of disgust, we opt to implement a slow motion. We can observe how, in Figs. 6-8, the trajectory traces have sharp turns and angularities, especially in the case of the anger behavior, where the effect is accentuated by a proportional gain corresponding to a large velocity.
IV User Study
The behaviors described in Section III were implemented in simulation on a team of 15 differential drive robots, producing a video for each of the emotions. Snapshots generated from each of the videos, along with the URL links, are included in Figs. 3 to 8.
A user study was conducted to evaluate if the swarm interactions and individual robot control strategies selected in Section III produce expressive swarm behaviors that correspond to the fundamental emotions. The hypothesis to test was the following,
- H1: Overall Classification.
Participants will perform better than chance in identifying the fundamental emotion each swarm behavior is intended to represent.
A total of 45 subjects (32 males and 13 females) participated in the study, with 29 of them not having any academic or professional background in robotics. After responding to the demographic questions, each subject was shown 6 videos, each of them corresponding to the behaviors designed for each of the fundamental emotions. The videos were shown sequentially, one behavior at a time, and in a random order. After watching each video, the human subject was presented with a multiple choice (single answer) question to select the emotion that best described the movement of the robots in the video, with the possible answers being the 6 fundamental emotions. The users had no time limit when classifying the videos and they were allowed to rewatch them as many times as desired.
IV-B Results and Discussion
The survey responses were collected and are summarized in Table IV. The columns are labeled with the proposed emotion, and each of them contains the responses given to the video of the behavior designed for that fundamental emotion. In the confusion matrix in Table IV, the emotions are ordered counterclockwise from positive to negative valence according to the circumplex model in Fig. 3.
The diagonal terms of the confusion matrix, boldfaced in Table IV, correspond to the percentage of responses that identified the emotion in the video as the one intended by the authors. All the diagonal values are much higher than the percentage given by chance (16.67%), and in most cases—happiness, sadness, anger and surprise—the value reaches an absolute majority (greater than 50%). In the cases of fear and disgust, while the relative majority of the responses identified the emotion according to our hypothesis (40% for both emotions), the values are lower than 50%. This can potentially be caused by the proximity of these emotions in terms of valence and arousal, as illustrated in Fig. 3.
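As a sanity check on the better-than-chance comparison (not part of the original analysis), the exact one-sided binomial tail probability can be computed from the study's numbers, 45 subjects and a 1/6 chance level; the lowest diagonal value, 40%, corresponds to 18 correct responses out of 45.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even the lowest observed accuracy (40% for fear and disgust, i.e. 18/45)
# is extremely unlikely to arise under pure chance guessing (p = 1/6).
p_value = binom_tail(45, 18, 1 / 6)
```

The resulting tail probability is far below conventional significance thresholds, consistent with the claim that all six behaviors were recognized above chance.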
Based on the demographic data collected, the accuracy of the results was not significantly affected by the robotics background of the subjects. As shown in Fig. 11, for the 4 emotions for which the majority of the aggregate responses in Table IV aligned with the hypothesis—i.e., happiness, surprise, anger and sadness—subjects with and without a background in robotics identified the emotions according to the hypothesis in more than 50% of the cases. For the emotions of fear and disgust, for which the lowest accuracies are observed in Table IV, the responses aligned better with the hypothesis for those subjects without a robotics background, but no significant deviations were observed between the two groups. In contrast, when performing an analysis by gender, the accuracy of the responses with respect to the hypothesis was consistently higher for the female subjects, as shown in Fig. 12: for all the swarm behaviors, the accuracy was higher among the female participants, exceeding 50% for 5 of the 6 emotions. Only in the case of fear was the accuracy of the female participants slightly under the majority threshold (46.15%). Thus, while the male responses still supported hypothesis H1, the results show that the motion and shape characterizations selected for the swarm behaviors were more clearly identified by the female observers than by the male ones.
As seen above, the data collected in the user study consistently supports hypothesis H1, confirming that the swarm behaviors and individual robot control paradigms designed in Section III effectively depict each of the fundamental emotions. The behaviors considered in this study thus provide a collection of motion primitives for robotic swarms to convey emotions in artistic expositions.
V Conclusions

In this paper, we investigated how motion and shape descriptors from social psychology can be integrated into the control laws of a swarm of robots to express fundamental emotions. Based on such descriptors, a series of swarm behaviors were developed, and their effectiveness in depicting each of the fundamental emotions was analyzed in a user study. The results of the survey showed that, for every swarm behavior created, a relative majority of the subjects classified the behavior with the corresponding emotion according to the hypothesis, with this ratio exceeding 50% for 4 of the 6 fundamental emotions. Some confusion was observed in the classification of the behaviors of fear and disgust, which can be attributed both to the similarity between these two emotions in terms of valence and arousal, and to the scarcity of descriptors in the literature for the disgust emotion, which complicated the characterization of its associated swarm behavior. Further analysis of the results showed that the robotics background of the participants had no influence on the classification of the behaviors, while the responses of the female participants were more aligned with the hypothesis than those of their male counterparts. In conclusion, the motion and shape descriptors extracted from social psychology afforded the development of distinct expressive swarm behaviors, identifiable by human observers as the fundamental emotions, thus providing a starting point for the design of expressive behaviors for robotic swarms to be used in artistic expositions.
Appendix A Swarm behaviors
In Section III-A, a series of swarm behaviors were designed based on the movement and shape attributes associated with the different fundamental emotions. This appendix includes the mathematical expressions of the control laws used to produce the different swarm behaviors. Note that all the control laws included here treat each robot in the swarm as a point that can move omnidirectionally according to single integrator dynamics as in (1). The transformation from single integrator dynamics to unicycle dynamics is discussed in detail in Appendix B.
A-A Happiness

The swarm movement selected for the happiness behavior consists of the robots following the contour of a circle with a superimposed sinusoid. This shape is illustrated in Fig. 3 and can be parameterized as

  p(γ) = (R + A sin(ν γ)) [cos γ, sin γ]ᵀ,   γ ∈ [0, 2π),

where R is the radius of the main circle and A and ν are the amplitude and frequency of the superimposed sinusoid, respectively. Fig. 3 shows the shape for one particular choice of the frequency ν.
If we have a swarm of N robots, we can initially position Robot i according to

  x_i(0) = p(γ_i(0)),   with   γ_i(0) = 2πi/N,   i = 1, …, N.    (3)

Then the team will depict the desired shape if each robot follows a point evolving along the contour in (A-A),

  u_i = κ (p(γ_i(t)) − x_i),

with γ_i a function of time t,

  γ_i(t) = γ_i(0) + ω t,    (4)

where ω sets the angular speed of the tracked point along the contour.
A-B Surprise

In the case of the surprise emotion, each robot follows a point moving along a circle with an expanding radius, as in Fig. 4. Such a shape can be parameterized as

  p(γ, t) = R(t) [cos γ, sin γ]ᵀ,   γ ∈ [0, 2π),

with R(t) increasing monotonically over time to create a radius that expands from its initial value, R₀, to its final value, R_f.
Analogously to the procedure described in Section A-A, in this case the robots can be initially located at

  x_i(0) = p(γ_i(0), 0),

with γ_i(0) given by (3). The controller for each robot is then given by

  u_i = κ (p(γ_i(t), t) − x_i),

with γ_i(t) as in (4).
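The contour-following strategy of the happiness and surprise behaviors can be sketched in a few lines; this is a simplified single-integrator version in which the gain, the phase rate and the example circular contour are placeholders.

```python
import numpy as np

def contour_controller(x, t, contour, N, omega=0.5, kappa=1.0):
    """Single-integrator inputs driving each robot to a point sliding along a contour.

    x       : (N, 2) robot positions
    contour : callable gamma -> (2,) point on the desired closed curve
    """
    gamma0 = 2.0 * np.pi * np.arange(N) / N        # evenly spaced initial phases
    targets = np.array([contour(g) for g in gamma0 + omega * t])
    return kappa * (targets - x)                   # u_i = kappa (p(gamma_i(t)) - x_i)

def circle(g, R=1.0):
    """Example contour: a plain circle of radius R."""
    return np.array([R * np.cos(g), R * np.sin(g)])
```

Integrating these inputs with a small Euler step contracts each robot toward its moving target point on the curve.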
A-D Anger, Fear and Disgust
For the remaining emotions—anger, disgust and fear—the swarm coordination is based on the coverage control strategy, which allows the user to define which areas the robots should concentrate around.
If we denote by D the domain of the robots, the areas where we want to position the robots can be specified by defining a density function, φ : D → (0, ∞), that assigns higher values to the areas where we want the robots to concentrate. We can make the robots distribute themselves according to this density function by implementing a standard coverage controller such as the one in [Cortes04],

  u_i = κ (c_i(x) − x_i),

where x = (x_1, …, x_N) denotes the aggregate positions of the robots and κ is a proportional gain. In the controller in (A-D), c_i(x) denotes the center of mass of the Voronoi cell of Robot i with respect to the density φ,

  c_i(x) = ( ∫_{V_i(x)} q φ(q) dq ) / ( ∫_{V_i(x)} φ(q) dq ),

with the Voronoi cell being characterized as

  V_i(x) = { q ∈ D : ‖q − x_i‖ ≤ ‖q − x_j‖ for all j ≠ i }.
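A grid-discretized sketch of one step of this coverage controller is shown below; the stage [-1, 1]², the resolution and the gain are illustrative, and the Voronoi centroids are approximated by assigning grid samples to their nearest robot.

```python
import numpy as np

def coverage_step(x, phi, kappa=1.0, dt=0.1, res=60):
    """One Euler step of u_i = kappa (c_i - x_i) on the square [-1, 1]^2.

    x   : (N, 2) robot positions
    phi : callable mapping (M, 2) sample points to (M,) positive density values
    """
    g = np.linspace(-1.0, 1.0, res)
    gx, gy = np.meshgrid(g, g)
    q = np.column_stack((gx.ravel(), gy.ravel()))            # grid samples of D
    owner = np.argmin(
        np.linalg.norm(q[:, None, :] - x[None, :, :], axis=2), axis=1
    )                                                        # Voronoi assignment
    w = phi(q)
    c = x.copy()
    for i in range(x.shape[0]):
        m = owner == i
        if m.any():                                          # weighted centroid c_i
            c[i] = (q[m] * w[m, None]).sum(axis=0) / w[m].sum()
    return x + dt * kappa * (c - x)
```

With a uniform density, repeated steps spread the robots out over the stage; a boundary-peaked or center-peaked density instead pulls them outward or inward, matching the disgust and anger behaviors.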
Appendix B Individual Robot Control
The swarm behaviors described in Appendix A assume that each robot in the swarm can move omnidirectionally according to

  ẋ_i = u_i,    (7)

with x_i the Cartesian position of Robot i in the plane and u_i the desired velocity. However, the GRITSBot (Fig. 1) has a differential-drive configuration and cannot move omnidirectionally, as its motion is constrained in the direction perpendicular to its wheels. Instead, its motion can be expressed through the unicycle dynamics,

  ẋ_i = v_i cos θ_i,   ẏ_i = v_i sin θ_i,   θ̇_i = ω_i,

with θ_i the orientation of Robot i, and v_i and ω_i the linear and angular velocities executable by the robot, as shown in Fig. 15.
To convert the single integrator input u_i into unicycle commands, we use the near-identity diffeomorphism in [Olfati-Saber2002],

  v_i = κ [cos θ_i, sin θ_i] u_i,   ω_i = (κ/ℓ) [−sin θ_i, cos θ_i] u_i.

A graphical representation of this transformation is included in Fig. 15: the input is applied to a point located at a distance ℓ in front of the robot, x̃_i = x_i + ℓ [cos θ_i, sin θ_i]ᵀ, which moves according to the single integrator dynamics in (7). The effect of the parameter ℓ on the movement of the robot is illustrated in Fig. 9. The parameter κ acts as a proportional gain.