From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm?

03/28/2019 ∙ by María Santos, et al.

This paper explores the expressive capabilities of a swarm of miniature mobile robots within the context of inter-robot interactions and their mapping to the so-called fundamental emotions. In particular, we investigate how motion and shape descriptors that are psychologically associated with different emotions can be incorporated into different swarm behaviors for the purpose of artistic expositions. Based on these characterizations from social psychology, a set of swarm behaviors is created, where each behavior corresponds to a fundamental emotion. The effectiveness of these behaviors was evaluated in a survey in which the participants were asked to associate different swarm behaviors with the fundamental emotions. The results of the survey show that most of the research participants assigned to each video the emotion intended to be portrayed by design. These results confirm that abstract descriptors associated with the different fundamental emotions in social psychology provide useful motion characterizations that can be effectively transformed into expressive behaviors for a swarm of simple ground mobile robots.


I Introduction

Robots have progressively migrated from purely industrial environments to more social settings where they interact with humans in quotidian activities such as education [Brown2013], companionship [Belpaeme2013, Hoffman2013], or health care and therapy [Cabibihan2013, Kozima2009]. In these scenarios, on top of performing tasks related to the specific application, there may be a need for the robots to effectively interact with people in an entertaining, engaging, or anthropomorphic manner [Breazeal2003].

The need for enticing interactions between social robots and humans becomes especially pronounced in artistic applications. Robots have been progressively intertwined with different forms of artistic expression, where they are used, among others, to interactively create music [Hoffman2010], dance [Bi2018, LaViers2018, Nakazawa2002, Shinozaki2008], act in plays [Lee2014, Perkowski2005, Sunardi2018], support performances [Ackerman2014], or be the object of art exhibits by themselves [Dean2008]. As in the traditional expressions of these performing arts, where human artists instill expressive and emotional content [Camurri2004, Juslin2005], robots are required to convey artistic expression and emotion through their actions.

While expressive interactions have been extensively studied in the context of performing arts, the focus has been primarily on anthropomorphic robots, especially humanoids [Lee2014, Or2009, Perkowski2013]. However, for faceless robots or robots with limited degrees of freedom for which mimicking human movement is not an option, creating expressive behaviors can pose increased difficulty [Bretan2015, Hoffman2008, Schoellig2014]. We are interested in exploring the expressive capabilities of a swarm of miniature mobile robots, as opposed to robots with some kind of anthropomorphism, for which there is already a preconceived understanding of emotive expressiveness. This choice is driven in part by the increased prevalence of multi-robot applications and the envisioned, resulting large-scale human-robot teams [Goodrich2007HRIsurvey, Kolling2016, Sheridan2016]; and in part by the expressive possibilities of the swarm as a collective in contrast to the robots as individuals. While using teams of mobile robots to create artistic effects in performances is not something new [Ackerman2014, Alonso-Mora2014], our aim is to provide a framework to use these types of robotic teams in performances without the need for a choreographer to specify the parameters of the robots’ movements, as in [Schoellig2014].

Emotion   | Shape Features                            | Movement Features                   | Size
Happiness | roundness, curvilinearity [Collier1996]   | smoothness [Lee2007]                | big [DeRooij2013]
Surprise  | roundness [Collier1996]                   |                                     | very big [DeRooij2013]
Sadness   | roundness [Collier1996]                   | small, slow [Pollick2001, Rime1985] | small [DeRooij2013]
Anger     |                                           | large, fast, angular [Pollick2001]  |
Fear      | downward pointing triangles [Aronoff2006] | small, slow [Pollick2001, Rime1985] |
TABLE I: Movement and shape attributes associated with different fundamental emotions.

Social psychology has extensively studied which motion and shape descriptors are associated with different fundamental emotions, e.g. [Collier1996, Lee2007, Pollick2001, Rime1985, Ekman1993]. In this paper, we study how such attributes can be incorporated into the movements of a swarm of mobile robots to represent different emotions. The paper is organized as follows: In Section II, we outline the motion and shape characteristics psychologically linked to the different fundamental emotions. The behaviors included in the user study, implemented on the swarm according to the features described in the social psychology literature, are characterized in Section III. The procedure and results of the study conducted with human subjects are presented in Section IV, along with the discussion. Section V concludes the paper.

II Emotionally Expressive Movement

For robotic swarms to participate in artistic expositions and effectively convey emotional content, the swarm’s behavior when depicting a particular emotion should be recognizable by the audience, thus producing the effect intended by the artist. However, the lack of anthropomorphism in a robotic swarm can pose a challenge when creating expressive motions for human spectators. In this section, we present a summary of motion and shape features that have been linked to different emotions in the social psychology literature, which will serve as inspiration to create expressive behaviors for swarms of mobile robots.

In this study, we focus on the so-called fundamental emotions [Ekman1993, Izard2009]—i.e. happiness, sadness, anger, fear, surprise and disgust—to produce a tractable set of emotion behaviors to be executed by the robotic swarm. An emotion is considered fundamental or basic if it is inherent to human mentality and adaptive behavior, and remains recognizable across cultures [Izard1977]. In addition, fundamental emotions provide a basis for a wider range of human emotions, which appear at the intersection of the basic emotions with varying intensities [Plutchik2001].

Fig. 1: The GRITSBot, a 3 cm × 3 cm miniature mobile differential-drive robot. The robotic swarm considered in this study is composed of 15 GRITSBots. The top view of these robots is used in the simulations shown to the study participants when evaluating the different swarm behaviors.

The robotic system considered for this study is a swarm of miniature differential-drive robots, the GRITSBots [Pickem15]. As shown in Fig. 1, the GRITSBots are faceless robots that do not possess any anthropomorphic features. For this reason, we draw inspiration from abstract shape and motion descriptors associated with different fundamental emotions [DeRooij2013] to create different swarm behaviors. To this end, Table I presents a summary of shape, movement and size attributes of abstract objects associated with some of the fundamental emotions, as compiled in different studies [Collier1996, Lee2007, DeRooij2013, Pollick2001, Rime1985, Aronoff2006].

While the summary in Table I provides a good starting point for generating swarm behaviors for most fundamental emotions, motion characterizations of disgust remain scarce in the literature. In order to get some intuition about which traits the swarm behavior should portray when embodying this emotion, we direct our attention towards characterizations associated with emotion valence, that is, the intrinsic attractiveness (positive valence) or aversiveness (negative valence) of an event, object, or situation [Frijda1986]. The valence of an emotion thus characterizes its positive or negative connotation. Among the fundamental emotions, happiness and surprise have positive valence, while the remaining four (sadness, fear, disgust and anger) are classified under negative valence [Russell1980]. The shape and motion characterizations of positive and negative emotion valences in Table II serve as a basis to design the swarm behavior associated with disgust.

Valence  | Shape Features | Movement Features
Positive | roundness      | rounded movement trace
Negative | angularity     | angular movement trace
TABLE II: Movement and shape attributes associated with the valence of an emotion [Collier1996, Aronoff2006].
Fig. 2: The behavior of a robotic swarm depends on which interactions are considered between the robots, which information is exchanged through those interactions, and how each robot acts on such information. Different interaction schemes and control laws produce distinctly different swarm behaviors.

The behavior of a robotic swarm depends on how the interactions are established between members of the swarm and what control commands are executed by the individuals based on the information exchanged in those interactions, as illustrated in Fig. 2. While the GRITSBots as individuals cannot change their shape, the collective behavior of the swarm may embody the shape and size attributes included in Tables I and II. On the other hand, the movement features in Tables I and II can be depicted through the movement trace—interpreted as the trajectory taken by the robot over time—that each individual robot executes as it progresses towards the collective shape. In the next section, we describe how all these attributes are implemented in the controller of the robots to produce the behaviors that embody the different fundamental emotions.
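This decomposition of a swarm behavior into an interaction scheme plus a per-robot control law can be sketched in a few lines. This is a minimal illustration only, not the controllers used in the paper; the function names and the rendezvous rule below are our own illustrative choices.

```python
import numpy as np

def swarm_step(positions, neighbors_of, control_law, dt=0.02):
    """One synchronous update of a planar swarm.

    positions: (N, 2) array of robot positions.
    neighbors_of(i): indices of the robots robot i interacts with.
    control_law(i, positions, nbrs): velocity command for robot i,
        computed only from its own state and its neighbors' states.
    """
    velocities = np.array([
        control_law(i, positions, neighbors_of(i))
        for i in range(len(positions))
    ])
    return positions + dt * velocities

# Example control law: move toward the centroid of the neighbors
# (a simple consensus/rendezvous rule, for illustration only).
def rendezvous(i, positions, nbrs):
    if len(nbrs) == 0:
        return np.zeros(2)
    return positions[nbrs].mean(axis=0) - positions[i]
```

Different choices of `neighbors_of` and `control_law` then yield the distinctly different swarm behaviors described in Section III.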

III Swarm Behavior Design

Fig. 3: Sequence of snapshots of the happiness behavior. Each robot follows a point that travels along a circular sinusoid, visually producing a circular shape with small ripples. The trajectories of five robots have been plotted using solid lines. See the full video at https://youtu.be/q_FenI1DdRY.
Fig. 4: Sequence of snapshots of the surprise behavior. The robots move along a circle of expanding radius, thus creating a spiral effect. The trajectories of five robots have been plotted using solid lines. See the full video at https://youtu.be/VYIJ5hBeOIU.
Fig. 5: Sequence of snapshots of the sadness behavior. The robots move along a small circle at a low speed. The trajectories of five robots have been plotted using solid lines. After 8 seconds, each robot has only displaced approximately an eighth of the circumference. See the full video at https://youtu.be/rfHZcFnRFg8.
Fig. 6: Sequence of snapshots of the fear behavior. The robots spread out uniformly over the domain. As can be observed from the trajectories, the robots displace slowly with a non-smooth, angular movement trace. See the full video at https://youtu.be/jz-5INUd8wc.
Fig. 7: Sequence of snapshots of the disgust behavior. The robots spread out slowly towards the boundaries of the domain, following a non-smooth, angular movement trace. See the full video at https://youtu.be/EprfuCsuuRM.
Fig. 8: Sequence of snapshots of the anger behavior. The density function is defined as a Gaussian at the center of the domain, causing the robots to concentrate around this area. However, the fact that the robots move with high speed causes overshoots in their positions, thus producing a significantly angular movement trace. See the full video at https://youtu.be/kAGBrMkOtyY.

For our swarm of robots to be expressive, we need to decide which interactions a robot should establish with the robots in its vicinity and its environment, and which control law the robot should execute with the information obtained through those interactions to produce an appropriate swarm behavior. In this paper, we draw inspiration from standard algorithms for multi-robot teams, namely cyclic pursuit [Justh2003, Marshall2004, Ramirez2009] and coverage control [Cortes04, DiazMercado2015], to design the interactions and the control laws for the swarm. This section describes how the shape and movement features described in Section II are incorporated into the control laws of a swarm of 15 GRITSBots in order to create expressive behaviors.

III-A Collective Behavior

The attributes presented in Section II characterize how the motion and shape of an abstract object can convey emotion. Here we treat the GRITSBots as objects capable of reconfiguring themselves on a stage in order to generate an expressive behavior.

Among the attributes presented in Tables I and II, it seems natural for those related to shape and size to be depicted by the collective behavior of the swarm, given that the individual robots can move within the planar environment but cannot change their individual shape. To this end, the feature of roundness is incorporated into the behaviors of happiness, surprise and sadness. Those behaviors are thus based on the robots following some kind of circular contour, as illustrated in Figs. 3, 4 and 5, respectively. In the case of the happiness behavior, a sinusoid is superimposed on the base shape of a circle, producing ripples on the circle contour to embody the curvilinearity feature; and the corresponding size attribute, big, is incorporated through the circle dimensions with respect to the domain. As for the surprise emotion, the very big size attribute was included in the behavior by making the radius of the circle grow with time, thus producing a sensation of increasing size. Finally, the circular path dimension was reduced (small attribute) in the case of the sadness behavior, which also incorporates the slowness attribute by making the robots follow the contour at a very low speed.

The scarcity of shape characterizations for the other three emotions (fear, disgust and anger) motivates a different approach for the design of the collective behavior of the swarm. For these emotions, we choose to specify which areas of the domain the robots should concentrate around. We do so by defining a density function that characterizes the areas of the domain where we want the robots to group. In all three behaviors, the robots are initially distributed at random positions within the domain and then spread according to the particular density function selected. In the case of fear, the density function is uniform across the domain, making the robots scatter as far as possible from their neighbors, as shown in Fig. 6. For the disgust motion, Fig. 7, the density is chosen to be high around the boundaries, making the robots move from the center towards the exterior of the domain, the stage, giving the sensation of animosity between robots. Finally, in order to show anger, the robots are made to stay close to the center of the domain. This strategy, combined with the individual robot control that will be explained in Section III-B, is intended to give the sensation of a heated environment, a riot.
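The three density functions just described can be sketched as follows. This is a hedged illustration over a unit-square stage; the function names, the domain, and the specific widths and decay rates are our assumptions, not values taken from the paper.

```python
import numpy as np

# Density functions over a square domain [0, 1] x [0, 1].
# Each maps an (M, 2) array of query points to M non-negative weights.

def density_fear(q):
    """Uniform density: robots spread evenly over the domain."""
    return np.ones(len(q))

def density_disgust(q, sharpness=20.0):
    """High values near the boundary of the unit square."""
    # Distance to the closest edge; small near the boundary.
    d = np.minimum.reduce([q[:, 0], 1 - q[:, 0], q[:, 1], 1 - q[:, 1]])
    return np.exp(-sharpness * d)

def density_anger(q, sigma=0.15):
    """Gaussian bump at the center: robots concentrate there."""
    return np.exp(-np.sum((q - 0.5) ** 2, axis=1) / (2 * sigma ** 2))
```

Feeding any of these densities to a coverage controller, as in Appendix A, drives the robots toward the corresponding spatial distribution.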

The control laws needed to achieve these behaviors are explained in detail in Appendix A. In each of those laws, a robot in the swarm is treated as a point that can move omnidirectionally. However, the GRITSBots (see Fig. 1) are differential drive robots and, thus, are unable to move perpendicularly to the direction of their wheels. This movement restriction is used to our advantage in the individual control strategies described in Section III-B, where we exploit the limitations on the planar movement of the differential drive robots to implement the movement features in Tables I and II.

III-B Individual Robot Control

The swarm behavior strategies and corresponding control laws introduced in Section III-A and detailed in Appendix A treat each robot in the swarm as if it could move omnidirectionally. That is, if we denote by x the position of a robot, then its movement could be expressed using single integrator dynamics,

  ẋ = u,    (1)

with u denoting the control action given by the chosen behavior. However, the differential drive configuration of the GRITSBot implies that it cannot execute single integrator dynamics. Instead, the motion of a differential drive robot is described by the so-called unicycle dynamics,

  ẋ = v cos θ,  ẏ = v sin θ,  θ̇ = ω,    (2)

with (x, y) being the robot’s cartesian position and θ its orientation in the plane. The control inputs, v and ω, correspond to the linear and angular velocities of the robot, respectively, as shown in Fig. 2.

In order to convert the input in (1) into the executable control commands in (2), we use the near-identity diffeomorphism in [Olfati-Saber2002]. The details of this transformation are described in Appendix B. Using this transformation between the single integrator and the unicycle dynamics, we get to tune two scalar parameters, a look-ahead distance l and a proportional gain k, that regulate how smooth the movement trace of each robot is and how fast it travels when executing a certain control input, respectively. Figure 9 illustrates the differences between directly executing the single integrator dynamics in (1) and performing two different diffeomorphisms on the single integrator control value, u. We can observe how choosing a small value of l results in an angular movement trace, while a smooth trajectory is observed when selecting a bigger value for this parameter.

Given the ability to regulate the angularity and the speed of the movement trace of a robot, we are in a position to implement the movement features included in Tables I and II. The smoothness feature of the happiness emotion in Table I is translated into a smooth and fast individual control. Analogous diffeomorphism parameters are chosen to show surprise, given the roundness and very big size attributes associated with this emotion. As for sadness, even though it is a negative emotion and Table II associates an angular movement trace with such valence, we rely on the more specific characterizations provided in Table I and render the motion slow and smooth. We can observe how, indeed, the trajectories depicted in Figs. 3, 4 and 5 are smooth given the choice of a large look-ahead distance in the diffeomorphism. The speed of the robots is illustrated by the total distance covered in time: while significant distances are traveled within 4 seconds for the behaviors of happiness and surprise, the robots in the sadness behavior displace very little in 8 seconds.

Fig. 9: Effect of the diffeomorphism parameter (the look-ahead distance) on the movement trace of an individual robot. In all cases, the controller follows a particle that moves along the black dashed line, the desired trajectory. The top figure illustrates how an agent capable of executing the single integrator dynamics in (1) follows the desired trajectory closely. The other two trajectories, in blue, illustrate two different diffeomorphisms performed over the control action of the single integrator. In the middle, a small value of the parameter results in an angular movement trace that follows the desired trajectory quite closely. In contrast, at the bottom, a large value results in a very smooth movement trace, at the expense of following the desired trajectory more loosely.
Emotion   | Swarm Behavior                  | Robot Control
Happiness | sinusoid over circle            | fast, smooth
Surprise  | expanding circle                | fast, smooth
Sadness   | small circle                    | slow, smooth
Fear      | coverage: uniform density       | slow, angular
Disgust   | coverage: density on boundaries | slow, angular
Anger     | coverage: Gaussian density      | fast, angular
TABLE III: Motion and shape attributes selected for the behaviors associated with the fundamental emotions.
Response (%) | Happiness | Surprise | Anger | Fear  | Disgust | Sadness
Happiness    | 64.44     | 17.78    | 8.89  | 4.44  | 4.44    | 13.33
Surprise     | 11.11     | 57.78    | 8.89  | 2.22  | 0.00    | 0.00
Anger        | 8.89      | 0.00     | 55.56 | 13.33 | 15.56   | 4.44
Fear         | 6.67      | 13.33    | 20.00 | 40.00 | 35.56   | 15.56
Disgust      | 6.67      | 4.44     | 4.44  | 26.67 | 40.00   | 2.22
Sadness      | 2.22      | 6.67     | 2.22  | 13.33 | 4.44    | 64.44
TABLE IV: Confusion matrix calculated with the survey responses. Each column corresponds to the proposed emotion; each row gives the percentage of responses assigned to that emotion.

Table II associates an angular movement trace with the emotions with negative valence. Consequently, a controller that produces an angular movement trace, corresponding to a small look-ahead distance in the diffeomorphism, is selected for the remaining emotions: fear, disgust and anger. The movement features presented in Table I for anger and fear are translated into fast and slow control, respectively. Given the lack of characterization for the speed of disgust, we opt to implement a slow motion. We can observe how, in Figs. 6-8, the trajectory traces have sharp turns and angularities, especially in the case of the anger behavior, where they are accentuated by the proportional gain corresponding to a large velocity.

The swarm behavior selected for each of the emotions according to the shape characterizations discussed in Section III-A and the diffeomorphism parameters in this section are summarized in Table III.

IV User Study

The behaviors described in Section III were implemented in simulation on a team of 15 differential drive robots, producing a video for each of the emotions. Snapshots generated from each of the videos, along with the URL links, are included in Figs. 3 to 8.

IV-A Procedure

A user study was conducted to evaluate whether the swarm interactions and individual robot control strategies selected in Section III produce expressive swarm behaviors that correspond to the fundamental emotions. The hypothesis to test was the following:

H1: Overall Classification.

Participants will perform better than chance in identifying the fundamental emotion each swarm behavior is intended to represent.

A total of 45 subjects (32 males and 13 females) participated in the study, 29 of whom had no academic or professional background in robotics. After responding to the demographic questions, each subject was shown 6 videos, each corresponding to the behavior designed for one of the fundamental emotions. The videos were shown sequentially, one behavior at a time, and in a random order. After watching each video, the subject was presented with a multiple choice (single answer) question to select the emotion that best described the movement of the robots in the video, with the possible answers being the 6 fundamental emotions. The subjects had no time limit when classifying the videos and were allowed to rewatch them as many times as desired.

Fig. 10: Representation of the survey responses in the valence-arousal plane (in this context, the term arousal designates the activation or deactivation associated with an emotion). The location of each emotion is represented with a color-coded cross according to the circumplex model of affect [Ross1938, Russell1980]. Next to each emotion, a sequence of color-coded circles represents how the human subjects identified each behavior, with the diameter of each circle proportional to the number of responses given to the corresponding emotion. We can observe how, in general, the majority of users label the behavior according to the proposed emotion, with most variations occurring with those emotions closest in the plane. In the cases of fear and disgust, while the relative majority of subjects still labels their behaviors according to the hypothesis, we observe a significant amount of confusion between them, which may be due to the proximity of these emotions in terms of valence and arousal.

IV-B Results and Discussion

The responses of the survey are summarized in Table IV. The columns are labeled with the proposed emotion, and each contains the responses given to the video of the behavior designed for that fundamental emotion. In the confusion matrix in Table IV, the emotions are ordered counterclockwise from positive to negative valence according to the circumplex model in Fig. 10.

The diagonal terms of the confusion matrix in Table IV correspond to the percentage of responses that identified the emotion in the video as the one intended by the authors. All the diagonal values are much higher than the percentage given by chance (16.67%), and in most cases (happiness, sadness, anger and surprise) this value reaches the absolute majority (greater than 50%). In the cases of fear and disgust, while the relative majority of the responses identified the emotion according to our hypothesis (40% for both emotions), the values are lower than 50%. This can potentially be caused by the proximity of these emotions in terms of valence and arousal, as illustrated in Fig. 10.
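These aggregate claims can be checked directly from the numbers in Table IV; a small sketch:

```python
import numpy as np

# Confusion matrix from Table IV (rows: responses, columns: proposed
# emotions), in percent. Order: happiness, surprise, anger, fear,
# disgust, sadness.
confusion = np.array([
    [64.44, 17.78,  8.89,  4.44,  4.44, 13.33],
    [11.11, 57.78,  8.89,  2.22,  0.00,  0.00],
    [ 8.89,  0.00, 55.56, 13.33, 15.56,  4.44],
    [ 6.67, 13.33, 20.00, 40.00, 35.56, 15.56],
    [ 6.67,  4.44,  4.44, 26.67, 40.00,  2.22],
    [ 2.22,  6.67,  2.22, 13.33,  4.44, 64.44],
])

chance = 100.0 / 6              # ~16.67% for a 6-way forced choice
accuracies = np.diag(confusion)  # correct identifications per emotion

# Every behavior is recognized above chance; four of the six
# (happiness, surprise, anger, sadness) reach an absolute majority.
above_chance = np.all(accuracies > chance)
majorities = np.sum(accuracies > 50.0)
```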

Fig. 11: Percentage of subjects that identified each emotion in the video according to the hypothesis, classified according to the robotics background of the subjects. There is no substantial difference between the responses given by the subjects with experience studying or researching robotics and those without.

Based on the demographic data collected, the accuracy of the results was not affected significantly by the robotics background of the subjects. As shown in Fig. 11, for the 4 emotions for which the majority of the aggregate responses in Table IV aligned with the hypothesis, i.e., happiness, surprise, anger and sadness, all subjects, regardless of their background in robotics, identified the emotions according to the hypothesis in more than 50% of the cases. For the emotions of fear and disgust, for which the lowest accuracies are observed in Table IV, the responses aligned better with the hypothesis for those subjects without a robotics background, but no significant deviations were observed between the two groups. In contrast, when performing an analysis by gender, the accuracy of the responses with respect to the hypothesis was consistently higher for female subjects, as shown in Fig. 12. For all the swarm behaviors, the accuracy was higher among the female participants, exceeding 50% for 5 of the 6 emotions. Only in the case of fear was the accuracy for the female participants slightly under the majority threshold (46.15%). Thus, while the male responses still validated hypothesis H1, the results show that the motion and shape characterizations selected for the swarm behaviors were more clearly identified by the female observers than by the male ones.

Fig. 12: Percentage of subjects that assigned the hypothesized emotion to the corresponding video, grouped by the gender of the participants. We can observe how the responses of the female subjects are consistently more aligned with the hypothesized behavior for each of the videos.

As seen above, the data collected in the user study supports hypothesis H1 for every one of the six behaviors, thus confirming that the swarm behaviors and individual robot control paradigms designed in Section III effectively depict each of the fundamental emotions. Therefore, the behaviors considered in this study provide a collection of motion primitives for robotic swarms to convey emotions in artistic expositions.

(a) Happiness: The robots follow points moving along a circle with a superposed sinusoid.
(b) Surprise: The robots follow points moving along a circle of expanding radius. Two snapshots, taken at different times, are shown here.
(c) Sadness: The robots follow points that move slowly along the contour of a circle that is small with respect to the dimensions of the domain.
Fig. 13: Shapes selected for the happiness, surprise and sadness swarm behaviors. Each agent, here depicted as a red circle, follows a point (black circle) that moves along the dashed trajectory. The go-to-goal controller that makes each agent follow the corresponding point is illustrated with blue arrows for 3 of the agents.

V Conclusions

In this paper, we investigated how motion and shape descriptors from social psychology can be integrated into the control laws of a swarm of robots to express fundamental emotions. Based on such descriptors, a series of swarm behaviors were developed, and their effectiveness in depicting each of the fundamental emotions was analyzed in a user study. The results of the survey showed that, for all the swarm behaviors created, the relative majority of the subjects classified each behavior with the corresponding emotion according to the hypothesis, with this ratio exceeding 50% for 4 of the 6 fundamental emotions. Some confusion was observed in the classification of the behaviors of fear and disgust, which can be attributed both to the similarity between the two emotions in terms of valence and arousal and to the scarcity of descriptors in the literature for the disgust emotion, which complicated the characterization of its associated swarm behavior. Further analysis of the results showed that the robotics background of the participants had no influence on the classification of the behaviors, while the responses of the female participants were more aligned with the hypothesis than those of their male counterparts. In conclusion, the motion and shape descriptors extracted from social psychology afforded the development of distinct expressive swarm behaviors, identifiable by human observers as one of the fundamental emotions, thus providing a starting point for the design of expressive behaviors for robotic swarms in artistic expositions.

Appendix A Swarm behaviors

In Section III-A, a series of swarm behaviors were designed based on the movement and shape attributes associated with the different fundamental emotions. This appendix includes the mathematical expressions of the control laws used to produce the different swarm behaviors. Note that all the control laws included here treat each robot in the swarm as a point that can move omnidirectionally according to single integrator dynamics as in (1). The transformation from single integrator dynamics to unicycle dynamics is discussed in detail in Appendix B.

A-A Happiness

The swarm movement selected for the happiness behavior consists of the robots following the contour of a circle with a superimposed sinusoid. This shape is illustrated in Fig. 13(a) and can be parameterized as

  p(θ) = (R + A sin(ωθ)) [cos θ, sin θ]ᵀ,    (A-A)

where R is the radius of the main circle and A and ω are the amplitude and frequency of the superposed sinusoid, respectively.

If we have a swarm of N robots, we can initially position Robot i at p(θ_i(0)), with

  θ_i(0) = 2π(i − 1)/N,  i = 1, …, N.    (3)

Then the team will depict the desired shape if each robot follows a point evolving along the contour in (A-A), p(θ_i(t)), with θ_i a function of time t,

  θ_i(t) = θ_i(0) + νt,    (4)

where ν is the angular speed at which the reference points travel along the contour.
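A hedged sketch of this parameterization follows; the radius, amplitude, frequency and speed values below are illustrative defaults, since the values used in the paper are not recoverable from the text.

```python
import numpy as np

def happiness_contour(theta, R=1.0, A=0.1, omega=10):
    """Point on a circle of radius R with a superimposed radial
    sinusoid of amplitude A and (integer) frequency omega, so that
    the contour closes on itself."""
    r = R + A * np.sin(omega * theta)
    return r * np.array([np.cos(theta), np.sin(theta)])

def reference_points(t, N=15, speed=0.5, **kwargs):
    """Reference point for each of N robots at time t: robots are
    spaced evenly along the contour and advance at a common angular
    speed, as in (3) and (4)."""
    thetas = 2 * np.pi * np.arange(N) / N + speed * t
    return np.stack([happiness_contour(th, **kwargs) for th in thetas])
```

Each robot then simply tracks its own entry of `reference_points(t)` with a go-to-goal controller.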

A-B Surprise

In the case of the surprise emotion, each robot follows a point moving along a circle with expanding radius, as in Fig. 13(b). Such a shape can be parameterized as

  p(θ, t) = R(t) [cos θ, sin θ]ᵀ,

with R(t) growing monotonically in time to create a radius that expands from its initial to its final value.

Analogously to the procedure described in Section A-A, in this case the robots can be initially located at p(θ_i(0), 0), with θ_i(0) given by (3). The controller for each robot then makes Robot i track the moving point

  p(θ_i(t), t),    (5)

with θ_i(t) as in (4).

(a) Anger: the Gaussian density makes the robots concentrate around the center of the domain. This choice, along with the selection of a large proportional gain in the diffeomorphism in (B), makes the robots stay in each other’s vicinity and react to each other’s movement, producing a jarring movement trace.
(b) Disgust: the density function presents high values along the boundaries of the domain. This choice makes the team spread along the boundary, giving the sensation of animosity between robots.
(c) Fear: the density function is chosen to be uniform across the domain. With this choice, the robots scatter evenly over the domain from their initial positions.
Fig. 14: Density functions used to represent the emotions of anger (a), disgust (b) and fear (c). The higher the density (darker color), the higher the concentration of robots in that area. The red circles represent the positions of the agents once the control law in (A-D) has converged.

A-C Sadness

For the case of the sadness emotion, the robots move along a circle of small dimension as compared to the domain. The strategy is analogous to the ones in (A-A) and (5), with the parameterization of the contour given by

  p(θ) = r [cos θ, sin θ]ᵀ,    (6)

with r small with respect to the dimensions of the domain.

A-D Anger, Fear and Disgust

For the remaining emotions—anger, disgust and fear—the swarm coordination is based on the coverage control strategy, which allows the user to define which areas the robots should concentrate around.

If we denote by D the domain of the robots, the areas where we want to position the robots can be specified by defining a density function, φ : D → (0, ∞), that assigns higher values to those areas where we desire the robots to concentrate. We can make the robots distribute themselves according to this density function by implementing a standard coverage controller such as [Cortes04],

  u_i = k (c_i(x) − x_i),    (A-D)

where x denotes the aggregate positions of the robots and k is a proportional gain. In the controller in (A-D), c_i(x) denotes the center of mass of the Voronoi cell of Robot i,

  c_i(x) = ( ∫_{V_i(x)} q φ(q) dq ) / ( ∫_{V_i(x)} φ(q) dq ),

with the Voronoi cell being characterized as

  V_i(x) = { q ∈ D : ‖q − x_i‖ ≤ ‖q − x_j‖ for all j ≠ i }.
Fig. 14 shows the densities selected for each of the emotions, where the red circles represent the positions of the robots in the domain upon convergence, achieved by running the controller in (A-D).
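A discretized version of this coverage controller can be sketched as follows. This is a grid-based approximation of the density-weighted Voronoi centroids, not the exact implementation used by the authors, and all numeric parameters are our own choices.

```python
import numpy as np

def coverage_step(positions, density, k=1.0, dt=0.1, grid_n=60):
    """One step of a (discretized) coverage controller: each robot
    moves toward the density-weighted centroid of its Voronoi cell.

    positions: (N, 2) robot positions in the unit square.
    density:   function mapping (M, 2) query points to weights.
    """
    # Discretize the unit-square domain into grid cell centers.
    xs = (np.arange(grid_n) + 0.5) / grid_n
    q = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    w = density(q)

    # Assign every grid point to its nearest robot (Voronoi cells).
    d2 = ((q[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)

    new_positions = positions.copy()
    for i in range(len(positions)):
        mask = owner == i
        if w[mask].sum() > 0:
            centroid = (q[mask] * w[mask, None]).sum(0) / w[mask].sum()
            new_positions[i] += dt * k * (centroid - positions[i])
    return new_positions
```

Iterating `coverage_step` with the fear, disgust or anger density drives the swarm toward the corresponding configuration shown in Fig. 14.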

Appendix B Individual Robot Control

The swarm behaviors described in Appendix A assume that each robot in the swarm can move omnidirectionally according to

  ẋ_i = u_i,    (7)

with x_i the Cartesian position of Robot i in the plane and u_i the desired velocity. However, the GRITSBot (Fig. 1) has a differential-drive configuration and cannot move omnidirectionally, as its motion is constrained in the direction perpendicular to its wheels. Instead, its motion can be expressed as unicycle dynamics,

  ẋ_i = v_i cos θ_i,  ẏ_i = v_i sin θ_i,  θ̇_i = ω_i,    (8)

with θ_i the orientation of Robot i and v_i and ω_i the linear and angular velocities executable by the robot, as shown in Fig. 15.

Fig. 15: Parameters involved in the near-identity diffeomorphism in (B), used to transform the single integrator dynamics in (7) into the unicycle dynamics in (8), executable by the GRITSBots. The pose of the robot is determined by its position, x_i, and its orientation, θ_i. The single integrator control, u_i, is applied to a point located at a distance l in front of the robot. The linear and angular velocities, v_i and ω_i, that allow the robot to track this point are obtained by applying the near-identity diffeomorphism in (B).

In this paper, the single integrator dynamics in (7) are converted into unicycle dynamics, as in (8), using a near-identity diffeomorphism [Olfati-Saber2002],

  v_i = cos(θ_i) u_{i,1} + sin(θ_i) u_{i,2},
  ω_i = (1/l) (−sin(θ_i) u_{i,1} + cos(θ_i) u_{i,2}),    (B)

where l > 0 is the distance from the robot to the tracked point. A graphical representation of this transformation is included in Fig. 15: the input u_i is applied to a point located at a distance l in front of the robot, which can move according to the single integrator dynamics in (7). The effect of this parameter on the movement of the robot is illustrated in Fig. 9. The gain k applied to the single integrator control acts as a proportional gain.
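This mapping can be sketched in a few lines, assuming the standard form of the near-identity diffeomorphism; the symbol l for the look-ahead distance and the function name are our own choices.

```python
import numpy as np

def si_to_unicycle(u, theta, l=0.05):
    """Map a single-integrator velocity command u (applied to a point
    a distance l ahead of the robot) to unicycle inputs (v, w),
    following the near-identity diffeomorphism of Olfati-Saber.
    A small l produces sharp, angular traces; a large l produces
    smooth ones."""
    c, s = np.cos(theta), np.sin(theta)
    v = c * u[0] + s * u[1]          # project u onto the heading
    w = (-s * u[0] + c * u[1]) / l   # rotate the remainder, scaled by 1/l
    return v, w
```

For example, a command aligned with the robot's heading yields pure forward motion, while a perpendicular command yields pure rotation whose rate grows as l shrinks.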

References