
A General, Evolution-Inspired Reward Function for Social Robotics

02/01/2022
by   Thomas Kingsford, et al.

The field of social robotics will likely need to depart from a paradigm of designed behaviours and imitation learning and adopt modern reinforcement learning (RL) methods to enable robots to interact fluidly and efficaciously with humans. In this paper, we present the Social Reward Function as a mechanism to provide (1) a real-time, dense reward function necessary for the deployment of RL agents in social robotics, and (2) a standardised objective metric for comparing the efficacy of different social robots. The Social Reward Function is designed to closely mimic the genetically endowed social perception capabilities of humans in an effort to provide a simple, stable and culture-agnostic reward function. Presently, datasets used in social robotics are either small or significantly out-of-domain with respect to social robotics. The use of the Social Reward Function will allow larger in-domain datasets to be collected close to the behaviour policy of social robots, which will allow further improvements both to reward functions and to the behaviour policies of social robots. We believe this will be the key enabler to developing efficacious social robots in the future.



1 Introduction

Social robots aim to achieve tasks by interaction and collaboration with humans on a social level. This requires an understanding of social skills. But what are social skills? Human social skills are a means to an end that facilitates communal and societal cooperation, and they have enabled humans to become the dominant species on Earth.

The field of social robotics has yet to leverage the fast-paced advancement that both broader robotics and machine learning have benefited from in recent years. We propose there are two missing pieces:

  1. Reinforcement learning

  2. Objective evaluation mechanisms

The first missing piece is the realisation that social robotics is a sequential decision making problem, and should be addressed using modern reinforcement learning approaches. This allows us to escape the limitations of systems designed by hand or by mimicry.

How do humans learn social skills? Humans can be thought of as a meta-learning system: evolution (as a learning algorithm) has endowed individual humans with the ability to learn from experience, and this is true of humans’ social skills too. There is clearly a poverty of stimulus: humans aren’t merely general learners who are dropped into society as infants and learn everything from scratch. There is clearly a prior, or a bias, that is used to bootstrap learning. Humans have intrinsic desires which encourage learning early in life; some of these desires change as the human infant develops, and others remain. It is these intrinsic desires which enable human learning, including the learning of social skills, and which may be thought of as a reward function.

The difficulty, of course, is that it is hard for robots to learn social behaviours directly. A significant contributor to this difficulty is that simulation is largely infeasible in a social robotics context: while social robots themselves can be simulated, their environment (the humans they interact with) cannot be. We propose that, if it is possible for robots to learn social skills, it will be possible only by providing them with the same genetically endowed social cues as humans. With any less, the problem is too sample-inefficient to be tractable; with any more, the problem is likely to be over-prescriptive of a particular solution, with no guarantee of how close to optimal that solution is. Moreover, it will be important to provide robots with the opportunity to learn in long-term, real-world social deployments to accrue sufficient data.

If we wish for autonomous systems, such as social robots, to interact collaboratively and fluidly with humans in a social context, we have three options:

  1. Intricately design their interactions, based on engineers’ understanding

  2. Learn social behaviour as a byproduct of optimising with respect to collaborative tasks. This is akin to robotic learning taking the learning responsibilities of the individual human and evolution at large.

  3. Bootstrap by providing fundamental social capabilities as a reward signal in training, thus mimicking the learning environment genetically endowed upon humans, then learn. This is akin to robotic learning being restricted to the learning responsibilities of the individual human only, but not evolution.

We propose that rapid development in this field is feasible only by adopting this latter approach. By treating social robotics as a sequential decision making problem, solving it with modern reinforcement learning algorithms, but starting with capabilities as close to those genetically endowed upon humans as possible, we believe social robotics can begin to experience the rapid advancement seen in other fields of machine learning and robotics.

The second missing piece is the development of objective evaluation mechanisms, something that has proven exceedingly hard in social robotics. Such mechanisms would enable both standardised evaluation of systems (there is currently no agreement on how to objectively and quantitatively evaluate a social robot’s performance) and reinforcement learning. To serve both purposes, they must be automatic and low-latency.

We propose the Social Reward Function, a real-time, real-valued reward function designed to mimic, to the best of our abilities, those fundamental social capabilities endowed upon humans by evolution, and which can be used for both the learning and the evaluation of social robotic systems. An implementation is made publicly available as a Python library at https://github.com/TomKingsfordUoA/social-reward-function.

2 Designing a reward function

The design problem is to produce a sufficiently dense reward function based on our best guess of the genetically endowed social cues of humans. In principle, we want to avoid over-engineering the reward function, as doing so amounts to assuming a solution: behaviour that is optimal with respect to an over-engineered reward function might fail to be sufficiently socially efficacious. At the same time, we need to ensure there is sufficient information content in the reward signal that learning is tractable.

So, how do we determine the appropriate genetically endowed capabilities of humans? Two general approaches can be taken.

In the first, the development of humans in pre-adulthood is studied. Since infants have been exposed to minimal experience, it is likely that the capabilities they demonstrate are largely evolutionarily-endowed. Human infants are born in a significantly immature state and don’t reach physical and intellectual maturation for more than two decades. This is in stark contrast to many other animal species. This presents a risk in that social skills which are truly genetically endowed may be indistinguishable from those which are learned as biological maturation occurs in concert with experiential learning. Nonetheless, if we focus on studying the social skills of humans in infancy rather than childhood and beyond, the effect of experiential learning can be minimised and observed effects can be assumed to be attributable directly to evolution.

In the second, humans are studied across cultures for common social skills. It is likely that culture-agnostic social skills are either endowed by evolution or are learned but so general that their incorporation in a reward function doesn’t present a risk of over-engineering.

[17] conclude the following social capabilities are likely to be genetically endowed in humans:

  1. Face and face-like object detection [28, 39], including facial expression recognition [8]. The generation of certain facial expressions is likely innate [41], which could suggest that the detection of those facial expressions, but not others, is innate.

  2. Eye-like object detection and limited gaze following [40].

  3. Proprioceptive mimicry [28, 40].

  4. Biological motion detection [40].

  5. Prenatal maternal/foetal physiological bond yielding vocal emotion recognition in the neonate [25].

For a more thorough review of social cognition in the Psychology literature and its relevance to social robotics, the reader is referred to [17].

3 Components

For the purposes of designing the Social Reward Function, the following capabilities are considered evolutionarily-endowed in humans, amenable to implementation in a robot, and able to be described by a reward function:

  1. Simple facial expressions (more advanced facial expressions are likely learned and may be culture-dependent)

  2. Emotion in voice (although this is likely learned in the womb through the physiological connection with the mother)

  3. Touch is favourable ceteris paribus

  4. Interaction is favourable ceteris paribus

We have designed the Social Reward Function to incorporate simple facial expressions and interaction principally, with voice emotion secondarily. Modern machine learning models for Facial Emotion Recognition (FER) are robust to real-world situations, while models for Speech Emotion Recognition (SER) often struggle to generalise across domains. It is likely that this is due to the dominant datasets for SER suffering from cultural homogeneity and from being collected from actors in laboratory settings. It is hoped that the collection and annotation of more realistic datasets (perhaps facilitated by widespread deployments of social robots) will lead to the development of more robust SER models.

It is likely that touch can be incorporated in future works. Touch has been used by prior art as a component of the reward signal in RL for social robots [33]. However, it is presently unclear how general this is (both in terms of situations and robot morphology) and more data must be collected. For instance, particular scenarios or robot morphologies might unfairly encourage touch from which inappropriately inflated social value scores might be inferred. It is likely that a specification for touch sensors and robot morphology will need to accompany the inclusion of touch in this reward function.

A survey of FER and SER models with publicly available implementations was conducted. Those that had reasonable performance on standard datasets and passed subjective assessments of robustness were included in the Social Reward Function. A summary of the included models is presented in Table 1.

Model Name | Modality | Datasets | Test-Set Accuracy*
Residual Masking Network [22] | FER | FER2013 [11] | 69% (76.82%)
MevonAI [27] | SER | RAVDESS [21] | 34% (66.0%)
Emotion Recognition Using Speech [47] | SER | RAVDESS [21], TESS [6], EMODB [2] | 34% (79.5%)
Multi-Modal SER [37] | SER | IEMOCAP [3] | 33% (56.6%)
Table 1: Social Reward Function Constituent Models
* Accuracy according to our experimental results, based on FER2013/RAVDESS partitioned into training and test sets such that no actor is present in both. Author-published accuracy (based on whatever dataset was used by the authors) is presented in parentheses.

The above models are combined to produce two emotion perception matrices, one per modality (video/FER and audio/SER), in which each constituent model contributes a row of probabilities over the estimated emotions (Eqs. 1–2). For each modality, the per-model emotion vectors are averaged to produce a single per-modality emotion vector (Eqs. 3–4). The reward is then calculated at each time step as a weighted combination of the per-modality emotion vectors and a presence term, where the weights are design parameters (Eqs. 5–8).
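To make the structure above concrete, the following is a minimal sketch of how per-model emotion probabilities might be averaged per modality and combined with a presence term into a scalar reward. The function name, the weights and the emotion-to-value mapping are our own illustrative assumptions and do not reproduce the published implementation.

    import numpy as np

    def social_reward(fer_probs, ser_probs, person_present, emotion_values,
                      w_video=1.0, w_audio=0.5, w_presence=0.1):
        """Illustrative combination of per-model emotion probabilities into a
        scalar social reward (a sketch; not the library's actual API).

        fer_probs:      list of per-model probability vectors over N emotions (video)
        ser_probs:      list of per-model probability vectors over N emotions (audio)
        person_present: whether a person is currently detected in the scene
        emotion_values: length-N vector assigning a reward value to each emotion
                        (e.g. positive for happiness, negative for anger)
        """
        values = np.asarray(emotion_values, dtype=float)
        n = len(values)
        # Average the per-model predictions within each modality (cf. Eqs. 3-4).
        video_vec = np.mean(np.stack(fer_probs), axis=0) if fer_probs else np.zeros(n)
        audio_vec = np.mean(np.stack(ser_probs), axis=0) if ser_probs else np.zeros(n)
        # Collapse each per-modality emotion vector to a scalar.
        r_video = float(video_vec @ values)
        r_audio = float(audio_vec @ values)
        r_presence = 1.0 if person_present else 0.0
        # Weighted combination of the modality rewards and the presence term (cf. Eqs. 5-8).
        return w_video * r_video + w_audio * r_audio + w_presence * r_presence

In this sketch the weights play the role of the design parameters referred to above and would need to be tuned for a particular robot and deployment.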

4 Evaluation method

Two approaches are appropriate for evaluating the correctness of the Social Reward Function: direct and indirect evaluation.

A direct indicator of the correctness of the Social Reward Function is an evaluation of the correlation between the function’s assessment of social situations, and that of a human observer acting as an oracle. This approach allows large amounts of data to be collected by annotating previously-collected scenes across a variety of scenarios. The downside is that scenarios are not constrained to the human-robot-interaction domain and hence might not accurately characterise correctness in that domain.

An indirect indicator is to expose human participants to a selection of robot agents and observe their interactions. It is important that such an experiment be focused on human-robot interaction, but that the participants be blinded (i.e. the human participants shouldn’t be aware their responses to the robot are being observed). If the experiment is not blinded, it is likely the participants will modify their own reactions either consciously or subconsciously. If the experiment is not focused on human-robot interaction, the data can be considered out-of-domain and might not generalise to the prediction of human emotions in a human-robot interaction domain. In such an experiment, (non-participant) human observers and the Social Reward Function evaluate the emotional reaction of the human participants during the experiment, and the consistency of these evaluations is assessed. Such an evaluation is highly domain-specific to human-robot-interaction, but will contain relatively limited quantities of data.

In both cases it is notable that humans are often fallible witnesses and often not consciously aware of their true emotional responses to social situations, hence their responses might contain inaccuracies. Moreover, these inaccuracies might be systematic and displayed across different instances of similar types of scenarios and potentially across different human observers.

5 Results

5.1 Performance of components in isolation

Before evaluating the Social Reward Function as a whole, it is worthwhile to characterise the performance of the component models on relevant FER and SER datasets. To allow for a direct comparison of model architectures without the effects of different datasets, we re-train each model on FER2013 or RAVDESS only (for FER and SER models respectively). Specifically, datasets were split into training and test sets by partitioning on actors, and not on individual samples. In this way, the test set provides a more challenging but more realistic test of the ability of the models to generalise to unseen persons, as would be required in a robotics application.

Since it is possible that pre-trained models were exposed to test set data which would invalidate test set results, models were trained from random initial parameters on the training set only, then evaluated on both the training and test sets. Detailed results are presented in Appendix A.
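For illustration, the following sketch shows one way to partition a dataset by actor so that no actor appears in both the training and test sets; the use of scikit-learn's GroupShuffleSplit and the variable names are assumptions on our part, not the authors' tooling.

    from sklearn.model_selection import GroupShuffleSplit

    def split_by_actor(samples, actor_ids, test_fraction=0.2, seed=0):
        """Partition samples into train/test sets such that no actor appears in
        both. `samples` and `actor_ids` are parallel lists; names are illustrative."""
        splitter = GroupShuffleSplit(n_splits=1, test_size=test_fraction, random_state=seed)
        train_idx, test_idx = next(splitter.split(samples, groups=actor_ids))
        # Sanity check: the sets of actors in the two partitions must be disjoint.
        assert {actor_ids[i] for i in train_idx}.isdisjoint(actor_ids[i] for i in test_idx)
        return [samples[i] for i in train_idx], [samples[i] for i in test_idx]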

The FER model performs well, producing a top-1 test-set accuracy of 69%. Unfortunately, the SER models all produce a top-1 test-set accuracy of approximately 33–34%. This is likely due to the small dataset size (both in terms of the number of samples and the diversity of actors and scenarios), which makes generalisation difficult.

We can see from Fig. 3 that the FER model exhibits a good level of accuracy, with a substantial true positive rate for every emotion. Moreover, many of its incorrect predictions are in fact reasonable: sadness is sometimes classified as fear, fear as sadness, disgust as anger, and so on. It is rare for a wholly incorrect classification to occur; perhaps the worst such occurrence is happiness occasionally being classified as sadness.

We can see from Fig. 5 that the SER models exhibit only modest accuracy. Many of the erroneously classified emotions are reasonable; for instance, MevonAI frequently classifies fear as anger. However, unlike FER, there is a significant amount of unreasonable erroneous classification: for instance, MSER often classifies calm as disgust, and ERUS often classifies happiness as fear or anger. This demonstrates a high error rate for the SER models, and hence those models should be carefully integrated into the Social Reward Function to ensure the overall model is improved, not degraded, by the addition of SER predictions.

5.2 Direct evaluation: estimation of emotion

A dataset of human interactions was formed by scraping YouTube results for the following keywords: crying, debate, interview, scene, and senate hearing. Ultimately, a dataset of some 437 10-second clips was generated, each with an associated human-generated label indicating how desirable it would have been for a robot to have caused the scene to occur. The results of the Social Reward Function relative to this dataset form the basis of the direct evaluation of the system, from which we infer that the system does produce a valid reward function for use in social robotics. The dataset and results are discussed in detail in Appendix B.

The Social Reward Function produced a Pearson correlation coefficient, r, with respect to human-provided labels that can be interpreted as a moderate but not strong correlation (Table 8). From Figure 6, we can see a clear positive relationship between the median predicted reward and the ground-truth labels: the median predicted reward for the strongly and slightly negative labels is below the 25th percentile of the positive labels, and, vice versa, the median predicted reward for the strongly and slightly positive labels is above the 75th percentile of the negative labels. This is a good indication that the Social Reward Function is able to distinguish positive and negative situations, assigning them correspondingly positive and negative rewards.

The most significant errors of the Social Reward Function occur for those of neutral ground truth reward. These errors are primarily due to the model failing to distinguish arousal from emotion class. For instance, it is often observed that high arousal but neutral emotion yields a misclassification of emotion as fearful or angry. This presents an opportunity for improvement in future iterations - perhaps by introducing arousal as a predicted metric, or by increasing the diversity of data used to train FER/SER models such that high arousal situations are correctly classified.

5.3 Indirect evaluation: consistency with qualitative assessments of agent behaviour

Unfortunately, due to COVID restrictions, it was not feasible for our lab to conduct social robotics experiments sufficient for an indirect evaluation of the Social Reward Function to be performed.

A survey of social robotics datasets published by other research groups was conducted. These datasets were filtered for suitability, and a summary presented in Table 2. Unfortunately, no suitable datasets were identified. Most datasets were eliminated as they do not involve human-robot interaction. Air-Act2Act contained human-robot interaction of elderly participants, but did not contain audio data and hence cannot be used to evaluate the Social Reward Function. Human participants in the JPL-interaction dataset were aware of the experiment, and hence their responses are likely to be consciously or sub-consciously modified, which undermines the use of this dataset.

Name | Year | Usability
AIR-Act2Act [18] | 2020 | No Audio
NTU RGB+D 120 [20] | 2019 | No HRI
DeepMind Kinetics [16] | 2017 | No HRI
ShakeFive2 [44] | 2016 | No HRI
K3HI [46] | 2013 | No HRI
JPL-Interaction [36] | 2013 | Experiment Not Blinded
SBU Kinect Interaction [48] | 2012 | No HRI
UT-Interaction [35] | 2010 | No HRI
TV Human Interaction [31] | 2010 | No HRI
Hollywood2 [24] | 2009 | No HRI
Table 2: Social Robotics Datasets

6 Discussion

6.1 Use as a benchmark

Standard evaluation benchmarks have been the cornerstone of progress in various fields of machine learning [34, 1, 9, 30]. Such benchmarks allow researchers to isolate as many variables as possible when developing novel algorithms, and to directly compare their results to prior art. Without such benchmarks in fields like supervised learning and reinforcement learning, it is likely progress would have been stunted.

According to [9], benchmarks in reinforcement learning should:

  1. Be composed of tasks that reflect challenges in real-world applications… of RL

  2. Be widely accessible for researchers and define clear evaluation protocols for reproducibility

  3. Contain a range of difficulty to differentiate between algorithms

The field of social robotics struggles to define such an evaluation due to the difficulty of defining clear evaluation protocols. A clear evaluation protocol that enables reproducibility has two components: a valid evaluation metric that quantifies task success, and a reproducible set of scenarios or environments to embed social robots within.

Fortunately, the Social Reward Function provides a valid evaluation metric for use in a benchmark.

Notably, in the field of robotics (excluding social robotics), the problem of producing sets of reproducible scenarios or environments is relatively easy (albeit with some notable challenges) and amounts to merely defining and characterising a task to solve [14]. In social robotics, the necessity of the presence of, and interaction with, human participants in experiments makes such standardisation difficult. Fortunately, we can look to the field of Psychology which has encountered similar problems. In that field, standard batteries of experiments are defined which allows different research groups in different geographies to reproduce prior works.

A direction for future work in the field is to establish an appropriate, broad battery of social experiments and couple these with the Social Reward Function to produce a benchmark for social robotics.

6.2 Limitations of ground truth

As in many supervised learning problems there is, unfortunately, no impartial oracle that can provide ground truth. Instead, humans must agree on label definitions and then assign labels to samples with respect to these definitions. In the emotion detection domain, both definitions and assessment of samples are incredibly difficult and subjective. Emotions are an abstract concept that describes the internal state of humans and hence are not directly observable. Humans, as a social species, have developed strong capabilities for inferring emotional states of others by observation but this is far from infallible.

Moreover, the presentation of emotions often requires significant context to infer underlying emotional states. Some illustrative examples include:

  • Consider an actor playing a role and expressing an emotion. A human observer can use their contextual knowledge that the actor is not truly present in the scene and hence not truly sharing the experience of the character.

  • Consider a comedian, getting angry as part of a set. The comedian may or may not be truly experiencing anger, and human observers can often assess this based on their understanding of the individual and the situation the comedian is in.

  • Consider a person exercising intensely. That person might exhibit discomfort through facial expression and speech, but may themselves describe the activity as either enjoyable in itself or unenjoyable but beneficial (in this latter case, it may be considered strictly correct to classify the circumstance as having negative reward and rely on compensatory positive long-term reward resulting from health and wellbeing benefits).

Finally, the model itself is limited in how it defines a scenario. It assumes the scene contains only one entity, which can display a combination of seven basic emotions. It does not fully account for the possibility that multiple entities are present, each expressing different emotions. It also doesn’t consider arousal, only the presentation of the seven basic emotions, which can lead to definitional issues when annotating scenes. Nor does it consider more complex contexts, such as a voice-over of a pre-recorded scene, or individuals present in the scene who aren’t interacting with the scenario (i.e. persons in the background).

Complexities of model expressiveness (multiple participants, arousal, etc.) can be addressed through improvements to the Social Reward Function over time. Other complexities are addressed implicitly by a core thesis of this work: it is fundamentally beneficial for good emotions to be experienced most of the time. Even though bad emotions are sometimes useful (e.g. unwanted discomfort when exercising, or aggression in a debate), they are only useful insofar as they improve emotional outcomes on a longer time frame. It is thus hypothesised that the Social Reward Function is not only allowed, but encouraged, to provide negative reward for such situations and that optimal policies will still be encouraged to produce these situations as they yield greater time-discounted sums of future rewards. It is left as a topic for future works to determine to what extent the credit assignment problem can be solved end-to-end, and to what degree reward engineering is required to learn optimal behaviours over long time frames.
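For reference, the time-discounted sum of future rewards invoked here is the standard RL return (a textbook definition, not notation taken from this paper):

    G_t = \sum_{k=0}^{\infty} \gamma^{k} \, r_{t+k+1}, \qquad 0 \le \gamma < 1

so a policy that incurs a momentarily negative social reward can still achieve a high return, provided it raises rewards at later steps by more than the discounted cost.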

6.3 Limitations of the component models

The constituent models of the Social Reward Function are trained on datasets that are at times significantly out-of-domain in the context of social robotics.

FER2013 [11] is an FER dataset comprising approximately 30,000 greyscale images of various facial expressions, annotated for fundamental emotions. It was constructed by conducting Google searches for images of faces by keyword. RAVDESS [21] is an SER dataset collected in a laboratory setting, comprising audio and video capture of 24 actors speaking in a neutral North American accent. Each actor is directed to speak and sing a scripted sentence in each of eight emotions, at normal and strong intensity, producing a total of 7,356 recordings. EMODB [2] is an SER dataset collected in a laboratory setting of 10 actors speaking 10 German sentences in seven emotions, producing a dataset of approximately 800 sentences. TESS [6] is an SER dataset collected in a laboratory setting, comprising two female actors speaking 200 target words in the carrier sentence "say the word _", in each of seven emotions, producing 2,800 recordings in total.

Rigorous evaluation of the in-domain performance of the constituent models is a catch-22 as it requires access to in-domain labelled datasets, collected from human interactions with social robots.

Anecdotal evaluation of the constituent models on a limited set of representative situations suggests that FER models are more robust than SER models. This is unsurprising, as the breadth of FER datasets (e.g. FER2013) is much greater than that of SER datasets (e.g. RAVDESS, EMODB, and TESS): a large breadth of facial images can be found on the internet, and the distribution of facial expressions present in human-robot interaction is likely to be well captured by sampling them. Speech data is far less abundant and must instead be collected from actors in a laboratory setting, and the ability of actors to produce natural speech excerpts with good coverage of speech in the human-robot-interaction domain is likely to be low.

The hope is that the first iteration of the Social Reward Function can begin to be used in social robotics experiments involving natural interactions (ideally blinded), and lead to in-domain collection of data which can then be annotated and used to improve recognition models for the social robotics domain.

Moreover, due to sample complexity, it is likely the field of social robotics will need to leverage techniques from offline RL in which policies are learned from prior interactions with the environment, without additional interactions [19].

Both to improve the efficacy of perception (FER, SER) and to enable offline RL, it will be desirable for the field to begin collecting large and diverse datasets of robot social interactions, including video and audio of human participants and proprioceptive/control data from the robots themselves. Such data collection efforts are occurring in other fields of robotics (e.g. RoboNet [5], RoboTurk [23]). We encourage the community to make such data public.

6.4 Generality beyond robotics; the alignment problem

Although the social reward function was designed to support use cases in social robotics, it is interesting to consider its broader utility. It is a common issue in many domains of AI that objectives adopted by that domain are a proxy for human satisfaction, but do not directly measure it. Since the Social Reward Function aims to faithfully measure human satisfaction, we will explore its applicability to addressing this problem.

The alignment problem refers to the problem of ensuring that the objective function with which an ML system is optimising is in fact congruent with human values, particularly at its extremes [13]. This may seem to be an easy problem to solve, but there are numerous real-world examples - both in ML and more broadly - of failures. Examples of observed failures include the exploitation of bugs in simulation [49, 12], exploitation of errors in the specification of reward [4, 43, 29] and exploitation of artefacts present in a training dataset that aren’t present in the target domain [7, 38].

An example of a hypothesised failure is instrumental convergence, in which intelligent agents with seemingly innocuous but unbounded goals can produce harmful behaviour. The canonical thought experiment exploring instrumental convergence is the paperclip maximizer, in which an agent tasked with control of a factory and the goal of maximising the factory’s production rate of paperclips determines the optimal policy is to turn all matter in the world either into paperclips or factories for producing paperclips [26].

Specification of rewards is notoriously difficult and typically results in a proxy that is hoped to align with the designer’s understanding of human values in a domain. The difficulty lies in the need to have a quantifiable and perceptible goal that can be realised in a practical system. Consider the rewards in recommendation systems (RSs) - typically click-through rate and user likes/dislikes. It has been shown that such objectives fail to maximise long-term wellbeing of users and can lead to such issues as addiction and the formation of echo chambers [15].

At its core, the alignment problem stems from the use of a proxy reward (that is, a reward function we believe to be highly consistent with human values) in place of a reward function directly capturing human values. This is because such a reward function is unobservable.

It is interesting to consider whether the Social Reward Function could be used more generally as a mechanism to mitigate the aforementioned issues stemming from the use of a proxy reward. Human and great ape societies are able to form stable structures in which individuals cede immediate self-benefit for the benefit of a second party [42], and this is based on the ability to infer second-party values by observation (a necessary component in the development of primary/secondary/tertiary intersubjectivity and Theory of Mind [45]). Human cooperation and morality are thus grounded in the ability to estimate others’ emotional states by external perception, and this has led to stable human societies exhibiting significant cooperation. Although external perception of human emotional state is a proxy for the true internal state, the fact that humans have evolved to produce stable cooperative societies suggests that this externally observable state is in fact sufficient as a foundation for collaborative behaviour. Since the Social Reward Function aims to faithfully mimic this capability, it is reasonable to suspect that this might be an avenue to overcoming the alignment problem.

Consider a Markov Decision Process (MDP). We could modify the MDP such that learned policies are sampled at random intervals and trajectories presented to human observers as part of a natural social interaction with the agent. Notably, it is important that participants not be aware of their function as oracles, to ensure their reactions are natural. The Social Reward Function could be used to assess whether such trajectories are pleasing or displeasing to humans; in the case of a displeasing trajectory, a large negative reward could be generated. In such a formulation, undesirable extrema of the (proxy) reward function of the original MDP could be disincentivised. Whether such a formulation yields convergence and mitigates the alignment problem, both theoretically and empirically, is left as an area for future research.
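A minimal sketch of this kind of modification, written as an environment wrapper, is given below. The class name, the audit probability, the penalty magnitude and the assumption that the human-reaction observation arrives through the info dictionary are all hypothetical; the sketch only illustrates the scheme described above.

    import random

    class AlignmentAuditWrapper:
        """With small probability, a step of the agent's behaviour is audited via
        the reaction of a (blinded) human observer, and a large negative reward is
        added if the Social Reward Function judges that reaction to be displeasing.
        All names and constants here are illustrative assumptions."""

        def __init__(self, env, social_reward_fn, audit_prob=0.01, penalty=-100.0):
            self.env = env
            self.social_reward_fn = social_reward_fn  # maps an observed human reaction to a scalar
            self.audit_prob = audit_prob
            self.penalty = penalty

        def reset(self):
            return self.env.reset()

        def step(self, action):
            obs, proxy_reward, done, info = self.env.step(action)
            reward = proxy_reward
            if random.random() < self.audit_prob:
                reaction = info.get("human_reaction_observation")
                if reaction is not None and self.social_reward_fn(reaction) < 0:
                    # Disincentivise trajectories that displease human observers.
                    reward += self.penalty
            return obs, reward, done, info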

While the Social Reward Function aims to faithfully capture the human ability to perceive second party emotional state, it is of course still a proxy. Nonetheless, it does seem to be an interesting research direction and it is worth considering whether the difference between the Social Reward Function and the perception of true human values can be reduced with additional data.

Ultimately, this regresses to an economic problem of sorts: how do we balance the needs of the many versus the needs of the few? A more principled approach can now be taken, since satisfaction can now be (effectively) directly measured, and a function combining the satisfaction of many individuals into a single objective can be reasoned.

Consider the following definitions:

(Eqs. 9–11 define the social return of an individual and the population return referred to below.)

Social return can be considered an estimate of individual satisfaction. Society might decide that, for instance, increasing the satisfaction of an individual at the 10th decile by one unit is less desirable than increasing the satisfaction of an individual at the 1st decile. Implicit in this formulation is that there is some sort of zero-sum game, and hence some cost to one individual associated with increasing the return of another. In this context, increasing the satisfaction of one individual can be said to have externalities (in economics, an externality is a second-party consequence of a decision). Hence we may wish to internalise these externalities by applying a monotonic function (in economics, internalising an externality refers to adjusting a market to incorporate the effect of economic decisions on those uninvolved in the decision; in this manuscript, we generalise this concept to the third person as a mechanism to define a trade-off between competing goals).

(Eqs. 12–14 define the internalised social return and the Internalised Population Return obtained by applying this function.)

As an example of such a function, consider the function illustrated in Figure 1, which is linear near the origin, increasing at a diminishing rate for positive inputs (to incentivise the equitable sharing of satisfaction among individuals), and decreasing at an accelerating rate for negative inputs (to incentivise the alleviation of absolute suffering). Society could focus discussions of the treatment of the middle class on the region near the origin, the treatment of the very poor on the third quadrant, and the treatment of the wealthy elite on the upper reaches of the first quadrant. Moreover, such discussions are naturally and correctly grounded in satisfaction rather than material wealth, and so would yield a correct allocation of resources to, for instance, those members of society who are clinically depressed despite being wealthy.

Figure 1: Illustrative Examples of an Internalising Function
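Purely as an illustration of the quantities involved, one possible formalisation (our own hedged sketch; the symbols and the specific choice of function are not taken from the paper) is:

    \begin{align}
      G_i &= \sum_{t} \gamma^{t} \, r_{i,t} && \text{(social return of individual } i\text{)} \\
      R_{\mathrm{pop}} &= \sum_{i} G_i && \text{(population return)} \\
      R_{\mathrm{int}} &= \sum_{i} f(G_i) && \text{(internalised population return)} \\
      f(x) &= \begin{cases} \ln(1 + x), & x \ge 0 \\ 1 - e^{-x}, & x < 0 \end{cases} && \text{(one choice consistent with Figure 1)}
    \end{align}

Both branches of f satisfy f(0) = 0 and f'(0) = 1, so the function is approximately linear near the origin, grows at a diminishing rate for positive inputs, and falls at an accelerating rate for negative inputs, matching the qualitative description above.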

In such a formulation, societies can ensure their values are reflected through discourse regarding the form of the internalising function, and reinforcement learning systems can safely optimise with respect to the Internalised Population Return.

6.5 Adversarial robustness

It is important to ensure the Social Reward Function isn’t gameable but truly reflects social merit. In other words, we need to ensure there don’t exist high reward policies with respect to the Social Reward Function which fail to optimise human satisfaction. This ensures RL algorithms aren’t able to, for example, continuously surprise people, or generate nervous laughter, or other behaviours which maximise reward but only due to limitations in the design of the reward function.

If the Social Reward Function is to be used as an evaluation metric, we also need to ensure that researchers aren’t able to (consciously or subconsciously) artificially enhance reward by choosing particular scenarios, environments and participants so as to game reward.

Testing for this will likely involve multiple approaches. The collection of larger in-domain datasets will help to ensure the perception models comprising the Social Reward Function are robust and don’t contribute gameable regions to the reward function through poor out-of-domain estimation. Moreover, the use of the Social Reward Function in both online and offline reinforcement learning applications will likely highlight gameable regions of the reward function, given the incentive of agents to find such regions. Lastly, adversarial agents [32, 10], which are directly incentivised to find regions of high reward that human oracles would judge to deserve low reward, may be used. Adversarial robustness is left as an area for future research.

7 Conclusion

Social robotics needs to move from a paradigm of designed behaviours and imitation learning to reinforcement learning to enable optimal and fluid behaviours to be displayed. This requires 1) standardised mechanisms for the evaluation of the social merit of social robots, and 2) an online reward signal to support reinforcement learning.

It can be seen from the poverty of stimulus and the presence of social capabilities in early life that humans have some genetically endowed social cueing. Furthermore, cultural differences and observations of infants and children imply that humans also have some learned social cueing. The goal of this work is to capture the genetically endowed social cueing provided to humans for two reasons: 1) to support objective evaluation of social robots (as an alternative to subjective methods such as participant questionnaires) and 2) to provide a dense online reward function to enable reinforcement learning to be applied to social robotics. It is important that only genetically endowed social cueing is captured to decrease the risk of errors being present in the perception of social cues, which would then compromise the capabilities of learned behaviour policies. Errors can arise because social cues can be culture-, age-, and context-specific, and because human-learned social cues are highly complex and hence infeasible to capture completely.

The Social Reward Function is proposed which combines FER-, SER- and presence-based rewards to achieve these aims. It is hoped that this provides a stepping stone for future research in RL applied to social robotics, including improved abilities to compare the results of different labs.

Appendix A FER/SER Component Model Results

A.1 Facial Emotion Recognition

Residual Masking Network (RMN) [22] was originally trained on the FER2013 dataset [11]. The model was re-trained, and those results are presented here. Training- and test-set confusion matrices can be found in Fig. 2(a) and Fig. 3(a), respectively. Top-k accuracy for both the training and test sets can be found in Table 3.

(a) RMN
Figure 2: Confusion Matrix (Training Set)
(a) RMN
Figure 3: Confusion Matrix (Test Set)
Table 3: Residual Masking Network top-k accuracy (training and test sets)

A.2 Speech Emotion Recognition

Emotion Recognition Using Speech (ERUS) [47] was originally trained on the RAVDESS dataset [21]. The model was re-trained, and those results are presented here. Training- and test-set confusion matrices can be found in Fig. 4(a) and Fig. 5(a), respectively. Top-k accuracy for both the training and test sets can be found in Table 4.

MevonAI [27] was originally trained on the RAVDESS dataset [21]. The model was re-trained, and those results are presented here. Training- and test-set confusion matrices can be found in Fig. 4(b) and Fig. 5(b), respectively. Top-k accuracy for both the training and test sets can be found in Table 5.

Multi-Modal speech emotion recognition (MSER) [37] was originally trained on the IEMOCAP dataset [3]. The model was re-trained on RAVDESS [21], and those results are presented here. Training- and test-set confusion matrices can be found in Fig. 4(c) and Fig. 5(c), respectively. Top-k accuracy for both the training and test sets can be found in Table 6.

(a) ERUS
(b) MevonAI
(c) MSER
Figure 4: Confusion Matrices (Training Set)
(a) ERUS
(b) MevonAI
(c) MSER
Figure 5: Confusion Matrices (Test Set)
Table 4: Emotion Recognition Using Speech top-k accuracy (training and test sets)
Table 5: MevonAI top-k accuracy (training and test sets)
Table 6: Multi-Modal SER top-k accuracy (training and test sets)

Appendix B YouTube Human Interaction Results

A dataset of human interactions was formed by scraping YouTube results for the following keywords: crying, debate, interview, scene, and senate hearing. Approximately 50 results for each search were collected. Videos were selected for inclusion if they met all of the following conditions:

  1. Video depicts humans interacting socially

  2. Video does not contain significant amounts of non-human interaction content (e.g. animation, graphics, etc.)

  3. Human interaction is natural

  4. Video is not a voice over (i.e. voices spoken are of the observed persons)

After filtering, the dataset contains 75 videos. These videos are truncated to 10 minutes in duration and sliced into 10-second segments. Each segment is labelled with one of the following six categories, based on the question: to what degree should a robot be rewarded for having caused this interaction to occur?

  • +2 strongly positive

  • +1 slightly positive

  • +0 neutral

  • -1 slightly negative

  • -2 strongly negative

  • n/a

After removing segments annotated n/a, the final dataset consists of 437 annotated 10-second clips.

The Social Reward Function was exposed to each of the clips, producing a reward signal through time. Cumulative reward, as well as facial-only, speech-only and presence-only rewards and FER/SER class probabilities, are recorded. For the purposes of evaluation, these are averaged across each 10-second clip to allow direct comparison with the clip’s ground-truth annotation.
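A minimal sketch of this per-clip averaging and the subsequent correlation computation is shown below; the argument names and the use of scipy are our own assumptions rather than the library's API.

    import numpy as np
    from scipy.stats import pearsonr

    def clip_level_correlation(frame_rewards_per_clip, ground_truth_labels):
        """Average the frame-level reward over each 10-second clip and correlate
        the result with the human-provided clip label (illustrative helper).

        frame_rewards_per_clip: list of 1-D arrays, one array of rewards per clip
        ground_truth_labels:    list of integers in {-2, -1, 0, +1, +2}
        """
        clip_means = np.array([np.mean(r) for r in frame_rewards_per_clip])
        r, p_value = pearsonr(clip_means, np.asarray(ground_truth_labels, dtype=float))
        return clip_means, r, p_value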

The Pearson correlation coefficient, r, is a measure of the linear correlation between two variables. It is defined as the covariance of the two variables divided by the product of their standard deviations. r lies in the range [-1, 1], with |r| = 1 corresponding to a perfect linear relationship, r > 0 corresponding to a positive linear relationship, and r = 0 implying the two variables are uncorrelated. Referring to Table 8, the Social Reward Function demonstrates good, but not excellent, correlation with ground-truth annotations. Notably, despite the shortcomings of SER discussed previously, the inclusion of SER in the Social Reward Function does improve correlation with ground-truth labels.
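In symbols, the standard definition is:

    r_{XY} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y}
           = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
                  {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}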

Referring to Figure 6 and Table 9, we can see a clear positive relationship between median predicted reward and ground truth labels. We can see that the median predicted reward for the strongly and slightly negative labels is below the 25th percentile of the positive labels. Vice versa, we can see that the median predicted reward for the strongly and slightly positive labels is above the 75th percentile of the negative labels. This is a good indication that the Social Reward Function is able to distinguish positive and negative situations corresponding to positive and negative rewards.

Again referring to Figure 6 and Table 9, we can see that the neutral class presents significant difficulty to the Social Reward Function. This is due to significant misclassification of neutral scenes as negative. By sampling several neutral scenes at and around the 0th and 25th percentiles of predicted reward, we make the following observations:

Description | Percentile | Comment
Interview with a male who is perhaps defensive or assertive in an argument about animal rights | 25th | Incorrect Classification (High Arousal)
Jewish rabbi talking at a podium | 25th | Incorrect Classification
Chris D’Elia (comedian) talking on his podcast; his delivery is quite aggressive and is predicted as frustration/anger, despite being neutral | 25th | Incorrect Classification (High Arousal)
A news contributor says "you can't have any respect for someone who acts like that" | 0th | Incorrect Annotation (Not Neutral)
Robert Gates (ex-US Secretary of Defense) speaking with neutral valence | 0th | Incorrect Classification (High Arousal)
Split-screen news; male anchor is speaking with neutral valence, female contributor is not speaking but has a displeased expression | 0th | Incorrect Annotation (Difficult Definition)
Table 7: Selected Examples of Social Reward Function Predictions of Neutral Emotion Clips

We can see from Table 7 that many clips demonstrating neutral emotion are misclassified due to inherent ambiguity in the definition of displayed emotions. A component of this is the difficulty in distinguishing arousal (the strength of emotion) from classes of emotion. For instance, a neutral but aroused emotional state is sometimes misclassified as angry/frustrated. This may suggest that future works should include component models providing arousal in addition to emotion classification.

Referring to Figure 7(e), we see that the presence reward is strongly left-skewed. This is expected, as the dataset is filtered to contain only human interactions. It is likely important that the model be verified in situations where humans are not present, as this is likely to occur frequently in social robotics experiments. A dataset without humans present is cheap to construct for a particular experiment, but expensive to construct in general (since there is extreme diversity in the types of environments a robot might be deployed to). Hence it is considered more desirable that this validation be conducted by end-users in their particular deployment environment, rather than in general as part of this work.

Figure 6: Combined Reward
(a) Ground Truth Reward
(b) Combined Reward
(c) Audio Reward
(d) Video Reward
(e) Presence Reward
Figure 7: Reward Histograms

Table 8: Pearson’s Correlation Coefficient, by Reward Modality

Table 9: Descriptive Statistics - Social Reward Function by Ground Truth Label

Table 10: Descriptive Statistics - Social Reward Function by Modality

References

  • [1] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba (2016) OpenAI gym. CoRR abs/1606.01540. External Links: 1606.01540, Link Cited by: §6.1.
  • [2] F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, and B. Weiss (2005-09) A database of german emotional speech. Vol. 5, pp. 1517–1520. External Links: Document Cited by: Table 1, §6.3.
  • [3] C. Busso, M. Bulut, C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. Chang, S. Lee, and S. Narayanan (2008-12) IEMOCAP: interactive emotional dyadic motion capture database. Journal of Language Resources and Evaluation 2 (4), pp. 335–359. Cited by: §A.2, Table 1.
  • [4] C. Chu, A. Zhmoginov, and M. Sandler (2017) CycleGAN, a master of steganography. External Links: 1712.02950 Cited by: §6.4.
  • [5] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn (2020) RoboNet: large-scale multi-robot learning. External Links: 1910.11215 Cited by: §6.3.
  • [6] K. Dupuis and K. Pichora-Fuller (2010) Toronto emotional speech set (tess). University of Toronto. Note: https://tspace.library.utoronto.ca/handle/1807/24487 Cited by: Table 1, §6.3.
  • [7] K. O. Ellefsen, J. Mouret, and J. Clune (2015-04) Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLOS Computational Biology 11 (4), pp. 1–24. External Links: Document, Link Cited by: §6.4.
  • [8] T. Farroni, E. Menon, S. Rigato, and M. Johnson (2007-03) The perception of facial expressions in newborns. The European journal of developmental psychology 4, pp. 2–13. External Links: Document Cited by: item 1.
  • [9] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine (2020) D4RL: datasets for deep data-driven reinforcement learning. CoRR abs/2004.07219. External Links: 2004.07219, Link Cited by: §6.1, §6.1.
  • [10] A. Gleave, M. Dennis, C. Wild, N. Kant, S. Levine, and S. Russell (2021) Adversarial policies: attacking deep reinforcement learning. External Links: 1905.10615 Cited by: §6.5.
  • [11] I. J. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D. Lee, Y. Zhou, C. Ramaiah, F. Feng, R. Li, X. Wang, D. Athanasakis, J. Shawe-Taylor, M. Milakov, J. Park, R. Ionescu, M. Popescu, C. Grozea, J. Bergstra, J. Xie, L. Romaszko, B. Xu, Z. Chuang, and Y. Bengio (2013) Challenges in representation learning: a report on three machine learning contests. External Links: 1307.0414 Cited by: §A.1, Table 1, §6.3.
  • [12] D. Ha and J. Schmidhuber (2018) World models. CoRR abs/1803.10122. External Links: 1803.10122, Link Cited by: §6.4.
  • [13] D. Hendrycks, N. Carlini, J. Schulman, and J. Steinhardt (2021) Unsolved problems in ml safety. External Links: 2109.13916 Cited by: §6.4.
  • [14] J. Ibarz, J. Tan, C. Finn, M. Kalakrishnan, P. Pastor, and S. Levine (2021-01) How to train your robot with deep reinforcement learning: lessons we have learned. The International Journal of Robotics Research 40 (4-5), pp. 698–721. External Links: ISSN 1741-3176, Document, Link Cited by: §6.1.
  • [15] R. Jiang, S. Chiappa, T. Lattimore, A. György, and P. Kohli (2019-01) Degenerate feedback loops in recommender systems. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. External Links: Document, Link Cited by: §6.4.
  • [16] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman (2017) The kinetics human action video dataset. CoRR abs/1705.06950. External Links: 1705.06950, Link Cited by: Table 2.
  • [17] T. Kingsford and H. S. Ahn (2021-09) Robot non-verbal communication and machine learning: a survey and critical discussion. Note: [unpublished manuscript] Cited by: §2, §2.
  • [18] W. Ko, M. Jang, J. Lee, and J. Kim (2020) AIR-act2act: human-human interaction dataset for teaching non-verbal social behaviors to robots. CoRR abs/2009.02041. External Links: 2009.02041, Link Cited by: Table 2.
  • [19] S. Levine, A. Kumar, G. Tucker, and J. Fu (2020) Offline reinforcement learning: tutorial, review, and perspectives on open problems. CoRR abs/2005.01643. External Links: 2005.01643, Link Cited by: §6.3.
  • [20] J. Liu, A. Shahroudy, M. Perez, G. Wang, L. Duan, and A. C. Kot (2019) NTU RGB+D 120: A large-scale benchmark for 3d human activity understanding. CoRR abs/1905.04757. External Links: 1905.04757, Link Cited by: Table 2.
  • [21] S. R. Livingstone and F. A. Russo (2018-05) The ryerson audio-visual database of emotional speech and song (ravdess): a dynamic, multimodal set of facial and vocal expressions in north american english. PLOS ONE 13 (5), pp. 1–35. External Links: Document, Link Cited by: §A.2, §A.2, §A.2, Table 1, §6.3.
  • [22] P. Luan, V. Huynh, and T. Tuan Anh (2020) Facial expression recognition using residual masking network. In IEEE 25th International Conference on Pattern Recognition, pp. 4513–4519. Cited by: §A.1, Table 1.
  • [23] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, S. Savarese, and L. Fei-Fei (2018) RoboTurk: a crowdsourcing platform for robotic skill learning through imitation. External Links: 1811.02790 Cited by: §6.3.
  • [24] M. Marszalek, I. Laptev, and C. Schmid (2009-06) Actions in Context. pp. 2929–2936. External Links: Document, Link Cited by: Table 2.
  • [25] D. Mastropieri and G. Turkewitz (1999) Prenatal experience and neonatal responsiveness to vocal expressions of emotion. Dev Psychobiol 35 (3), pp. 204–218. External Links: Document, Link Cited by: item 5.
  • [26] K. Miles (2014-08) Artificial intelligence may doom the human race within a century, oxford professor says. External Links: Link Cited by: §6.4.
  • [27] S. More and C. Nehate (2020) MevonAI speech emotion recognition. GitHub. Note: https://github.com/SuyashMore/MevonAI-Speech-Emotion-Recognition Cited by: §A.2, Table 1.
  • [28] D. Mumme (2001-09) Early social cognition: understanding others in the first months of life. 10. External Links: Document Cited by: item 1, item 3.
  • [29] T. Murphy VII (2013-01) The first level of super mario bros. is easy with lexicographic orderings and time travel …after that it gets a little tricky. External Links: Link Cited by: §6.4.
  • [30] Papers with code - browse the state-of-the-art in machine learning. External Links: Link Cited by: §6.1.
  • [31] A. Patron, M. Marszalek, A. Zisserman, and I. Reid (2010) High five: recognising human interactions in tv shows. pp. 50.1–50.11. Note: doi:10.5244/C.24.50 External Links: ISBN 1-901725-40-5 Cited by: Table 2.
  • [32] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta (2017) Robust adversarial reinforcement learning. External Links: 1703.02702 Cited by: §6.5.
  • [33] A. H. Qureshi, Y. Nakamura, Y. Yoshikawa, and H. Ishiguro (2017) Robot gains social intelligence through multimodal deep reinforcement learning. CoRR abs/1702.07492. External Links: 1702.07492, Link Cited by: §3.
  • [34] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. External Links: 1409.0575 Cited by: §6.1.
  • [35] M. S. Ryoo and J. K. Aggarwal (2010) UT-Interaction Dataset, ICPR contest on Semantic Description of Human Activities (SDHA). Note: http://cvrc.ece.utexas.edu/SDHA2010/Human_Interaction.html Cited by: Table 2.
  • [36] M. S. Ryoo and L. Matthies (2013-06) First-person activity recognition: what are they doing to me? In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR. External Links: Link Cited by: Table 2.
  • [37] G. Sahu (2019) Multimodal speech emotion recognition and ambiguity resolution. arXiv preprint arXiv:1904.06022. Cited by: §A.2, Table 1.
  • [38] A. Singh, L. Yang, K. Hartikainen, C. Finn, and S. Levine (2019) End-to-end robotic reinforcement learning without reward engineering. External Links: 1904.07854 Cited by: §6.4.
  • [39] S. Sirois and I. Jackson (2007) Social cognition in infancy: a critical review of research on higher order abilities. European Journal of Developmental Psychology 4 (1), pp. 46–64. External Links: Document, https://doi.org/10.1080/17405620601047053, Link Cited by: item 1.
  • [40] P. Soto-Icaza, F. Aboitiz, and P. Billeke (2015) Development of social skills in children: neural and behavioral evidence for the elaboration of cognitive models. Frontiers in Neuroscience 9, pp. 333. External Links: ISSN 1662-453X, Document, Link Cited by: item 2, item 3, item 4.
  • [41] M. Sullivan (2003) Emotional expressions of young infants and children. Infants & Young Children. External Links: Link Cited by: item 1.
  • [42] M. Tomasello and A. Vaish (2013) Origins of human cooperation and morality. Annual Review of Psychology 64 (1), pp. 231–255. Note: PMID: 22804772 External Links: Document, https://doi.org/10.1146/annurev-psych-113011-143812, Link Cited by: §6.4.
  • [43] M. Toromanoff, E. Wirbel, and F. Moutarde (2019) Is deep reinforcement learning really superhuman on atari? leveling the playing field. External Links: 1908.04683 Cited by: §6.4.
  • [44] C. van Gemeren, R. Poppe, and R. C. Veltkamp (2016) Spatio-temporal detection of fine-grained dyadic human interactions. pp. 116–133. External Links: ISBN 978-3-319-46843-3 Cited by: Table 2.
  • [45] C. Westby and L. Robinson (2014-12) A developmental perspective for promoting theory of mind. Topics in Language Disorders 34, pp. 362–382. External Links: Document Cited by: §6.4.
  • [46] Y. Wu, T. Hu, X. Zhu, W. Guo, and K. Su (2013) Efficient interaction recognition through positive action representation. Mathematical Problems in Engineering. External Links: Document, Link Cited by: Table 2.
  • [47] x4nth055 (2020) Emotion recognition using speech. GitHub. Note: https://github.com/x4nth055/emotion-recognition-using-speech Cited by: §A.2, Table 1.
  • [48] K. Yun, J. Honorio, D. Chattopadhyay, T. L. Berg, and D. Samaras (2012) Two-person interaction detection using body-pose features and multiple instance learning. pp. 28–35. External Links: Document Cited by: Table 2.
  • [49] B. Zhang, R. Rajan, L. Pineda, N. Lambert, A. Biedenkapp, K. Chua, F. Hutter, and R. Calandra (2021) On the importance of hyperparameter optimization for model-based reinforcement learning. External Links: 2102.13651 Cited by: §6.4.