A Personalized Affective Memory Neural Model for Improving Emotion Recognition

04/23/2019 ∙ by Pablo Barros, et al.

Recent models of emotion recognition strongly rely on supervised deep learning solutions for the distinction of general emotion expressions. However, they are not reliable when recognizing online and personalized facial expressions, e.g., for person-specific affective understanding. In this paper, we present a neural model based on a conditional adversarial autoencoder to learn how to represent and edit general emotion expressions. We then propose Grow-When-Required networks as personalized affective memories to learn individualized aspects of emotion expressions. Our model achieves state-of-the-art performance on emotion recognition when evaluated on in-the-wild datasets. Furthermore, our experiments include ablation studies and neural visualizations in order to explain the behavior of our model.


1 Introduction

Automatic facial expression recognition has become a popular topic in recent years due to the success of deep learning techniques. What was once the realm of specialists adapting hand-crafted descriptors of facial representations (Zhao et al., 2003) became one of many visual recognition tasks for deep learning enthusiasts (Schmidhuber, 2015). The result has been a continuous improvement in the performance of automatic emotion recognition over the last decade (Sariyanidi et al., 2015), reflecting the progress of both hardware and deep learning solutions.

However, the performance of deep learning models for facial expression recognition has recently stagnated (Soleymani et al., 2017; Mehta et al., 2018). The most common cause is the strong dependence of such solutions on balanced, strongly labeled, and diverse data. Recent approaches address these problems by introducing transfer learning techniques (Ng et al., 2015; Kaya et al., 2017), neural activation and data distribution regularization (Ding et al., 2017b; Pons & Masip, 2018), and unsupervised learning of facial representations (Zen et al., 2014; Kahou et al., 2016). Most of these models improve performance when evaluated on specific datasets. Still, most of them require strongly supervised training, which may bias their capacity to generalize emotional representations. Recent neural models based on adversarial learning (Kim et al., 2017; Saha et al., 2018) overcome the need for strongly labeled data. Such models learn facial representations in an unsupervised manner and present competitive emotion recognition performance compared to strongly supervised models.

Furthermore, it is very hard to adapt end-to-end deep learning models to different emotion recognition scenarios, in particular real-world applications, mostly due to very costly training processes. As soon as these models need to learn a novel emotion representation, they must be re-trained or re-designed. Such problems arise from treating facial expression recognition as a typical computer vision task instead of adapting the solutions to the specific characteristics of facial expressions (Adolphs, 2002).

We address the problem of learning adaptable emotional representations by focusing on improving emotion expression recognition based on two facial perception characteristics: the diversity of human emotional expressions and the personalization of affective understanding. A person can express happiness by smiling, laughing, and/or moving the eyes, depending on whom they are talking to, for example (Mayer et al., 1990). The diversity of emotion expressions becomes even more complex when we take into consideration inter-personal characteristics such as cultural background, intrinsic mood, personality, and even genetics (Russell, 2017). Besides diversity, the ability to learn personalized expressions is also very important. When humans already know a specific person, they can adapt their own perception to how that person expresses emotions (Hamann & Canli, 2004). The interplay between recognizing an individual's facial characteristics, i.e., how they move certain facial muscles, mouth and eye positions, blushing intensity, and clustering them into emotional states is key to modeling a human-like emotion recognition system (Sprengelmeyer et al., 1998).

The problem of learning diversity in facial expressions has been addressed by the recent development of so-called in-the-wild emotion expression datasets (Zadeh et al., 2016; Mollahosseini et al., 2017; Dhall et al., 2018). These corpora make use of a large amount of data to increase the variability of emotion representations. Although deep learning models trained with these corpora improved the performance on different emotion recognition tasks (de Bittencourt Zavan et al., 2017; Kollias & Zafeiriou, 2018), they still suffer from a lack of adaptability to personalized expressions.

Unsupervised clustering and dynamic adaptation have been explored to address the personalization of emotion expression recognition (Chen et al., 2014; Zen et al., 2014; Valenza et al., 2014). Most recent solutions use a collection of different emotion stimuli (e.g., EEG, visual, and auditory) and neural architectures, but the principle is the same: to train a recognition model with data from one individual to improve affective representation. Although performing well in specific benchmark evaluations, these models cannot adapt to the online and continuous learning scenarios found in real-world applications.

The affective memory model (Barros & Wermter, 2017) addresses the problem of continuous adaptation by proposing a Growing-When-Required (GWR) network to learn emotional representation clusters from one specific person. One affective memory is created per person and populated with representations from the last layer of a convolutional neural network (CNN). The model creates prototypical neurons that represent each of the perceived facial expressions. Although it improves emotion expression recognition compared to state-of-the-art models, it is not reliable in online scenarios: prototype neurons are formed only after a certain expression has been perceived, and the model relies heavily on a supervised CNN, so it only performs well once all possible emotion expressions have been perceived.

In this paper, we propose a personalized affective memory framework which improves the original affective memory model with respect to online emotion recognition. At the beginning of an interaction, humans tend to rely heavily on their own prior knowledge to understand facial expressions, prior knowledge learned over multiple episodes of interaction (Nook et al., 2015). We propose a novel generative adversarial autoencoder to learn this prior knowledge of emotional representations. We adapt the model to transfer the learned representations to unknown persons by generating edited faces with controllable emotion expressions (Huang et al., 2018; Wang et al., 2018). We use the generated faces of a single person to initialize a personalized affective memory, then update the memory with the perceived faces in an online manner and use it to recognize that person's facial expressions.

We evaluate our model in two steps: first, how well it learns a general representation of emotion expressions, using the AffectNet dataset (Mollahosseini et al., 2017); and second, how the novel personalized affective memory performs when recognizing emotion expressions of specific persons, using the continuous expressions in the OMG-Emotion dataset (Barros et al., 2018). We compare our model with state-of-the-art solutions on both datasets. Besides performance alone, we also provide an exploratory study of how our model works based on its neural activities, using activation visualization techniques.

2 The Personalized Affective Memory

The Personalized Affective Memory (P-AffMem) model is composed of two modules: the prior-knowledge learning module (PK) and the affective memory module. The prior-knowledge learning module consists of an adversarial autoencoder that learns facial expression representations ($z$) and generates images conditioned on controllable emotional information, represented as continuous arousal and valence values ($c$).

After the autoencoder is trained, we use it to generate a series of edited images of a person with different expressions, using different values of $c$. We use this collection of generated faces to initialize a growing-when-required (GWR) network, which represents the personalized affective memory for that specific person. We use the clusters of the personalized affective memory to recognize the emotion expressions of that person. Figure 1 illustrates the topological details of the P-AffMem model.

Figure 1: The P-AffMem model is composed of the Prior-Knowledge Adversarial autoencoder (PK) and the affective memories. The PK implements an encoder/decoder architecture and specific discriminators to detail arousal and valence ($D_{av}$), to ensure a prior distribution on the encoded representation ($D_z$), and to guarantee a photo-realistic image generation ($D_{img}$). The affective memories are individual Growing-When-Required (GWR) networks which learn personalized aspects of the facial expression.

2.1 Prior-Knowledge Adversarial autoencoder

Our PK model is an extended version of the Conditional Adversarial Autoencoder (CAAE) (Zhang et al., 2017), which was developed to learn how to represent and edit age information in facial images. We chose the CAAE as the basis for our prior-knowledge module due to its capability of working without paired data (images of the same person at different ages) and its robustness against statistical variations in the input data distribution.

The PK model consists of an encoder and generator architecture ($E$ and $G$) and three discriminator networks. The first learns arousal/valence representations ($D_{av}$), the second guarantees that the facial expression representation has a uniform distribution ($D_z$), and the third assures that the generated image is photo-realistic and expresses the desired emotion ($D_{img}$). The model receives as input an image ($x$) and a continuous arousal/valence label ($c$). It produces a facial representation ($z$) and an edited image ($\hat{x}$) expressing the chosen arousal/valence $c$.

2.1.1 Encoder and Generator

The encoder network ($E$) consists of four convolution layers and one fully connected output layer. We use a convolution stride of 2 to avoid pooling, letting the network learn its own spatial down-sampling (Radford et al., 2015). $E$ receives as input an RGB image and outputs the facial expression representation ($z$). $z$ is concatenated with the desired emotional representation ($c$) and used as input to the decoder network ($G$), which generates an image ($\hat{x}$) with the desired emotional expression $c$. The decoder is composed of six transposed convolution layers; the first four have a stride of 2, while the last two have a stride of 1. All the convolutions and transposed convolutions of $E$ and $G$ use a kernel size of 5x5, and all layers of $E$ and $G$ use ReLU activation functions. Both $E$ and $G$ are trained with an image reconstruction loss ($\mathcal{L}_{rec}$):

$\mathcal{L}_{rec} = \mathrm{MAE}\big(x, G(E(x), c)\big)$ (1)

where MAE is the mean absolute error.

One of the problems of the CAAE is the appearance of ghost artifacts in the reconstructed images. The CAAE authors address this problem using a total variation minimization loss function (Mahendran & Vedaldi, 2015), but with questionable results. We instead apply the identity-preserving loss ($\mathcal{L}_{id}$) used by ExprGAN (Ding et al., 2017a) and G2-GAN (Song et al., 2017) on the reconstructed image. The identity-preserving loss is computed using a pre-trained VGG Face model (Parkhi et al., 2015), a topological update of the VGG16 model trained to identify persons by their faces. To compute $\mathcal{L}_{id}$, we compare the activations of the first five convolutional layers of VGG Face for $x$ and $\hat{x}$:

$\mathcal{L}_{id} = \sum_{l=1}^{5} \mathrm{MAE}\big(\phi_l(x), \phi_l(\hat{x})\big)$ (2)

where $\phi_l$ is the activation of the $l$-th layer of VGG Face.
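The layer-wise comparison can be sketched as follows (a minimal NumPy sketch, assuming the per-layer activations of VGG Face have already been extracted as arrays; function and argument names are illustrative):

```python
import numpy as np

def identity_preserving_loss(acts_real, acts_edited):
    """Sum of the mean absolute differences between the activations of the
    first five convolutional layers for the real and the edited face.
    acts_real / acts_edited: lists of NumPy arrays, one entry per layer."""
    return sum(np.mean(np.abs(a - b))
               for a, b in zip(acts_real[:5], acts_edited[:5]))
```

In practice the two activation lists would come from two forward passes of the frozen VGG Face network, one on $x$ and one on $\hat{x}$.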

2.1.2 Arousal/Valence Representation Discriminator

The arousal/valence representation discriminator ($D_{av}$) enforces the encoder to learn facial representations which can be used in an emotion recognition task. This discriminator is important to guarantee that $z$ is suitable to represent emotion expressions, as $z$ will be used to populate the affective memory. $D_{av}$ consists of two fully connected hidden layers, the first implementing a ReLU activation function and the last a linear function. The last layer is used as output to represent continuous values of arousal and valence. The loss is calculated as:

$\mathcal{L}_{av} = \mathrm{MSE}(A, a) + \mathrm{MSE}(V, v)$ (3)

where MSE is the mean squared error, $A$ and $V$ represent the arousal and valence outputs of $D_{av}$, respectively, and $a$ and $v$ the arousal and valence values associated with the input image $x$.
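A minimal sketch of this loss (NumPy; names are illustrative, and it assumes the two MSE terms are simply summed):

```python
import numpy as np

def av_loss(pred_arousal, pred_valence, true_arousal, true_valence):
    """Sum of the mean-squared errors for arousal and for valence."""
    mse = lambda p, t: np.mean((np.asarray(p, float) - np.asarray(t, float)) ** 2)
    return mse(pred_arousal, true_arousal) + mse(pred_valence, true_valence)
```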

2.1.3 Uniform Distribution Discriminator

The uniform distribution discriminator ($D_z$) is imposed on $z$ and enforces it to be uniformly distributed. A uniformly distributed $z$ increases the generalization of the model when representing a facial expression. $D_z$ receives as input either $z$ itself or a sample randomly drawn from the uniform distribution, and its goal is to distinguish between them. $D_z$ is composed of four fully connected layers, each implementing a ReLU activation function. The adversarial loss function between $E$ and $D_z$ is defined as:

$\mathcal{L}_{z} = \mathbb{E}_{z^* \sim p(z)}\big[\log D_z(z^*)\big] + \mathbb{E}_{x \sim p_{data}(x)}\big[\log\big(1 - D_z(E(x))\big)\big]$ (4)

where $D_z(\cdot)$ is the likelihood, $p(z)$ the prior distribution imposed on the internal representation, and $p_{data}(x)$ the distribution of the training images.
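In its negative log-likelihood form, the discriminator side of this objective can be sketched as (NumPy; the function name and the stabilizing epsilon are illustrative):

```python
import numpy as np

def dz_adversarial_loss(d_prior, d_encoded):
    """Discriminator loss for the uniform-distribution discriminator:
    d_prior   -- D_z outputs on samples drawn from the uniform prior p(z)
    d_encoded -- D_z outputs on encoded representations z = E(x)."""
    eps = 1e-8  # numerical stability for log(0)
    return (-np.mean(np.log(d_prior + eps))
            - np.mean(np.log(1.0 - d_encoded + eps)))
```

Minimizing this loss for $D_z$ while the encoder is trained to fool it drives the encoded representations toward the uniform prior.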

2.1.4 Photo-realistic Image Discriminator

The photo-realistic image discriminator ($D_{img}$) is essentially the usual real/fake discriminator of a typical Generative Adversarial Network (GAN). In our case, it also serves as a mechanism to force the decoder ($G$) to produce photo-realistic images with the desired emotion expression ($c$). $D_{img}$ implements four convolution layers, with a stride of 2 and a kernel size of 5x5, followed by two fully connected layers. To enforce the desired arousal and valence ($c$) in the generated image ($\hat{x}$), we reshape and zero-pad $c$ and concatenate it with each convolutional layer of $D_{img}$. Concatenating $c$ to all the convolutional layers proved to be an important step in our experiments towards generating photo-realistic edited facial expressions. To enforce the desired $c$ in the $\hat{x}$ produced by $G$, we use the following adversarial loss function:

$\mathcal{L}_{img} = \mathbb{E}_{x, c \sim p_{data}(x, c)}\big[\log D_{img}(x, c)\big] + \mathbb{E}_{x, c \sim p_{data}(x, c)}\big[\log\big(1 - D_{img}(G(E(x), c), c)\big)\big]$ (5)

where $p_{data}(x, c)$ is the distribution of the training data.

2.1.5 Overall Loss Function

To train our model, we use an overall loss function defined as:

$\mathcal{L} = \lambda_{rec}\mathcal{L}_{rec} + \lambda_{id}\mathcal{L}_{id} + \lambda_{av}\mathcal{L}_{av} + \lambda_{z}\mathcal{L}_{z} + \lambda_{img}\mathcal{L}_{img}$ (6)

in which the coefficients $\lambda_{rec}$, $\lambda_{id}$, $\lambda_{av}$, $\lambda_{z}$, and $\lambda_{img}$ balance the facial expression discrimination, the high fidelity of the generated images, and the presence of the desired emotion in the generated image.
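As a trivial sketch of the weighted sum (the coefficient names are illustrative; the actual values come from the TPE optimization described in the parameter optimization section):

```python
def overall_loss(l_rec, l_id, l_av, l_z, l_img, lambdas):
    """Weighted sum of the five PK loss terms.
    lambdas: sequence of five balancing coefficients, one per term."""
    terms = (l_rec, l_id, l_av, l_z, l_img)
    return sum(lam * term for lam, term in zip(lambdas, terms))
```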

2.2 Affective Memory

Growing-When-Required (GWR) networks (Marsland et al., 2002) have recently been deployed to address the problem of continuous learning (Parisi et al., 2017, 2018). The capability of creating prototype neurons in an online manner allows the GWR to adapt quickly to changes in the input data, which makes it ideal for our online learning mechanism.

Each neuron of the GWR consists of a weight vector $w_j$ representing prototypical information of the input data. A newly perceived emotion expression will be associated with a best-matching unit (BMU) $b$, which is calculated by minimizing the distance between the facial expression and all the neurons in the GWR. Given a set of neurons $N$, the BMU $b$ with respect to the input $x$ is computed as:

$b = \arg\min_{j \in N} \lVert x - w_j \rVert$ (7)

New connections are created between the BMU and the second-best-matching unit with respect to the input. When a BMU is computed, all the neurons it is connected to are referred to as its topological neighbors. Each neuron is equipped with a habituation counter $h_i$ expressing how frequently it has fired, based on a simplified model of how the efficacy of a habituating synapse reduces over time.

The habituation rule is given by $\Delta h_i = \tau_i \cdot \kappa \cdot (1 - h_i) - \tau_i$, where $\tau_i$ and $\kappa$ are constants that control the decreasing behavior of the habituation counter (Marsland et al., 2002). To establish whether a neuron is habituated, its habituation counter must be smaller than a given habituation threshold $h_T$.

The network is initialized with two neurons and, at each learning iteration, it inserts a new neuron whenever the activity $a$ of the network, computed when an expression is perceived, for a habituated neuron is smaller than a given threshold $a_T$, i.e., a new neuron is created if $a < a_T$ and $h_b < h_T$.

The P-AffMem model uses GWRs as affective memories to perform personalized learning of prototype neurons. The GWR is created when the first facial expression of a person is perceived. The PK module generates 200 edited images with combinations of arousal and valence drawn from the interval [-1, 1] in increments of 0.01. These images are used to initialize the affective memory and act as transferred knowledge from the PK's initial estimations. To avoid the generated samples dominating the affective memory update over time, we stop generating samples after a certain number of expressions has been perceived.

To guarantee that the PK does not dominate the training of the affective memory over time, we use a novel update function that modulates the impact of the PK on the GWR. The training of the network is carried out by adapting the BMU according to:

$\Delta w_b = \epsilon \cdot h_b \cdot (1 - a) \cdot (x - w_b)$ (8)

where $\epsilon$ is a constant learning rate, $h_b$ the habituation counter of the BMU, and $a$ the activity of the network. If real faces are perceived, and they differ from the ones encoded by the PK, the activation of the network will be smaller and the impact on the weight update will be higher. The affective memory is thus encouraged to create new neurons to represent newly perceived expressions instead of the ones coming from the PK.

To allow the GWR to perform classification of emotion expressions, we implement associative labeling (Parisi et al., 2017) for each neuron. During the training phase, we assign to each neuron two continuous values representing arousal and valence. When training with an example that comes from the generated images, we update the labels of the BMU using the desired arousal and valence. The update is modulated by a labeling learning factor $\gamma$ which is defined during training. To categorize a newly perceived expression, we simply read the labels of its associated BMU.
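The affective-memory mechanics described above can be sketched as follows (a simplified NumPy sketch: the activity function, learning rates, and habituation decay are illustrative stand-ins, and the topological connections of the full GWR are omitted; the thresholds follow Table 1):

```python
import numpy as np

class AffectiveMemory:
    """Simplified GWR-based affective memory sketch. Prototype neurons carry
    a weight vector, a habituation counter, and an arousal/valence label."""

    def __init__(self, dim, a_t=0.4, h_t=0.2, eps=0.1, label_eps=0.4):
        self.a_t, self.h_t = a_t, h_t            # activity / habituation thresholds
        self.eps, self.label_eps = eps, label_eps
        self.w = np.zeros((2, dim))              # network starts with two neurons
        self.hab = np.ones(2)                    # habituation counters (1 = novel)
        self.labels = np.zeros((2, 2))           # arousal/valence label per neuron

    def bmu(self, x):
        """Best-matching unit index and network activity a = exp(-distance)."""
        d = np.linalg.norm(self.w - x, axis=1)
        return int(np.argmin(d)), float(np.exp(-d.min()))

    def train_step(self, x, label):
        b, a = self.bmu(x)
        if a < self.a_t and self.hab[b] < self.h_t:
            # low activity and an already-habituated BMU: grow a new prototype
            self.w = np.vstack([self.w, (self.w[b] + x) / 2.0])
            self.hab = np.append(self.hab, 1.0)
            self.labels = np.vstack([self.labels, np.asarray(label, float)])
        else:
            # move the BMU towards the input, modulated by its habituation
            self.w[b] += self.eps * self.hab[b] * (x - self.w[b])
            self.labels[b] += self.label_eps * (np.asarray(label, float) - self.labels[b])
            self.hab[b] *= 0.9                   # simplified habituation decay

    def predict(self, x):
        """Read the arousal/valence label of the BMU (associative labeling)."""
        return self.labels[self.bmu(x)[0]]
```

In the full model the memory would be seeded with PK-generated faces and their desired arousal/valence values before real expressions are fed in.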

2.3 Parameter Optimization

We trained the PK model using the overall loss (Eq. 6) and optimized the model's topology, coefficients, batch size, and training parameters using a TPE optimization strategy (Bergstra et al., 2013). The optimization was based on two criteria: the objective performance of the arousal/valence discriminator and the minimization of the overall loss. The batch size is 48. A normal distribution with mean 0 and standard deviation 0.02 is employed to initialize the weights of all layers, and all biases are initially set to 0. For optimization, the Adam optimizer is employed.

Parameter Value
Epochs 10
Activity threshold ($a_T$) 0.4
Habituation threshold ($h_T$) 0.2
Habituation modulation ($\tau_i$, $\kappa$) 0.087, 0.032
Labeling factor ($\gamma$) 0.4
Table 1: Training parameters of each affective memory.

We also use TPE optimization to tune the parameters of the affective memory. Although the memories are created individually, one per person, all of them share the same hyperparameters to simplify our evaluation task and allow the model to be used online. Given that the GWR adapts to the input data distribution, and we maintain the same data nature, we do not believe that fine-tuning each GWR to each individual person would yield a large gain in recognition performance. Table 1 displays the final affective memory parameters.

3 Experimental Setup

To better evaluate and understand the individual impact of the PK and the affective memory on the recognition of emotion expressions, we perform two types of experiments. First, we run a series of ablation studies to assess the contribution of each mechanism of the PK to general emotion recognition. Second, we run an emotion recognition experiment with the entire framework to assess the impact of personalization on emotion recognition performance.

3.1 Datasets

The AffectNet dataset (Mollahosseini et al., 2017) is composed of more than 1 million images with facial expressions collected from different in-the-wild sources, such as Youtube and Google Images. More than 400 thousand images were manually annotated with continuous arousal and valence. The dataset has its own separation between training, testing, and validation samples. The labels for the testing samples are not available, thus all our experiments are performed using the training and validation samples. The large labeled data distribution of AffectNet is important to guarantee that the PK learns general emotion recognition. It provides an ideal corpus to assess the impact of each proposed mechanism in the PK using an objectively comparable measure.

Furthermore, AffectNet does not discriminate between images from the same person, so personalization does not play an important role. To evaluate the contributions of personalization, we provide final experiments on the One-Minute Gradual-Emotional Behavior dataset (OMG-Emotion) (Barros et al., 2018). The OMG-Emotion dataset is composed of Youtube videos which are about one minute long and are annotated taking into consideration a continuous emotional behavior. The videos were selected using a crawler technique based on keywords associated with long-term emotional scenes, such as "monologues", "auditions", "dialogues" and "emotional scenes", which guarantees that each video has only one person performing an emotional display. A total of 675 videos were collected, amounting to around 10 hours of data. Each utterance in the videos is annotated with two continuous labels, representing arousal and valence. The emotion expressions displayed in the OMG-Emotion dataset are heavily impacted by person-specific characteristics, which are highlighted by the gradual change of emotional behavior over the entire video.

3.2 Experiments

Our experiments are divided into two categories: the ablation studies (A) and the personalized emotion recognition (P). The A category is divided into a series of experiments, each evaluating one of the discriminators of the PK: the photo-realistic image discriminator ($D_{img}$), the uniform distribution discriminator ($D_z$), and the arousal/valence discriminator ($D_{av}$). We train the PK in each of these experiments with the training subset of the AffectNet corpus and evaluate it using the validation subset.

We first train the PK without any of the previously mentioned discriminators to guarantee an unbiased baseline. Then, we repeat the training, adding each discriminator individually. We also report experimental results for combinations of the discriminators. Finally, we add all the discriminators and train the PK again.

To provide a standard evaluation metric, we use the encoder representation ($z$) as input to an emotion recognition classifier. The classifier has the same topology as the arousal/valence discriminator and is post-trained using the same training subset of the AffectNet dataset. When training the PK with the arousal/valence discriminator, we do not use the emotion recognition classifier, so that we can assess the performance of this specific discriminator.

The P category is divided into two experiments: first, the PK is pre-trained with the AffectNet dataset and used to evaluate the test set of the OMG-Emotion dataset. Then we use the entire P-AffMem framework with the affective memories and repeat the experiment, now using one affective memory for each video of the test set. During our optimization routine, we found that if the PK generates faces for more than 1 s of a video, the affective memory did not improve the final results. As the OMG-Emotion dataset has a framerate of 25 frames per second, we turn off the PK image generation after the first 25 frames.

To evaluate the dimensional arousal and valence recognition, we use the Concordance Correlation Coefficient (CCC) (Lawrence & Lin, 1989) between the outputs of the model and the true labels. The CCC is computed as:

$CCC = \dfrac{2 \rho \sigma_p \sigma_a}{\sigma_p^2 + \sigma_a^2 + (\mu_p - \mu_a)^2}$ (9)

where $\rho$ is the Pearson's correlation coefficient between the model predictions and the annotations, $\mu_p$ and $\mu_a$ denote the means of the model predictions and the annotations, and $\sigma_p^2$ and $\sigma_a^2$ are the corresponding variances.
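The CCC can be implemented directly from its definition (NumPy sketch; note that $\rho \sigma_p \sigma_a$ equals the covariance between predictions and annotations):

```python
import numpy as np

def ccc(pred, true):
    """Concordance Correlation Coefficient between predictions and labels."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    mu_p, mu_t = pred.mean(), true.mean()
    var_p, var_t = pred.var(), true.var()
    cov = np.mean((pred - mu_p) * (true - mu_t))  # rho * sigma_p * sigma_t
    return 2 * cov / (var_p + var_t + (mu_p - mu_t) ** 2)
```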

4 Results

4.1 Ablation Studies (A)

Our ablation studies, summarized in Table 2, help us understand and quantify the impact of each of the PK mechanisms on recognizing emotion expressions. The baseline model, without any extra discriminator, can be evaluated as a simple Generative Adversarial Network (GAN). Unsurprisingly, the baseline model did not perform well on emotion recognition, achieving the worst result in our experiments. This reflects the inability of the baseline model to provide a good facial expression discrimination capability. Introducing each discriminator individually allows us to assess its impact. The emotion recognition discriminator ($D_{av}$) has the largest impact on the PK performance, which is not surprising, as it enforces the PK encoder to learn specific characteristics for facial expression recognition. Nevertheless, we also observe a strong impact of the photo-realistic discriminator ($D_{img}$), indicating that the network benefits greatly from enforcing the encoding and generation of characteristics that discriminate between individuals. The prior distribution discriminator ($D_z$) has a smaller individual contribution than the others. However, when combined with the other discriminators, it provides a great improvement in recognition.

Model Arousal Valence
Baseline (no discriminators) 0.03 0.05
+ $D_z$ 0.03 0.08
+ $D_{img}$ 0.08 0.09
+ $D_{av}$ 0.18 0.22
+ $D_z$ + $D_{img}$ 0.11 0.18
+ $D_z$ + $D_{av}$ 0.25 0.35
+ $D_{img}$ + $D_{av}$ 0.29 0.45
PK (all discriminators) 0.38 0.67
(Mollahosseini et al., 2017) 0.34 0.60
Table 2: Concordance Correlation Coefficient (CCC) for arousal and valence when evaluating the different discriminators of the PK on the validation subset of the AffectNet dataset.

Training the PK with all the discriminators yields the best results. To the best of our knowledge, the only reported CCC for arousal and valence on the AffectNet corpus comes from the authors of the dataset themselves (Mollahosseini et al., 2017), who use an AlexNet convolutional neural network (Krizhevsky et al., 2012) re-trained to recognize arousal and valence. Our PK model presents better general performance, improving the CCC by more than 0.04 for arousal and 0.07 for valence. As our ablation studies are only intended to shed light on the impact of the PK mechanisms, we did not pursue a pure benchmark study with other datasets.

4.2 Personalized Recognition (P)

We perform the experiments with and without the affective memory in order to quantify its contribution to the P-AffMem model. Table 3 summarizes the achieved CCC and the current state-of-the-art results on the OMG-Emotion dataset. The presence of the affective memories greatly improves the performance of the model, with an increase in the achieved CCC of 0.11 for arousal and 0.07 for valence compared to the PK with all discriminators.

The P-AffMem achieves the best results reported so far on the OMG-Emotion dataset. This dataset was recently used as part of a challenge, and different solutions, mostly based on pre-trained deep learning models, were presented. Our model achieved a CCC improvement of 0.08 on arousal and 0.04 on valence compared to the winner of the challenge (Zheng et al., 2018), which made use of audio/visual processing. The same holds for the second-best model (Peng et al., 2018). The best model using only facial expressions (Deng et al., 2018) achieved arousal and valence CCCs 0.16 and 0.18 smaller than ours.

Model Arousal Valence
Baseline (no discriminators) -0.06 -0.10
+ $D_z$ 0.02 0.01
+ $D_{img}$ 0.04 0.02
+ $D_{av}$ 0.09 0.12
+ $D_z$ + $D_{img}$ 0.13 0.13
+ $D_z$ + $D_{av}$ 0.21 0.29
+ $D_{img}$ + $D_{av}$ 0.27 0.36
PK (all discriminators) 0.32 0.46
P-AffMem 0.43 0.53
(Zheng et al., 2018) 0.35 0.49
(Peng et al., 2018) 0.24 0.43
(Deng et al., 2018) 0.27 0.35
Table 3: Concordance Correlation Coefficient (CCC) for arousal and valence when evaluating our model on the testing subset of the OMG-Emotion dataset.

5 Discussions

The PK module must be pre-trained to allow better generalization in emotion expression recognition. With 16 million parameters, the PK model has an extensive and computationally expensive training procedure, as is common for adversarial network training. Our hypothesis was that, once trained, the PK would provide a general emotion recognition capability. Our experiments with the OMG-Emotion dataset demonstrate that the PK achieves a performance similar to other pre-trained deep learning models on recognizing general emotion expressions.

The affective memory contributes to the P-AffMem by providing a self-adaptable mechanism that improves on the PK by clustering the individual characteristics of how a person expresses emotions. This assumption was demonstrated objectively, as the P-AffMem has a higher CCC than the PK alone.

To extend our understanding of the P-AffMem and to substantiate our claims on general versus specific emotion recognition, we discuss in the next sections the individual contributions of both the PK and the affective memories. We also present an insight into how the modules contribute to the performance increase.

5.1 The Impact of the Discriminator Mechanisms

Figure 2: The first pair of columns shows the perceived emotion expression and the neural activation map of the last layer of the encoder. The remaining pairs of columns display the edited face using the indicated arousal and valence values and its respective neural activation map.

As an adversarial autoencoder, the PK is trained to both encode and edit a facial image. Each discriminator imposes a specific characteristic on both the encoding and editing tasks, and together they proved optimal, in our setup, for emotion expression recognition. While the impact of the discriminators on encoding a facial expression was objectively measured in our experiments, their contribution to face editing was only implicitly evaluated. By demonstrating that the affective memory improved the performance of the model, we verify that the edited faces used to initialize and train the affective memory actually carry the desired facial expressions.

As the model is optimized to recognize facial expressions, an analysis of the pixel-wise quality of the edited faces can be misleading. To illustrate this, we fed one image to each of the PK combinations proposed in our A-category experiments and produced three edited faces with the extreme values of arousal and valence, [-1,-1] and [1,1], and a neutral value, [0,0]. We then calculated the mean neural activation maps (Zhou et al., 2016) for the last convolutional layers of the encoder for each of these examples. Figure 2 exhibits four pairs of columns, the first composed of the original image fed to the PKs and the obtained encoder activation visualizations. The other three pairs exhibit the edited images with the indicated arousal and valence values and the encoder activations. As the training of the encoder is directly affected by the discriminators, the visualizations give us an indication of how each discriminator impacts the learned representations and help us explain the network's objective performance.

Clearly, the encoder trained with the baseline objective alone does not learn any meaningful information, which is reflected in the edited images and activations. This is also backed up by our objective results. The combinations of the discriminators impact the edited faces and the encoder in distinguishable ways. While $D_{img}$ enforces realistic characteristics on the edited faces, it pushes the encoder to focus on facial structures which are not reliable for facial expression representation. This effect translates into a lower CCC when compared to $D_{av}$, for example. When $D_{av}$ is present, the encoder clearly filters facial-expression-rich regions, such as the eye and mouth regions. This is clearly perceptible in the $D_{img}$ + $D_{av}$ combination, which presents distinguishable facial editing but a lower CCC than the full model. Its encoder activation indicates that it ignores some facial characteristics, which could explain the lower performance.

Finally, the combination of all discriminators gave the PK the best objective performance, and this is reflected in the edited images and activations. The images present clear facial features, without distorting artifacts, and maintain the general facial proportions and characteristics. The activations show that the encoder focuses on the eyes-to-mouth region most of the time. These visualizations demonstrate the reliability of the PK in both encoding and editing facial expressions, which is essential for the optimal functioning of the affective memory.

5.2 Affective Memory Behavior

Figure 3: Performance of the model, measured in CCC, for both the PK and the P-AffMem, and the neural activation when processing the "fd41c38b2" video of the OMG-Emotion dataset.

The videos in the OMG-Emotion dataset are heavily impacted by personalization characteristics, as each contains one person performing a monologue-like act for more than one minute. As the P-AffMem is initialized with the images edited by the PK, it contains, from the beginning, prototype neurons with a wide range of associated arousal and valence labels. As the video progresses, the neurons of the affective memory are increasingly influenced by newly perceived facial expressions, which makes the neural activation increase. It is important to note that a newly perceived expression does not change a neuron's labels, only its prototypical information. This way, we guarantee that the network maintains a reliable classification.
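The label-preserving update described above can be sketched as follows. The class and the learning rate `eps` are illustrative names rather than the paper's implementation, and the full Grow-When-Required algorithm (Marsland et al., 2002) additionally grows the network and maintains habituation counters, which are omitted here:

```python
import numpy as np

class AffectiveNeuron:
    """A prototype neuron with fixed arousal/valence labels."""
    def __init__(self, prototype, arousal, valence):
        self.w = np.asarray(prototype, dtype=float)  # prototypical expression vector
        self.arousal = arousal                       # labels stay fixed after creation
        self.valence = valence

def adapt_best_match(neurons, x, eps=0.1):
    """Move the best-matching neuron's prototype toward input x.

    Only the prototype vector is adapted; the associated labels are never
    changed, which keeps the memory's classification reliable.
    """
    x = np.asarray(x, dtype=float)
    best = min(neurons, key=lambda n: np.linalg.norm(x - n.w))
    best.w += eps * (x - best.w)   # standard self-organizing adaptation step
    return best
```

With each new frame of a video, this kind of update pulls the existing prototypes toward the person-specific expressions while the labels inherited from the PK-edited faces remain intact.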

As the PK already provides the affective memory with general prototype neurons and reliable labels, the neural update tunes them toward the specific characteristics of the newly perceived expressions. After a few seconds, the affective memory starts to perform better than the PK. This effect is demonstrated in Figure 3, which illustrates the performance of the PK and the P-AffMem, measured as arousal and valence CCC, and the evolution of the neural activity of the affective memory when processing the video "fd41c38b2" of the OMG-Emotion dataset.
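The CCC used throughout these comparisons (Lawrence & Lin, 1989) rewards predictions that agree with the annotations in both correlation and scale, penalizing any systematic offset between the two series. A self-contained sketch:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Concordance Correlation Coefficient (Lawrence & Lin, 1989).

    Equals 1 only when predictions match the annotations exactly; a mean
    shift or scale mismatch lowers the score even if correlation is perfect.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```

For the per-video curves in Figure 3, this score would be computed over the arousal and valence predictions accumulated up to each point in time.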

6 Conclusion and Future Work

The development of our Personalized Affective Memory (P-AffMem) model was inspired by two facial perception characteristics: the understanding of generalized emotional concepts and the online adaptation of individualized aspects of facial expressions. The general prior knowledge (PK) adversarial autoencoder was trained to transfer facial characteristics, represented as arousal and valence, to unknown persons. The affective memories contributed to the learned representations by creating prototypical representations of emotions for a specific person in an online fashion. Our evaluation demonstrated that our model achieved state-of-the-art performance on facial expression recognition, and we presented different insights on how, and why, our model works.

A clear limitation of our model is that it processes only instantaneous emotion expressions. To address this problem, recent work on recurrent self-organization and sequence generation could be leveraged (Parisi et al., 2017, 2018). Another direction to explore is the integration of multisensory information, with particular care for asynchronous affective perception, e.g., from prosodic speech and language understanding.

Acknowledgments

The authors gratefully acknowledge partial support from the German Research Foundation DFG under project CML (TRR 169).

References

  • Adolphs (2002) Adolphs, R. Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behavioral and cognitive neuroscience reviews, 1(1):21–62, 2002.
  • Barros & Wermter (2017) Barros, P. and Wermter, S. A self-organizing model for affective memory. In Neural Networks (IJCNN), 2017 International Joint Conference on, pp. 31–38. IEEE, 2017.
  • Barros et al. (2018) Barros, P., Churamani, N., Lakomkin, E., Siqueira, H., Sutherland, A., and Wermter, S. The OMG-emotion behavior dataset, Jul 2018.
  • Bergstra et al. (2013) Bergstra, J., Yamins, D., and Cox, D. D. Hyperopt: A python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference, pp. 13–20. Citeseer, 2013.
  • Chen et al. (2014) Chen, Y.-A., Wang, J.-C., Yang, Y.-H., and Chen, H. Linear regression-based adaptation of music emotion recognition models for personalization. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 2149–2153. IEEE, 2014.
  • de Bittencourt Zavan et al. (2017) de Bittencourt Zavan, F. H., Gasparin, N., Batista, J. C., e Silva, L. P., Albiero, V., Bellon, O. R. P., and Silva, L. Face analysis in the wild. In Graphics, Patterns and Images Tutorials (SIBGRAPI-T), 2017 30th SIBGRAPI Conference on, pp. 9–16. IEEE, 2017.
  • Deng et al. (2018) Deng, D., Zhou, Y., Pi, J., and Shi, B. E. Multimodal utterance-level affect analysis using visual, audio and text features. arXiv preprint arXiv:1805.00625, 2018.
  • Dhall et al. (2018) Dhall, A., Kaur, A., Goecke, R., and Gedeon, T. Emotiw 2018: Audio-video, student engagement and group-level affect prediction. In Proceedings of the 2018 on International Conference on Multimodal Interaction, pp. 653–656. ACM, 2018.
  • Ding et al. (2017a) Ding, H., Sricharan, K., and Chellappa, R. Exprgan: Facial expression editing with controllable expression intensity. arXiv preprint arXiv:1709.03842, 2017a.
  • Ding et al. (2017b) Ding, H., Zhou, S. K., and Chellappa, R. Facenet2expnet: Regularizing a deep face recognition net for expression recognition. In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pp. 118–126. IEEE, 2017b.
  • Hamann & Canli (2004) Hamann, S. and Canli, T. Individual differences in emotion processing. Current opinion in neurobiology, 14(2):233–238, 2004.
  • Huang et al. (2018) Huang, B., Chen, W., Wu, X., Lin, C.-L., and Suganthan, P. N. High-quality face image generated with conditional boundary equilibrium generative adversarial networks. Pattern Recognition Letters, 111:72–79, 2018.
  • Kahou et al. (2016) Kahou, S. E., Bouthillier, X., Lamblin, P., Gulcehre, C., Michalski, V., Konda, K., Jean, S., Froumenty, P., Dauphin, Y., Boulanger-Lewandowski, N., et al. Emonets: Multimodal deep learning approaches for emotion recognition in video. Journal on Multimodal User Interfaces, 10(2):99–111, 2016.
  • Kaya et al. (2017) Kaya, H., Gürpınar, F., and Salah, A. A. Video-based emotion recognition in the wild using deep transfer learning and score fusion. Image and Vision Computing, 65:66–75, 2017.
  • Kim et al. (2017) Kim, Y., Yoo, B., Kwak, Y., Choi, C., and Kim, J. Deep generative-contrastive networks for facial expression recognition. arXiv preprint arXiv:1703.07140, 2017.
  • Kollias & Zafeiriou (2018) Kollias, D. and Zafeiriou, S. Training deep neural networks with different datasets in-the-wild: The emotion recognition paradigm. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2018.
  • Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.
  • Lawrence & Lin (1989) Lawrence, I. and Lin, K. A concordance correlation coefficient to evaluate reproducibility. Biometrics, pp. 255–268, 1989.
  • Mahendran & Vedaldi (2015) Mahendran, A. and Vedaldi, A. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188–5196, 2015.
  • Marsland et al. (2002) Marsland, S., Shapiro, J., and Nehmzow, U. A self-organising network that grows when required. Neural Networks, 15(8–9):1041–1058, 2002.
  • Mayer et al. (1990) Mayer, J. D., DiPaolo, M., and Salovey, P. Perceiving affective content in ambiguous visual stimuli: A component of emotional intelligence. Journal of personality assessment, 54(3-4):772–781, 1990.
  • Mehta et al. (2018) Mehta, D., Siddiqui, M. F. H., and Javaid, A. Y. Facial emotion recognition: A survey and real-world user experiences in mixed reality. Sensors, 18(2):416, 2018.
  • Mollahosseini et al. (2017) Mollahosseini, A., Hasani, B., and Mahoor, M. H. Affectnet: A database for facial expression, valence, and arousal computing in the wild. arXiv preprint arXiv:1708.03985, 2017.
  • Ng et al. (2015) Ng, H.-W., Nguyen, V. D., Vonikakis, V., and Winkler, S. Deep learning for emotion recognition on small datasets using transfer learning. In Proceedings of the 2015 ACM on international conference on multimodal interaction, pp. 443–449. ACM, 2015.
  • Nook et al. (2015) Nook, E. C., Lindquist, K. A., and Zaki, J. A new look at emotion perception: Concepts speed and shape facial emotion recognition. Emotion, 15(5):569, 2015.
  • Parisi et al. (2018) Parisi, G., Tani, J., Weber, C., and Wermter, S. Lifelong learning of spatiotemporal representations with dual-memory recurrent self-organization. arXiv preprint arXiv:1805.10966, 2018.
  • Parisi et al. (2017) Parisi, G. I., Tani, J., Weber, C., and Wermter, S. Lifelong learning of human actions with deep neural network self-organization. Neural Networks, 96:137–149, 2017.
  • Parkhi et al. (2015) Parkhi, O. M., Vedaldi, A., Zisserman, A., et al. Deep face recognition. In BMVC, volume 1, pp.  6, 2015.
  • Peng et al. (2018) Peng, S., Zhang, L., Ban, Y., Fang, M., and Winkler, S. A deep network for arousal-valence emotion prediction with acoustic-visual cues. arXiv preprint arXiv:1805.00638, 2018.
  • Pons & Masip (2018) Pons, G. and Masip, D. Supervised committee of convolutional neural networks in automated facial expression analysis. IEEE Transactions on Affective Computing, 9(3):343–350, 2018.
  • Radford et al. (2015) Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • Russell (2017) Russell, J. A. Cross-cultural similarities and differences in affective processing and expression. In Emotions and Affect in Human Factors and Human-Computer Interaction, pp. 123–141. Elsevier, 2017.
  • Saha et al. (2018) Saha, S., Navarathna, R., Helminger, L., and Weber, R. M. Unsupervised deep representations for learning audience facial behaviors. arXiv preprint arXiv:1805.04136, 2018.
  • Sariyanidi et al. (2015) Sariyanidi, E., Gunes, H., and Cavallaro, A. Automatic analysis of facial affect: A survey of registration, representation, and recognition. IEEE transactions on pattern analysis and machine intelligence, 37(6):1113–1133, 2015.
  • Schmidhuber (2015) Schmidhuber, J. Deep learning in neural networks: An overview. Neural networks, 61:85–117, 2015.
  • Soleymani et al. (2017) Soleymani, M., Garcia, D., Jou, B., Schuller, B., Chang, S.-F., and Pantic, M. A survey of multimodal sentiment analysis. Image and Vision Computing, 65:3–14, 2017.
  • Song et al. (2017) Song, L., Lu, Z., He, R., Sun, Z., and Tan, T. Geometry guided adversarial facial expression synthesis. arXiv preprint arXiv:1712.03474, 2017.
  • Sprengelmeyer et al. (1998) Sprengelmeyer, R., Rausch, M., Eysel, U. T., and Przuntek, H. Neural structures associated with recognition of facial expressions of basic emotions. Proceedings of the Royal Society of London B: Biological Sciences, 265(1409):1927–1931, 1998.
  • Valenza et al. (2014) Valenza, G., Citi, L., Lanatá, A., Scilingo, E. P., and Barbieri, R. Revealing real-time emotional responses: a personalized assessment based on heartbeat dynamics. Scientific reports, 4:4998, 2014.
  • Wang et al. (2018) Wang, X., Li, W., Mu, G., Huang, D., and Wang, Y. Facial expression synthesis by u-net conditional generative adversarial networks. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, pp. 283–290. ACM, 2018.
  • Zadeh et al. (2016) Zadeh, A., Zellers, R., Pincus, E., and Morency, L.-P. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82–88, 2016.
  • Zen et al. (2014) Zen, G., Sangineto, E., Ricci, E., and Sebe, N. Unsupervised domain adaptation for personalized facial emotion recognition. In Proceedings of the 16th international conference on multimodal interaction, pp. 128–135. ACM, 2014.
  • Zhang et al. (2017) Zhang, Z., Song, Y., and Qi, H. Age progression/regression by conditional adversarial autoencoder. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2017.
  • Zhao et al. (2003) Zhao, W., Chellappa, R., Phillips, P. J., and Rosenfeld, A. Face recognition: A literature survey. ACM computing surveys (CSUR), 35(4):399–458, 2003.
  • Zheng et al. (2018) Zheng, Z., Cao, C., Chen, X., and Xu, G. Multimodal emotion recognition for one-minute-gradual emotion challenge. arXiv preprint arXiv:1805.01060, 2018.
  • Zhou et al. (2016) Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.