A Deep Network for Arousal-Valence Emotion Prediction with Acoustic-Visual Cues

05/02/2018 ∙ by Songyou Peng, et al. ∙ Inria ∙ University of Illinois at Urbana-Champaign

In this paper, we comprehensively describe the methodology of our submissions to the One-Minute Gradual-Emotion Behavior Challenge 2018.







I Introduction

In this paper, we comprehensively describe the methodology of our submissions to the One-Minute Gradual-Emotion Behavior Challenge (OMG-Emotion). Section II introduces the representations of video and audio that we use as the input of our deep networks. The design of the model architectures is described in Section III, followed by the results in Section IV and the conclusion in Section V. Source code for this paper is available at https://github.com/pengsongyou/OMG-ADSC.

II Data Representation

In our two submissions, our models use either visual input alone or both visual and acoustic input. This section details how we preprocess these two modalities from the provided OMG-Emotion dataset [1].

II-A Acoustic Representation

Since the audio files are not provided separately, we first convert all snippets to WAV files, each single-channel and sampled at 16 kHz. Similar to [6], spectrograms are then calculated every 10 ms with a sliding Hamming window of width 25 ms and a 512-point FFT. We assume that 3 seconds of the audio signal contain sufficient emotion information, so 3 seconds of audio are taken to compute the short-time Fourier transform (STFT) spectrum. Since each frequency bin in the spectrum is a complex number, we obtain a two-channel STFT map whose channels hold the real and imaginary parts of the acquired STFT values.
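The preprocessing above can be sketched as follows with SciPy. This is a minimal sketch: the exact number of time frames in the resulting map depends on SciPy's padding conventions, and the function name is ours.

```python
import numpy as np
from scipy import signal

def audio_to_stft_map(wave, sr=16000, win_ms=25, hop_ms=10, n_fft=512):
    """Convert a mono 16 kHz waveform into a 2-channel STFT map
    (real and imaginary parts stacked along the first axis)."""
    nperseg = int(sr * win_ms / 1000)        # 400-sample Hamming window (25 ms)
    hop = int(sr * hop_ms / 1000)            # 160-sample hop (10 ms)
    _, _, zxx = signal.stft(wave, fs=sr, window="hamming",
                            nperseg=nperseg, noverlap=nperseg - hop,
                            nfft=n_fft)
    return np.stack([zxx.real, zxx.imag])    # shape: (2, n_fft // 2 + 1, T)

# 3 seconds of (here, synthetic) audio -> one STFT map
wave = np.random.randn(3 * 16000).astype(np.float32)
stft_map = audio_to_stft_map(wave)
print(stft_map.shape)                        # (2, 257, T)
```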

II-B Visual Representation

For each utterance in the dataset, we initially extract all frames with OpenCV [3]. MTCNN [9] is then applied to detect and align faces. Since we employ SphereFace [5] as the backbone network for videos, all face images are resized to the input resolution that SphereFace expects.

III Network Architecture

With the preprocessed video and audio data, we design and apply the following deep networks to the arousal-valence regression problem. The overall architecture is shown in Fig. 1.

Fig. 1: The workflow of the joint training architecture.

III-A Audio Model (ANet)

We use a rather straightforward network for the audio stream. The STFT map is fed into the base network VGG-16 [4] pretrained on ImageNet. Since an STFT map has two channels (real and imaginary parts) rather than three, we modify the first layer of VGG-16 accordingly. The output feature is then fed into two fully-connected (FC) layers with dropout in between.

If we train the network on audio alone, another FC layer and a Tanh activation are applied to acquire the final arousal and valence values.

III-B Video Model (VNet)

Because the length of the snippets in the dataset varies considerably, the number of extracted frames can differ greatly among snippets. To fully utilize the temporal information within a feasible GPU memory budget, we first sparsely sample frames from each snippet. Inspired by [8], we divide a snippet, in temporal order, into segments, from each of which a single frame is randomly sampled.
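The segment-based sampling can be sketched as follows. The segment count was lost in this section's text, so the default below is an assumption (4 matches the per-mini-batch frame count given in the implementation details):

```python
import random

def sample_frame_indices(num_frames, num_segments=4):
    """TSN-style sparse sampling: split the frame index range into equal
    segments and draw one random index from each segment, preserving order."""
    indices = []
    for s in range(num_segments):
        lo = s * num_frames // num_segments
        hi = (s + 1) * num_frames // num_segments
        indices.append(random.randrange(lo, max(hi, lo + 1)))
    return indices

idx = sample_frame_indices(100)
print(idx)  # e.g. [12, 37, 61, 88] -- one index per quarter of the snippet
```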

For each of the selected frames, an intermediate feature (dim = 512) is obtained from SphereFace. These features are then fed into a bidirectional LSTM. Finally, a temporal-average pooling layer followed by an FC layer and Tanh is employed to acquire the two emotion scores.
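The temporal half of VNet can be sketched as follows; the LSTM hidden size (256) is an assumption, as the paper does not state it:

```python
import torch
import torch.nn as nn

class VNetTemporal(nn.Module):
    """Per-frame SphereFace features (dim=512) pass through a bidirectional
    LSTM, temporal-average pooling, and a final FC + Tanh."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)   # arousal, valence

    def forward(self, feats):                # feats: (B, T, 512)
        out, _ = self.lstm(feats)            # (B, T, 2 * hidden)
        pooled = out.mean(dim=1)             # temporal-average pooling
        return torch.tanh(self.fc(pooled))   # scores in [-1, 1]

scores = VNetTemporal()(torch.randn(2, 4, 512))
print(scores.shape)  # torch.Size([2, 2])
```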

III-C Audio-Video Joint Training

To jointly train ANet and VNet, we design the following scheme. We start from the VNet and ANet trained separately beforehand. With VNet kept unchanged, we sample several STFT maps from every snippet and average their outputs at the penultimate FC layer of ANet. We then simply concatenate the ANet and VNet features and feed them into another FC layer followed by Tanh. Fig. 1 illustrates the joint-training architecture.
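The fusion step can be sketched as below. The feature dimensions are assumptions (128 for ANet's penultimate FC output, 512 for the VNet feature), as is the class name:

```python
import torch
import torch.nn as nn

class JointHead(nn.Module):
    """Average the per-map ANet features, concatenate with the VNet feature,
    and map to arousal/valence through one FC layer plus Tanh."""
    def __init__(self, a_dim=128, v_dim=512):
        super().__init__()
        self.fc = nn.Linear(a_dim + v_dim, 2)

    def forward(self, a_feats, v_feat):
        # a_feats: (B, K, a_dim) -- one feature per sampled STFT map
        a_mean = a_feats.mean(dim=1)                # average over the K maps
        fused = torch.cat([a_mean, v_feat], dim=1)  # (B, a_dim + v_dim)
        return torch.tanh(self.fc(fused))           # (B, 2)

out = JointHead()(torch.randn(2, 16, 128), torch.randn(2, 512))
```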

III-D Implementation Details

The architectures are implemented in PyTorch [7]. In joint training, we train with an initial learning rate that is decreased by a factor of 10 every 7 epochs. For each mini-batch, the two sampling parameters are set to 4 and 16, respectively. The batch size is 6. We also apply gradient clipping when the gradient norm exceeds 20. With one NVIDIA GTX TITAN X, one epoch takes around 7 minutes.
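The schedule and clipping above can be sketched as follows; the initial learning rate was lost in the text, so the value here (1e-3) and the SGD optimizer are assumptions, and the model is a stand-in:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                     # stand-in for the joint network
# initial learning rate is an assumption; the paper's value was lost
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# decrease the learning rate by a factor of 10 every 7 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

for epoch in range(8):
    pred = model(torch.randn(6, 10))         # batch size 6
    loss = nn.functional.mse_loss(pred, torch.zeros(6, 2))
    optimizer.zero_grad()
    loss.backward()
    # clip gradients when the norm exceeds 20
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=20)
    optimizer.step()
    scheduler.step()

print(optimizer.param_groups[0]["lr"])       # ~1e-4 after the 7-epoch decay
```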

It should be noted that one important difference between joint training and training VNet and ANet separately lies in the loss function: MSE loss is employed when each network is trained alone, while a CCC-based loss is used for joint training.
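The paper does not spell out the CCC objective; a common formulation, which we assume here, is to minimize 1 − CCC, where CCC is the concordance correlation coefficient:

```python
import numpy as np

def ccc(pred, gold):
    """Concordance correlation coefficient between two 1-D arrays:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    pm, gm = pred.mean(), gold.mean()
    pv, gv = pred.var(), gold.var()
    cov = ((pred - pm) * (gold - gm)).mean()
    return 2 * cov / (pv + gv + (pm - gm) ** 2)

def ccc_loss(pred, gold):
    # CCC is 1 for perfect agreement, so 1 - CCC is minimized
    return 1.0 - ccc(pred, gold)

x = np.array([0.1, 0.4, 0.3, 0.8])
print(ccc(x, x))       # ~1.0: perfect agreement
print(ccc_loss(x, x))  # ~0.0
```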

IV Results

In this part, we briefly present the performance of our models against the provided baseline methods in terms of the concordance correlation coefficient (CCC).

Table I compares our ANet with a baseline method [2] pretrained on RAVDESS and another baseline based on OpenSMILE features. It shows that even without pretraining on any audio dataset, our ANet still outperforms the baselines in both arousal and valence scores.

                Arousal   Valence   Total
Baseline [2]    0.08      0.10      0.18
Baseline (OS)   0.15      0.21      0.36
Our ANet        0.1879    0.256     0.4439
TABLE I: CCC comparison when using only audio. “OS” indicates the OpenSMILE-based baseline.

In Table II, VNet shows superior performance over the baseline [2], more than doubling the CCC for both arousal and valence.

                     Arousal   Valence   Total
Video Baseline [2]   0.12      0.23      0.35
Our VNet             0.2798    0.4688    0.7486
TABLE II: CCC comparison when using only video (submission #1)

Finally, Table III illustrates the effectiveness of jointly training the networks for both the audio and video streams. This training scheme yields a further performance improvement.

                 Arousal   Valence   Total
Joint Training   0.3036    0.4796    0.7832
TABLE III: CCC values of our audio-video joint training (submission #2)

V Conclusion

This paper describes a novel architecture for arousal-valence estimation on the OMG-Emotion Dataset. We have shown the advantage of our deep network over the baseline methods.


  • [1] Pablo Barros, Nikhil Churamani, Egor Lakomkin, Henrique Siqueira, Alexander Sutherland, and Stefan Wermter. The OMG-emotion behavior dataset. arXiv preprint arXiv:1803.05434, 2018.
  • [2] Pablo Barros and Stefan Wermter. Developing crossmodal expression recognition based on a deep neural model. Adaptive behavior, 24(5):373–396, 2016.
  • [3] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
  • [4] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
  • [5] Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. SphereFace: Deep hypersphere embedding for face recognition. In CVPR, 2017.
  • [6] A. Nagrani, J. S. Chung, and A. Zisserman. VoxCeleb: a large-scale speaker identification dataset. In INTERSPEECH, 2017.
  • [7] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS-W, 2017.
  • [8] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
  • [9] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, 2016.