Automatic Group Cohesiveness Detection With Multi-modal Features

10/02/2019
by Bin Zhu, et al.
University of Delaware

Group cohesiveness is a compelling and often-studied construct in group dynamics and group performance. The enormous number of web images of groups of people can be used to develop an effective method to detect group cohesiveness. This paper introduces an automatic group cohesiveness prediction method for the 7th Emotion Recognition in the Wild (EmotiW 2019) Grand Challenge in the category of Group-based Cohesion Prediction. The task is to predict the cohesion level for a group of people in images. To tackle this problem, a hybrid network is proposed, consisting of regression models trained separately on face features, skeleton features, and scene features. The predicted regression values corresponding to each feature are fused to produce the final cohesive intensity. Experimental results demonstrate that the proposed hybrid network is effective and makes promising improvements. A mean squared error (MSE) of 0.444 is achieved on the testing set, which outperforms the baseline MSE of 0.5.


1. Introduction

Group cohesiveness plays an important role in the study of small group behavior, social psychology, group dynamics, sport psychology, and organizational behavior (Evans and Jarvis, 1980; Hogg, 1993). Cohesiveness has been found to be one of the critical influencing factors in group performance. Several studies have shown that strong group performance is associated with a high level of group cohesion among the members (Banwo et al., 2015; Dyaram and Kamalanabhan, 2005). Moreover, recent research (Ghosh et al., 2019) shows that group cohesion is highly correlated with group-level emotion.

The rapid growth of web images, driven by photo hosting and sharing services such as Flickr, Facebook, and Google Photos, has gradually and significantly changed our lifestyle (Miller and Keith Edwards, 2007). Many of these images are taken when people are attending meaningful social events, such as graduations, birthday parties, and family gatherings. Such images not only capture these precious moments, but also carry useful information that can be used to analyze group-level social attributes such as group cohesion. The availability of these images motivates the design of automatic systems capable of understanding human perception of cohesion at the group level.

Measuring and annotating group cohesion at different levels is often difficult for a human annotator, because cohesion has both team and individual components (Salas et al., 2015). The problem of group cohesiveness prediction becomes even more challenging in static images. Complications include face occlusions, illumination variations, head pose variations, varied indoor and outdoor settings, faces at different distances from the camera, and low-resolution face images. In this paper, we propose a robust ensemble model that separately processes high-level information from faces, skeletons, and scenes. Regression values from each branch are then fused to produce the final cohesive intensity. In the 7th Emotion Recognition in the Wild (EmotiW 2019) Sub-Challenge (Dhall et al., 2019), the proposed hybrid model achieves a competitive result.

Figure 1. Overall Proposed Hybrid Network structure.

2. Related Work

Many researchers have applied rapidly developing computer vision and machine learning techniques to the automatic understanding of images and videos. One specific task is to study groups of people in images.

Photos of groups of people during social gatherings, such as birthday parties, graduations, and family reunions, are widely available. (Gallagher and Chen, 2009) introduces contextual features that capture the structure of a group of people and the position of individuals within the group. This social context helps to accomplish a variety of tasks, such as identifying the demographics of people in the group, estimating camera and scene parameters, and classifying group events.

Recently, the EmotiW 2019 Challenge organizers presented the first study of group cohesion prediction in static images (Ghosh et al., 2019). The organizers extended the Group Affect Database (Dhall et al., 2017) with group cohesion labels to create the new GAF Cohesion database. Two deep cohesion models, separately trained on holistic and face-level features, achieve results on the Cohesion database that approximate human-level performance. Motivated by viewing cohesiveness as an attribute of group emotion, the paper jointly trains an Inception V3 model on both group emotion and group cohesion. The experimental results show that joint training on both emotion and cohesion achieves higher performance than individual training, strongly suggesting that group emotion and cohesion are correlated.

3. Proposed Method

The system pipeline is shown in Figure 1. The basic idea of the proposed approach is to train a Support Vector Regression (SVR) model (Vapnik and Lerner, 1963) on high-level features extracted from different representations of the input images. The predicted regression values are fused using a grid search to produce the final prediction.
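The per-modality SVR training and grid-search fusion can be sketched as follows. This is a minimal illustration with random stand-in features and reduced dimensions, not the authors' implementation; the convex-weight grid and the 0.1 step size are assumptions.

```python
# Sketch: one SVR per modality, fused by a grid search over convex weights.
# Features here are random placeholders standing in for the real scene,
# face, and skeleton features described in the paper.
import itertools

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
feats = {
    "scene": rng.normal(size=(n, 32)),      # stands in for 2208-d scene features
    "face": rng.normal(size=(n, 32)),       # stands in for 4096-d face features
    "skeleton": rng.normal(size=(n, 32)),   # stands in for 1536-d skeleton features
}
y = rng.uniform(0.0, 1.0, size=n)           # cohesion labels normalized to [0, 1]

# Train one SVR per modality and collect its predictions.
preds = {name: SVR(kernel="rbf").fit(X, y).predict(X) for name, X in feats.items()}

# Grid search over fusion weights (w1 + w2 + w3 = 1) minimizing MSE.
best_w, best_mse = None, float("inf")
grid = np.arange(0.0, 1.01, 0.1)
for w1, w2 in itertools.product(grid, grid):
    w3 = 1.0 - w1 - w2
    if w3 < -1e-9:                          # skip weight combos outside the simplex
        continue
    w3 = max(w3, 0.0)
    fused = w1 * preds["scene"] + w2 * preds["face"] + w3 * preds["skeleton"]
    mse = float(np.mean((fused - y) ** 2))
    if mse < best_mse:
        best_w, best_mse = (w1, w2, w3), mse
```

On held-out data the weights would be selected on the validation set rather than the training predictions shown here.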

3.1. Scene Features

Holistic (scene-level) information has been shown to be an important component in group-level classification (Yuanjun Xiong et al., 2015; Guo et al., 2018, 2017). When analyzing the cohesiveness of a group of people, it is essential to understand the environment behind the people; e.g., students in a lecture tend to have a low cohesion level, while a group of people standing and protesting at a plaza probably have high cohesiveness. To extract high-level interpretations of the holistic information, a state-of-the-art deep model, the Densely Connected Convolutional Network (DenseNet) (Huang et al., 2016), is applied.

DenseNets have several important advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. DenseNets achieve significant improvements over the state of the art on four highly competitive object recognition benchmarks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). Moreover, before extracting holistic features with DenseNet, we fine-tune the network on the Emotic Dataset (Kosti et al., 2017), since group cohesion level is related to group-level emotion and valence. The Emotic Dataset consists of 18,316 images labeled in two ways: 26 discrete emotion categories, and continuous valence dimensions scaled from 1 to 10. A DenseNet-161 model pretrained on ImageNet is fine-tuned using the Emotic images labeled in continuous dimensions. With the last layer removed, a 2208-dimensional feature vector is extracted for each original image.

3.2. Face Feature

Considering the high correlation between group emotion and group cohesion, the overall facial emotion state of a group of people can contribute to group cohesiveness detection. The sample images in Figure 2 demonstrate that the average facial expression among all faces is a substantial indicator of group cohesiveness in the image. For instance, if most of the faces are classified as neutral expressions, the group cohesion level tends to be lower. Faces are therefore extracted using the Multi-task Cascaded Convolutional Network (MTCNN), which detects and aligns faces effectively in real time and achieves superior accuracy on the challenging FDDB and WIDER FACE benchmarks for face detection and the AFLW benchmark for face alignment (Zhang et al., 2016).

VGG Face is a deep network, containing 22 layers and 37 deep units, trained on a very large-scale dataset (Parkhi et al., 2015). This dataset contains 2.6M images of over 2.6K people, assembled through a combination of automated and manual operations. The fine-tuned VGG Face model is often used as a feature extractor, taking the activation vector of a fully connected layer in the CNN architecture; this has proven more efficient than a model trained from scratch (Guo et al., 2016, 2018). To exploit high-level abstractions of the extracted faces, the VGG Face model is trained on the facial expression dataset FER 2013 (Goodfellow and others, 2013). Then, with the last fully connected layer removed, VGG Face serves as a feature extractor that computes a 4096-dimensional feature vector for each face. This yields a separate representation for each face, but training our SVR model requires a single representation per image. Simply concatenating all feature vectors is invalid because images contain different numbers of faces, so the per-face feature vectors are averaged to obtain a single facial feature vector to feed into the SVR predictor.
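The averaging step that turns a variable number of per-face vectors into one fixed-size image descriptor can be sketched as follows; the helper name and toy vectors are illustrative, not from the paper.

```python
# Sketch: average per-face 4096-d VGG Face vectors into a single
# fixed-size vector per image, regardless of how many faces it contains.
import numpy as np

def image_face_feature(face_vectors):
    """face_vectors: non-empty list of per-face feature vectors (each 4096-d)."""
    stacked = np.stack(face_vectors)   # shape: (num_faces, 4096)
    return stacked.mean(axis=0)        # shape: (4096,)

# Example: an image with 3 detected faces (toy constant vectors).
faces = [np.full(4096, k) for k in (1.0, 2.0, 3.0)]
avg = image_face_feature(faces)
print(avg[0])  # 2.0 -- the mean across the 3 faces
```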

3.3. Skeleton Feature

As shown in Figure 3, skeleton features reveal salient patterns of the different categories through facial expressions, poses, gestures, and the structures of groups of people. In this work, the skeleton of each image is extracted using OpenPose (Cao et al., 2016; Tomas et al., 2017; Wei et al., 2016), which can jointly detect human body, hand, and facial keypoints (135 keypoints in total) in each image. The OpenPose library also provides multiple functions such as 2D real-time multi-person keypoint detection, 3D real-time single-person keypoint detection, a calibration toolbox, and single-person tracking.

A recent model, EfficientNet, achieves state-of-the-art accuracy on ImageNet, CIFAR-100, and Flowers while being 8.4x smaller and 6.1x faster at inference than the best existing ConvNet (Tan and Le, 2019). EfficientNet is powered by a novel scaling method and advanced automated machine learning (AutoML). The heuristic model scaling method uses a simple yet highly effective compound coefficient to scale up CNNs in a more structured manner, uniformly scaling each dimension with fixed scaling coefficients. This differs from traditional approaches; e.g., ResNet arbitrarily scales up layers from ResNet-18 to ResNet-50, ResNet-101, and ResNet-152, which usually requires tedious manual tuning. An EfficientNet model pretrained on ImageNet, with the last layer removed, extracts a 1536-dimensional feature vector for each skeleton image.

Figure 2. Samples of faces. Top: high-level group cohesiveness. Bottom: low-level group cohesiveness.
Figure 3. Samples of skeleton feature representations. Left: high-level group cohesiveness. Right: low-level group cohesiveness.

4. Experiments

4.1. Dataset

The group cohesiveness prediction dataset in EmotiW 2019 contains a total of 14,175 images, split into three parts: 9,815 images for training, 4,349 for validation, and 3,011 for testing. The database consists of all images in the GAF 3.0 database (Dhall et al., 2017), plus a new set of images collected via web crawlers with keywords related to social activities, e.g., wedding, birthday party, riot, and protest. The dataset is labeled with four cohesion levels: 0, 1, 2, and 3.

To better understand the perception of group cohesion and improve the labeling of the dataset, the EmotiW 2019 Challenge conducted a survey via a Google Form with 102 participants (59 male, 43 female) aged 22 to 54. The survey contained 24 images of groups of people in different contexts, covering four different Group Cohesion Score (GCS) values. The participants selected one of the GCS values for each image and described the reasons behind their choice using provided keywords related to the score.

With the assistance of the survey results, we employed 5 annotators (3 female, 2 male) to label each image for its cohesiveness in the range [0, 3].

4.2. Experiment setting

The deep networks (DenseNet, EfficientNet, and VGG Face) are implemented in PyTorch and run on an NVIDIA GeForce GTX 1080. The original images are resized to 224×224 as CNN input, and the provided labels are normalized from [0, 3] to [0, 1]. A review of the training dataset shows that it is severely imbalanced: 1,141 images belong to level 0, 1,561 to level 1, 4,601 to level 2, and 1,997 to level 3. To balance the data, 30% of the images in the level-2 category are removed by down-sampling.
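The normalization and down-sampling described above can be sketched in a few lines; the placeholder image entries and the random-drop strategy are illustrative assumptions (the paper does not specify how the 30% of level-2 images were selected).

```python
# Sketch: normalize cohesion labels from [0, 3] to [0, 1], then
# down-sample the over-represented level-2 category by 30%.
import random

random.seed(0)
# Placeholder (image, level) pairs matching the stated class counts.
dataset = ([("img", 0)] * 1141 + [("img", 1)] * 1561
           + [("img", 2)] * 4601 + [("img", 3)] * 1997)

# Normalize each label to [0, 1].
normalized = [(img, lvl / 3.0) for img, lvl in dataset]

# Randomly drop 30% of the level-2 samples (selection strategy assumed).
level2 = [s for s in dataset if s[1] == 2]
keep2 = random.sample(level2, int(len(level2) * 0.7))
balanced = [s for s in dataset if s[1] != 2] + keep2
```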

Method               Dataset           MSE
Baseline             Train             0.84
Face                 Train             0.703
Skeleton             Train             0.775
Scene                Train             0.731
Fusion+Average       Train             0.691
Fusion+Grid Search   Train             0.683
Face                 Balanced Train    0.678
Fusion+Average       Balanced Train    0.672
Fusion+Grid Search   Balanced Train    0.662

Table 1. Performance on the validation set.

4.3. Experiment results

We conduct experiments on both the original and the balanced training sets; Table 1 shows the validation results. As shown in Table 1, our fusion model significantly decreases the MSE. Because of the bias in the training data, data augmentation is important in this challenge, and we achieve the lowest MSE of 0.662 on the validation set using our proposed approach with balanced training data. For the test phase, we use the fusion model that achieves the best result on validation. Table 2 summarizes our 5 submission results, and Table 3 presents the submission MSE for each individual cohesion level. To make use of all available data, we combine the training and validation data to train our model. However, the performance decreases, as submissions 2 and 5 demonstrate. A likely reason is that the combined data, without rebalancing, are severely biased, which causes over-fitting. Eventually, in submission 4, our model achieves the best MSE of 0.444 on the combined data with data augmentation.

Sub   Method               Dataset                 Test MSE
1     Fusion+Average       Train                   0.466
2     Fusion+Average       Train + Val             0.478
3     Fusion+Average       Balanced Train + Val    0.466
4     Fusion+Grid Search   Balanced Train + Val    0.444
5     Fusion+Grid Search   Train + Val             0.447

Table 2. Submission Results

5. Conclusion

In summary, group cohesiveness is a major component for analyzing group behavior, group performance, group emotion, etc. A large number of images taken at social gatherings and social activities are shared on online photo services such as Flickr and Facebook.

In addition, measuring and annotating group cohesion at different levels is usually time-consuming and inefficient for a human annotator. In this paper, we construct a robust ensemble hybrid regression model to automatically and effectively detect group cohesiveness. The model is trained separately on faces, skeletons, and scenes, and the regression values are fused to produce the final cohesive intensity. Our experiments achieve mean squared errors of 0.662 and 0.444 on the validation and testing sets, respectively, outperforming the baseline MSE of 0.5. These results demonstrate that the proposed hybrid model is effective and makes promising improvements.

References

  • A. Banwo, J. Du, and U. Onokala (2015) The impact of group cohesiveness on organizational performance: the Nigerian case. International Journal of Business and Management 10. External Links: Document Cited by: §1.
  • Z. Cao, T. Simon, S. Wei, and Y. Sheikh (2016) Realtime multi-person 2D pose estimation using part affinity fields. arXiv preprint arXiv:1611.08050. Cited by: §3.3.
  • A. Dhall, R. Goecke, S. Ghosh, and T. Gedeon (2019) EmotiW 2019: automatic emotion, engagement and cohesion prediction tasks. ACM International Conference on Multimodal Interaction 2019. Cited by: §1.
  • A. Dhall, R. Goecke, S. Ghosh, J. Joshi, J. Hoey, and T. Gedeon (2017) From individual to group-level emotion recognition: emotiw 5.0. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, ICMI ’17, New York, NY, USA, pp. 524–528. External Links: ISBN 978-1-4503-5543-8, Link, Document Cited by: §2, §4.1.
  • L. Dyaram and T. J. Kamalanabhan (2005) Unearthed: the other side of group cohesiveness. Journal of Social Sciences 10 (3), pp. 185–190. External Links: Document, Link, https://doi.org/10.1080/09718923.2005.11892479 Cited by: §1.
  • N. J. Evans and P. A. Jarvis (1980) Group cohesion: a review and reevaluation. Small Group Behavior 11 (4), pp. 359–370. External Links: Document, Link, https://doi.org/10.1177/104649648001100401 Cited by: §1.
  • A. C. Gallagher and T. Chen (2009) Understanding images of groups of people. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 256–263. External Links: Document, ISSN 1063-6919 Cited by: §2.
  • S. Ghosh, A. Dhall, N. Sebe, and T. Gedeon (2019) Predicting group cohesiveness in images. In International Joint Conference on Neural Networks (IJCNN), Cited by: §1, §2.
  • I.J. Goodfellow et al. (2013) Challenges in representation learning: a report on three machine learning contests. In International Conference on Neural Information Processing, pp. 117–124. Cited by: §3.2.
  • X. Guo, L. F. Polanía, and K. E. Barner (2017) Group-level emotion recognition using deep models on image scene, faces, and skeletons. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 603–608. Cited by: §3.1.
  • X. Guo, L. F. Polanía, and K. E. Barner (2018) Smile detection in the wild based on transfer learning. In 2018 13th IEEE International Conference on Automatic Face Gesture Recognition (FG 2018), Vol. , pp. 679–686. External Links: Document, ISSN Cited by: §3.2.
  • X. Guo, B. Zhu, L. F. Polanía, C. Boncelet, and K. E. Barner (2018) Group-level emotion recognition using hybrid deep models based on faces, scenes, skeletons and visual attentions. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, ICMI ’18, New York, NY, USA, pp. 635–639. External Links: ISBN 978-1-4503-5692-3, Link, Document Cited by: §3.1.
  • Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao (2016) MS-celeb-1m: a dataset and benchmark for large-scale face recognition. Lecture Notes in Computer Science, pp. 87–102. External Links: ISBN 9783319464879, ISSN 1611-3349, Link, Document Cited by: §3.2.
  • M. A. Hogg (1993) Group cohesiveness: a critical review and some new directions. European Review of Social Psychology 4 (1), pp. 85–111. External Links: Document, Link, https://doi.org/10.1080/14792779343000031 Cited by: §1.
  • G. Huang, Z. Liu, and K. Q. Weinberger (2016) Densely connected convolutional networks. CoRR abs/1608.06993. External Links: Link, 1608.06993 Cited by: §3.1.
  • R. Kosti, J. M. Alvarez, A. Recasens, and A. Lapedriza (2017) Emotion recognition in context. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.1.
  • A. Miller and W. Keith Edwards (2007) Give and take: a study of consumer photo-sharing culture and practice. pp. 347–356. External Links: Document Cited by: §1.
  • O. M. Parkhi, A. Vedaldi, and A. Zisserman (2015) Deep face recognition. In BMVC, Cited by: §3.2.
  • E. Salas, R. Grossman, A. Hughes, and C. Coultas (2015) Measuring team cohesion. Human factors 57, pp. 365–74. External Links: Document Cited by: §1.
  • M. Tan and Q. Le (2019) EfficientNet: rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 6105–6114. External Links: Link Cited by: §3.3.
  • S. Tomas, J. Hanbyul, M. Iain, and S. Yaser (2017) Hand keypoint detection in single images using multiview bootstrapping. In CVPR, Cited by: §3.3.
  • V. Vapnik and A. Lerner (1963) Pattern recognition using generalized portrait method. Automation and Remote Control 24. Cited by: §3.
  • S. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh (2016) Convolutional pose machines. In CVPR, Cited by: §3.3.
  • Y. Xiong, K. Zhu, D. Lin, and X. Tang (2015) Recognize complex events from static images by fusing deep channels. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1600–1609. External Links: Document, ISSN 1063-6919 Cited by: §3.1.
  • K. Zhang, Z. Zhang, Z. Li, and Y. Qiao (2016) Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters 23 (10), pp. 1499–1503. External Links: ISSN 1558-2361, Link, Document Cited by: §3.2.