Utilizing Deep Learning Towards Multi-modal Bio-sensing and Vision-based Affective Computing

05/16/2019
by Siddharth Siddharth, et al.

In recent years, bio-sensing signals such as the electroencephalogram (EEG) and electrocardiogram (ECG) have garnered interest for applications in affective computing. In parallel, deep learning has produced large performance gains on vision-based research problems such as object detection, yet these advances have not translated adequately into bio-sensing research. This work applies novel deep-learning-based methods to the bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first evaluate the emotion-classification performance of each modality individually, and then the performance obtained by fusing the features from these modalities. We show that our algorithms outperform previously reported results for emotion/valence/arousal/liking classification on the DEAP and MAHNOB-HCI datasets and establish benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate performance when the datasets are combined, using transfer learning to show that the proposed method overcomes inconsistencies between the datasets. In all, we analyze multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, using a convolution-deconvolution network, we propose a new technique for identifying the salient brain regions corresponding to various affective states.
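To make the fusion step concrete, the sketch below shows feature-level fusion in PyTorch: per-modality encoders project EEG and video feature vectors into a shared space, the encoded features are concatenated, and a linear head classifies the affective state (e.g., binary valence). All dimensions, layer widths, and the plain MLP encoders are illustrative assumptions for exposition, not the architecture used in the paper.

```python
# Minimal sketch of feature-level multi-modal fusion (hypothetical sizes).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, eeg_dim=128, video_dim=256, hidden=64, n_classes=2):
        super().__init__()
        # Per-modality encoders map each feature vector to a shared width.
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        # The classifier head sees the concatenated (fused) representation.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, eeg_feats, video_feats):
        fused = torch.cat(
            [self.eeg_enc(eeg_feats), self.video_enc(video_feats)], dim=-1)
        return self.head(fused)

# Stand-in batch of 8 trials with random features in place of real data.
model = FusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 256))
print(logits.shape)  # torch.Size([8, 2])
```

Training such a model on one dataset and fine-tuning it on another is one simple way to realize the transfer-learning evaluation described above.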


Related research

04/25/2018 · Multi-modal Approach for Affective Computing
Throughout the past decade, many studies have classified human emotions ...

09/30/2019 · On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-sensing and Vision-based Analysis, Evaluations and Insights
Automobiles for our roadways are increasingly utilizing advanced driver ...

01/26/2023 · Real-Time Digital Twins: Vision and Research Directions for 6G and Beyond
This article presents a vision where real-time digital twins of the phys...

06/24/2022 · Multi-modal Sensor Data Fusion for In-situ Classification of Animal Behavior Using Accelerometry and GNSS Data
We examine using data from multiple sensing modes, i.e., accelerometry a...

02/16/2023 · NUAA-QMUL-AIIT at Memotion 3: Multi-modal Fusion with Squeeze-and-Excitation for Internet Meme Emotion Analysis
This paper describes the participation of our NUAA-QMUL-AIIT team in the...

08/24/2020 · Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data
With recent developments in smart technologies, there has been a growing...

05/01/2019 · Attention Monitoring and Hazard Assessment with Bio-Sensing and Vision: Empirical Analysis Utilizing CNNs on the KITTI Dataset
Assessing the driver's attention and detecting various hazardous and non...
