Multi-modal Approach for Affective Computing

04/25/2018
by Siddharth Siddharth, et al.

Throughout the past decade, many studies have classified human emotions using only a single sensing modality, such as face video, electroencephalogram (EEG), electrocardiogram (ECG), or galvanic skin response (GSR). The results of these studies are constrained by the limitations of the chosen modality, such as the absence of physiological biomarkers in face-video analysis, the poor spatial resolution of EEG, or the poor temporal resolution of GSR. Scant research has been conducted to compare the merits of these modalities and to understand how best to use them individually and jointly. Using the multi-modal AMIGOS dataset, this study compares the performance of human emotion classification across multiple computational approaches applied to face videos and various bio-sensing modalities. Using a novel method for compensating the physiological baseline, we show an increase in the classification accuracy of the various approaches we use. Finally, we present a multi-modal emotion-classification approach in the domain of affective-computing research.
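The abstract mentions compensating for the physiological baseline before classification. One common way to do this (a minimal sketch only; the paper's exact compensation method is not described in the abstract, and the function name and data shapes below are assumptions) is to subtract each participant's resting-state feature average from their per-trial features, so that classifiers operate on deviations from baseline rather than absolute physiological values:

```python
import numpy as np

def baseline_compensate(trial_features, baseline_features):
    """Subtract the mean baseline-period feature vector from each trial.

    trial_features    : (n_trials, n_features) array of per-trial features
                        (e.g. EEG band powers, mean GSR, heart rate).
    baseline_features : (n_windows, n_features) array of features computed
                        over resting-state windows for the same participant.

    Illustrative only: the paper's actual compensation method may differ.
    """
    baseline_mean = np.mean(baseline_features, axis=0)
    return trial_features - baseline_mean

# Toy example: 3 trials x 4 physiological features, 5 baseline windows.
trials = np.array([[2.0, 1.0, 0.5, 3.0],
                   [2.5, 1.2, 0.4, 3.1],
                   [1.8, 0.9, 0.6, 2.9]])
baseline = np.ones((5, 4))  # pretend all resting-state features are 1.0
compensated = baseline_compensate(trials, baseline)
```

Per-participant compensation of this kind is a standard normalization step in physiological-signal pipelines, since absolute EEG/ECG/GSR levels vary widely between individuals.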
