Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data

08/24/2020
by   Kyle Ross, et al.

With recent developments in smart technologies, there has been a growing focus on the use of artificial intelligence and machine learning for affective computing to further enhance the user experience through emotion recognition. Typically, machine learning models used for affective computing are trained using manually extracted features from biological signals. Such features may not generalize well to large datasets and may be sub-optimal in capturing the information contained in the raw input data. One approach to address this issue is to use fully supervised deep learning methods to learn latent representations of the biosignals. However, this method requires human supervision to label the data, which may be unavailable or difficult to obtain. In this work, we propose an unsupervised framework to reduce the reliance on human supervision. The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals. These representations are then used within a random forest model for binary arousal classification. This approach reduces the need for human supervision and enables the aggregation of datasets, allowing for higher generalizability. To validate this framework, an aggregated dataset comprising the AMIGOS, ASCERTAIN, CLEAS, and MAHNOB-HCI datasets is created. The results of our proposed method are compared with those obtained using convolutional neural networks, as well as with methods that employ manual extraction of hand-crafted features. The methodology used for fusing the two modalities is also investigated. Lastly, we show that our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets using ECG and EDA biosignals. The results demonstrate the widespread applicability of stacked convolutional autoencoders combined with machine learning for affective computing.
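A minimal sketch of the kind of pipeline the abstract describes: a 1D convolutional autoencoder per modality trained without labels on reconstruction, whose bottleneck features are concatenated (feature-level fusion) and passed to a random forest for binary arousal classification. This is not the authors' implementation; the window length, layer sizes, latent dimensionality, and Keras/scikit-learn choices are illustrative assumptions.

```python
# Sketch only: unsupervised conv autoencoders for ECG/EDA + random forest classifier.
# All architectural parameters below are assumptions, not the paper's settings.
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

WINDOW = 1280   # assumed samples per biosignal window
LATENT = 64     # assumed bottleneck (latent representation) size

def build_conv_autoencoder(window=WINDOW, latent=LATENT):
    """1D convolutional autoencoder; returns (autoencoder, encoder)."""
    inp = layers.Input(shape=(window, 1))
    x = layers.Conv1D(32, 7, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, 7, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(latent, name="latent")(x)                     # learned representation
    x = layers.Dense((window // 4) * 64, activation="relu")(z)
    x = layers.Reshape((window // 4, 64))(x)
    x = layers.Conv1DTranspose(32, 7, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv1DTranspose(1, 7, strides=2, padding="same")(x)
    autoencoder = models.Model(inp, out)
    encoder = models.Model(inp, z)
    autoencoder.compile(optimizer="adam", loss="mse")              # unsupervised reconstruction loss
    return autoencoder, encoder

# One autoencoder per modality; training needs no emotion labels.
ecg_ae, ecg_enc = build_conv_autoencoder()
eda_ae, eda_enc = build_conv_autoencoder()
# ecg_windows, eda_windows: hypothetical arrays of shape (n_samples, WINDOW, 1)
# ecg_ae.fit(ecg_windows, ecg_windows, epochs=50, batch_size=64)
# eda_ae.fit(eda_windows, eda_windows, epochs=50, batch_size=64)

# Feature-level fusion of the two latent spaces, then a supervised arousal model.
# z = np.concatenate([ecg_enc.predict(ecg_windows),
#                     eda_enc.predict(eda_windows)], axis=1)
# clf = RandomForestClassifier(n_estimators=100).fit(z, arousal_labels)  # binary arousal labels
```

Only the final random forest step uses labels, which is how a pipeline of this shape limits the need for human supervision while still permitting aggregation of multiple corpora at the representation-learning stage.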
