WEMAC: Women and Emotion Multi-modal Affective Computing dataset

03/01/2022
by Jose A. Miranda, et al.

Among the seventeen Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all United Nations member states, the fifth SDG calls for action to make gender equality a fundamental human right and an essential foundation for a better world, including the eradication of all forms of violence against women. Within this context, the UC3M4Safety research team aims to develop Bindi: a cyber-physical system with embedded Artificial Intelligence algorithms for real-time user monitoring, targeting the detection of affective states with the ultimate goal of early detection of risk situations for women. To this end, the system relies on wearable affective computing with smart sensors, data encryption for the secure and accurate collection of presumed crime evidence, and a remote connection to protecting agents. Towards the development of such a system, recordings for several laboratory and in-the-wild datasets are in progress, all contained within the UC3M4Safety Database. This paper presents and details the first release of WEMAC, a novel multi-modal dataset comprising a laboratory-based experiment in which 47 women volunteers were exposed to validated audio-visual stimuli, delivered through a virtual reality headset, to induce real emotions, while physiological signals, speech signals, and self-reports were acquired. We believe this dataset will assist research on multi-modal affective computing using physiological and speech information.
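To make the multi-modal structure concrete, the sketch below shows how one might load and time-align a single session's physiological and speech recordings with its self-reports. The directory layout, file names, column structure, and sampling rate are illustrative assumptions, not the dataset's documented format.

```python
# Minimal sketch of loading one WEMAC-style recording session.
# NOTE: paths, file names, and sampling rates below are illustrative
# assumptions, not the dataset's documented layout.
from pathlib import Path

import pandas as pd
import soundfile as sf

SESSION_DIR = Path("wemac/volunteer_01/stimulus_01")  # hypothetical layout

# Physiological channels (e.g., BVP, GSR, skin temperature) as a CSV,
# one row per sample.
physio = pd.read_csv(SESSION_DIR / "physio.csv")          # hypothetical file

# Speech captured during/after the stimulus.
speech, speech_sr = sf.read(SESSION_DIR / "speech.wav")   # hypothetical file

# Self-reported emotion labels collected after each stimulus.
labels = pd.read_csv(SESSION_DIR / "self_report.csv")     # hypothetical file

# Align modalities on a common clock: trim both streams to the overlapping
# time span so that analysis windows drawn from them refer to the same moments.
physio_sr = 200  # Hz, assumed physiological sampling rate
duration = min(len(physio) / physio_sr, len(speech) / speech_sr)
physio = physio.iloc[: int(duration * physio_sr)]
speech = speech[: int(duration * speech_sr)]

print(f"{duration:.1f}s of synchronized physiological and speech data, "
      f"{len(labels)} self-report entries")
```

A real pipeline would repeat this per volunteer and per stimulus, then segment the synchronized streams into fixed-length windows for feature extraction or model training.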
