EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition

04/29/2020
by Jun-Ho Choi, et al.

Human activity recognition using multiple sensors has been a challenging but promising task in recent decades. In this paper, we propose a deep multimodal fusion model for activity recognition based on the recently proposed feature fusion architecture named EmbraceNet. Our model processes the data from each sensor independently, combines the per-sensor features with the EmbraceNet architecture, and post-processes the fused feature to predict the activity. We also propose additional processing steps to boost the performance of our model. We submit the results obtained from our proposed model to the SHL recognition challenge under the team name "Yonsei-MCML."
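
The core of the model is the EmbraceNet fusion step (Choi & Lee, 2019): each modality is projected to a common embedding size by a "docking" layer, and the fused vector is then assembled by sampling, for every output dimension, which modality contributes that dimension. Below is a minimal PyTorch sketch of that fusion step under those assumptions; the class name, layer sizes, and equal modality probabilities are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbraceNetFusion(nn.Module):
    """Minimal sketch of the EmbraceNet fusion layer (Choi & Lee, 2019).

    Each modality's feature vector is projected to a common size by a
    docking layer; the fused vector takes every output dimension from
    exactly one modality, chosen by multinomial sampling.
    """

    def __init__(self, input_sizes, embedding_size=256):
        super().__init__()
        self.embedding_size = embedding_size
        # One docking (fully connected) layer per modality.
        self.docking = nn.ModuleList(
            nn.Linear(size, embedding_size) for size in input_sizes
        )

    def forward(self, inputs, probabilities=None):
        # inputs: list of tensors, each of shape (batch, input_sizes[k]).
        docked = torch.stack(
            [torch.relu(dock(x)) for dock, x in zip(self.docking, inputs)],
            dim=1,
        )  # (batch, num_modalities, embedding_size)
        batch, num_modalities, _ = docked.shape

        if probabilities is None:
            # Assumption: every modality is equally likely to contribute.
            probabilities = torch.full(
                (batch, num_modalities), 1.0 / num_modalities,
                device=docked.device,
            )

        # For each output dimension, sample the contributing modality.
        indices = torch.multinomial(
            probabilities, self.embedding_size, replacement=True
        )  # (batch, embedding_size)
        mask = F.one_hot(indices, num_modalities).permute(0, 2, 1)

        # Fused feature: each dimension taken from exactly one modality.
        return (docked * mask.to(docked.dtype)).sum(dim=1)

# Usage: fuse two modalities with feature sizes 128 and 64 (illustrative).
fusion = EmbraceNetFusion(input_sizes=[128, 64], embedding_size=256)
fused = fusion([torch.randn(8, 128), torch.randn(8, 64)])  # -> (8, 256)
```

Because the sampled mask selects complementary dimensions across modalities, the fused vector keeps its size fixed regardless of how many modalities are present, which is what makes the architecture robust when individual sensors drop out.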

Related research

10/04/2018 · Activity Recognition using Hierarchical Hidden Markov Models on Streaming Sensor Data
Activity recognition from sensor data deals with various challenges, suc...

01/25/2016 · Egocentric Activity Recognition with Multimodal Fisher Vector
With the increasing availability of wearable devices, research on egocen...

03/08/2023 · Robust Multimodal Fusion for Human Activity Recognition
The proliferation of IoT and mobile devices equipped with heterogeneous ...

08/15/2022 · Self-Supervised Multimodal Fusion Transformer for Passive Activity Recognition
The pervasiveness of Wi-Fi signals provides significant opportunities fo...

02/04/2017 · Probabilistic Sensor Fusion for Ambient Assisted Living
There is a widely-accepted need to revise current forms of health-care p...

07/03/2017 · Structure Optimization for Deep Multimodal Fusion Networks using Graph-Induced Kernels
A popular testbed for deep learning has been multimodal recognition of h...

05/14/2019 · Disparity-Augmented Trajectories for Human Activity Recognition
Numerous methods for human activity recognition have been proposed in th...
