Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning

01/26/2023
by Linfeng Xu, et al.

With the rapid development of wearable cameras, massive collections of egocentric video for first-person visual perception have become available. Predicting first-person activity from egocentric video faces many challenges, including a limited field of view, occlusions, and unstable motion. Because sensor data from wearable devices facilitates human activity recognition, multi-modal activity recognition is attracting increasing attention. However, the scarcity of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition. Meanwhile, deploying deep learning in the real world has led to a focus on continual learning, which often suffers from catastrophic forgetting. Yet catastrophic forgetting in egocentric activity recognition, especially in the context of multiple modalities, remains unexplored because no suitable dataset exists. To support this research, we present UESTC-MMEA-CL, a multi-modal egocentric activity dataset for continual learning, collected with self-developed glasses that integrate a first-person camera and wearable sensors. It contains synchronized video, accelerometer, and gyroscope data for 32 types of daily activities performed by 10 participants. We compare its class types and scale with those of other publicly available datasets, and provide a statistical analysis of the sensor data to show its auxiliary effect for different behaviors. We report egocentric activity recognition results using the three modalities (RGB, acceleration, and gyroscope) separately and jointly on a base network architecture. To explore catastrophic forgetting in continual learning tasks, we extensively evaluate four baseline methods with different multi-modal combinations. We hope UESTC-MMEA-CL will promote future studies on continual learning for first-person activity recognition in wearable applications.
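The recognition experiments described above use RGB, acceleration, and gyroscope streams both separately and jointly on a base network. As a rough illustration of what such a multi-modal baseline can look like, here is a minimal PyTorch sketch of fusion by feature concatenation; the module names, feature dimensions, and encoder design are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a late-fusion multi-modal classifier for RGB +
# accelerometer + gyroscope inputs. All names and dimensions here are
# illustrative assumptions, not the paper's actual base network.
import torch
import torch.nn as nn

class MultiModalFusionNet(nn.Module):
    """Concatenates per-modality embeddings before a shared classifier head."""

    def __init__(self, num_classes=32, rgb_dim=512, imu_dim=64):
        super().__init__()
        # Hypothetical RGB input: a pooled 512-d clip embedding
        # (e.g., features from a pretrained video backbone).
        self.rgb_head = nn.Linear(rgb_dim, 256)
        # Each 3-axis IMU stream (accel or gyro) is encoded by a small 1D CNN.
        self.acc_encoder = self._imu_encoder(imu_dim)
        self.gyro_encoder = self._imu_encoder(imu_dim)
        self.classifier = nn.Linear(256 + 2 * imu_dim, num_classes)

    @staticmethod
    def _imu_encoder(out_dim):
        return nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2),  # (B, 3, T) -> (B, 32, T)
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # temporal pooling -> (B, 32, 1)
            nn.Flatten(),                                # -> (B, 32)
            nn.Linear(32, out_dim),
        )

    def forward(self, rgb_feat, acc, gyro):
        # rgb_feat: (B, 512) pooled video features
        # acc, gyro: (B, 3, T) raw 3-axis sensor windows
        z = torch.cat(
            [self.rgb_head(rgb_feat),
             self.acc_encoder(acc),
             self.gyro_encoder(gyro)],
            dim=1,
        )
        return self.classifier(z)

model = MultiModalFusionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 3, 128), torch.randn(4, 3, 128))
print(logits.shape)  # torch.Size([4, 32]) -- one score per activity class
```

In a continual-learning evaluation, a model like this would be trained on successive subsets of the 32 activity classes, with accuracy on earlier classes re-measured after each stage to quantify catastrophic forgetting.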
