CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition

04/20/2020
by Zhengwei Wang, et al.

Egocentric gestures are the most natural form of communication for humans interacting with wearable devices such as VR/AR helmets and glasses. A major issue for real-world applications in such scenarios is that it may easily become necessary to add new gestures to the system; for example, a proper VR system should allow users to customize gestures incrementally. Traditional deep learning methods require storing all previous class samples in the system and retraining the model from scratch on both the previous and new samples, which consumes substantial memory and significantly increases computation over time. In this work, we demonstrate a lifelong 3D convolutional framework, the Class incremental Network (CatNet), which takes temporal information in videos into account and enables lifelong learning for egocentric gesture video recognition by learning the feature representation of an exemplar set selected from previous class samples. Importantly, we propose a two-stream CatNet, which deploys RGB and depth modalities to train two separate networks. We evaluate CatNets on a publicly available dataset, the EgoGesture dataset, and show that CatNets can learn many classes incrementally over a long period of time. Results also demonstrate that the two-stream architecture achieves the best performance on both joint training and class incremental training, compared to three other one-stream architectures. The code and pre-trained models used in this work are available at https://github.com/villawang/CatNet.
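The exemplar-based strategy the abstract describes (keeping a small, representative subset of each previous class instead of all samples) can be sketched with a herding-style selection plus nearest-mean classification, in the spirit of iCaRL. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`select_exemplars`, `nearest_mean_classify`) and the use of raw NumPy features in place of 3D ConvNet embeddings are assumptions for illustration.

```python
import numpy as np

def select_exemplars(features, m):
    """Herding-style selection: greedily pick m samples whose running mean
    best approximates the true class mean of the feature vectors.
    (Illustrative stand-in for selecting an exemplar set per class.)"""
    mu = features.mean(axis=0)
    selected, running_sum = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # Distance between the class mean and each candidate running mean.
        dists = np.linalg.norm(mu - (running_sum + features) / k, axis=1)
        dists[selected] = np.inf  # never pick the same sample twice
        idx = int(np.argmin(dists))
        selected.append(idx)
        running_sum += features[idx]
    return selected

def nearest_mean_classify(query, exemplar_means):
    """Predict the class whose exemplar mean is closest to the query feature."""
    classes = list(exemplar_means)
    dists = [np.linalg.norm(query - exemplar_means[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Toy usage: two well-separated "gesture classes" in a 4-D feature space.
rng = np.random.default_rng(0)
feats_wave = rng.normal(0.0, 0.1, size=(20, 4))
feats_pinch = rng.normal(5.0, 0.1, size=(20, 4))
means = {
    "wave": feats_wave[select_exemplars(feats_wave, 5)].mean(axis=0),
    "pinch": feats_pinch[select_exemplars(feats_pinch, 5)].mean(axis=0),
}
print(nearest_mean_classify(np.zeros(4), means))
```

When a new gesture class arrives, only its exemplar set and mean need to be added; the stored exemplars of old classes keep memory bounded, which is the core motivation stated above.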


Related research:

- Simultaneous Segmentation and Recognition: Towards more accurate Ego Gesture Recognition (09/18/2019)
- Egocentric Gesture Recognition for Head-Mounted AR devices (08/16/2018)
- Continual Learning of Hand Gestures for Human-Robot Interaction (04/13/2023)
- iCaRL: Incremental Classifier and Representation Learning (11/23/2016)
- GestARLite: An On-Device Pointing Finger Based Gestural Interface for Smartphones and Video See-Through Head-Mounts (04/19/2019)
- Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling (02/19/2018)
- The Impact of Quantity of Training Data on Recognition of Eating Gestures (12/11/2018)
