The Impact of Quantity of Training Data on Recognition of Eating Gestures

12/11/2018
by Yiru Shen et al.

This paper considers the problem of recognizing eating gestures by tracking wrist motion. Eating gestures can have large variability in motion depending on the subject, utensil, and type of food or beverage being consumed. Previous works have shown viable proofs-of-concept of recognizing eating gestures in laboratory settings with small numbers of subjects and food types, but it is unclear how well these methods would work if tested on a larger population in natural settings. As more subjects, locations, and foods are tested, a larger amount of motion variability could cause a decrease in recognition accuracy. To explore this issue, this paper describes the collection and annotation of 51,614 eating gestures made by 269 subjects eating a meal in a cafeteria. Experiments are described that explore the complexity of hidden Markov models (HMMs) and the amount of training data needed to adequately capture the motion variability across this large data set. Results found that HMMs needed a complexity of 13 states and 5 Gaussians to reach a plateau in accuracy, signifying that a minimum of 65 samples per gesture type is needed. Results also found that 500 training samples per gesture type were needed to identify the point of diminishing returns in recognition accuracy. Overall, the findings provide evidence that the size of data set typically used to demonstrate a laboratory proof-of-concept may not be large enough to capture all the motion variability that could be expected in transitioning to deployment with a larger population. Our data set, which is 1-2 orders of magnitude larger than all data sets tested in previous works, is being made publicly available.
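The 65-sample figure follows from a simple parameter-coverage argument: an HMM with 13 states, each modeled by a 5-component Gaussian mixture, has 13 × 5 = 65 mixture components, and each component needs at least one training sample to estimate its parameters. A minimal sketch of this arithmetic (the function name is illustrative, not from the paper):

```python
def min_samples_per_gesture(n_states: int, n_gaussians: int) -> int:
    """Lower bound on training samples per gesture type, assuming at
    least one sample is needed per Gaussian mixture component per state."""
    return n_states * n_gaussians

# The paper's plateau configuration: 13 states, 5 Gaussians per state.
print(min_samples_per_gesture(13, 5))  # 65
```

Note this is only a floor implied by the model's complexity; the paper's experiments found roughly 500 samples per gesture type were needed before accuracy gains diminished.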

