Detection of bimanual gestures everywhere: why it matters, what we need and what is missing

07/09/2017 ∙ by Divya Shah, et al. ∙ Università di Genova

Bimanual gestures are of the utmost importance for the study of motor coordination in humans and in everyday activities. A reliable detection of bimanual gestures in unconstrained environments is fundamental for their clinical study and to assess common activities of daily living. This paper investigates techniques for a reliable, unconstrained detection and classification of bimanual gestures. It assumes the availability of inertial data originating from the two hands/arms, builds upon a previously developed technique for gesture modelling based on Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR), and compares different modelling and classification techniques, which are based on a number of assumptions inspired by literature about how bimanual gestures are represented and modelled in the brain. Experiments show results related to 5 everyday bimanual activities, which have been selected on the basis of three main parameters: (not) constraining the two hands by a physical tool, (not) requiring a specific sequence of single-hand gestures, being recursive (or not). In the best performing combination of modelling approach and classification technique, five out of five activities are recognised up to an accuracy of 97.82%.


1 Introduction

Bimanual gestures are central to everyday life, and constitute a fundamental ground for the study of basic principles of human behaviour. Traditionally, the study of bimanual gestures in humans focuses on very simple motions involving fingers and hands, including coordination, symmetry, in-phase and anti-phase behaviours. These studies are aimed at understanding the dynamics associated with bimanual movements and target aspects of motor control, as exemplified by the Haken-Kelso-Bunz (HKB) model for self-organisation in human movement pattern formation Kelso1984 Hakenetal1985.

In this paper, we are interested in determining how the study of bimanual gestures can lead to automated systems for their detection and classification in unconstrained, everyday environments. In the context of assistive systems for fragile populations, including elderly people, people with disabilities and other people with special needs, the need arises to provide caregivers, medical staff or simply relatives with a tool able to assess the ability of assisted people to perform bimanual gestures in their natural environment. Such an approach is in line with the ageing in place paradigm, a recent healthcare position which acknowledges and focuses on the role that a person's surroundings (the home, the neighbourhood) play for their well-being in older age Wiles11. A familiar environment brings about a sense of security, independence and autonomy, which has a positive impact on routines and activities, and ultimately on quality of life.

There is a big gap between clinical studies involving the coordination of finger movements and the recognition of such general-purpose bimanual gestures as opening and closing curtains, sweeping the floor, or filling a cup with water. However, the first step to take is to determine how current understanding of bimanual movements and their representation in the brain can lead to better detection and classification systems in real-world environments. Three factors must be considered when designing such a system.

Factor 1. It is debated whether bimanual gestures are controlled in intrinsic or extrinsic coordinates, or rather multiple coordination strategies co-occur.

Bimanual movements tend towards motion symmetry and stabilisation Kelso1984. This has typically been explained by the co-activation of homologous muscles in neuronal motor structures, due to bilateral cross-talk, suggesting that bimanual coordination is mostly carried out using intrinsic (i.e., proprioceptive) coordinates. Mechsner et al. suggest that, instead, such coordination is due to spatial, perceptual symmetry only, i.e., it relies on extrinsic (i.e., exteroceptive) visual coordinates Mechsneretal2001. If this were true, there would be no need to map visual representations to motor representations (and vice versa), and voluntary movements could be organised on the basis of perceptual goals. The role of different coordinates and their interplay in bimanual coordination mechanisms has been studied by Sakurada and colleagues Sakuradaetal2015. Starting from studies relating temporal and spatial couplings in bilateral motions, including the adaptation exhibited by two hands having to perform motions of different speed (i.e., the fastest becoming slightly slower and vice versa) Heueretal1998, and the fact that the movement of the non-dominant hand is likely to be assimilated by that of the dominant one Byblowetal2000, they demonstrate a relatively stronger contribution of intrinsic components in bimanual coordination, although both components are flexibly regulated according to the specific bimanual task. Furthermore, they argue that the central nervous system regulates bilateral coordination at different levels, as hypothesised by Swinnen and Wenderoth SwinnenandWenderoth2004. The importance of both intrinsic and extrinsic coordinates seems to be confirmed by recent studies in interpersonal coordination Kodamaetal2015. It is suggested that coordinated motion is informed by a full perception-action coupling, including visual and haptic sensorimotor loops, which propagates to the neuromuscular system.

We derive two requirements for our analysis:

  • we must consider models agnostic with respect to an explicit coordination at the motor level between the two hands/arms;

  • classification techniques must be robust to variation in speed, both for the bimanual gesture as a whole and for the single hand/arm.

Factor 2. Ageing affects the way we move, and therefore coordination in bimanual gestures varies over time.

According to the dedifferentiation paradigm, ageing is now considered a parallel and distributed process occurring at various levels in the human body. Dedifferentiation can be defined as "the process by which structures, mechanisms of behaviour that were specialised for a given function lose their specialisation and become simplified, less distinct or common to different functions" BaltesandLindenberger1997. As a consequence, ageing affects not only individual body subsystems (i.e., the muscular system or the brain), but also their interactions. Sleimen-Malkoun and colleagues argue that such a process can lead to common and intertwined causes of cognitive ageing, i.e., a general slowing down of information processing, including the information related to procedural memory and, therefore, movement and coordination SleimenMalkounetal2014. It is posited that the ageing brain undergoes anatomical and physiological changes which reorganise activation patterns across neural circuits. As far as motor task complexity is concerned, a generalised increase in the activation of brain areas is even more evident, reflecting a greater involvement of processes related to executive control.

Also in this case, we derive an important requirement:

  • we must consider models which can be adapted over time and which follow the evolution of the musculoskeletal system, at least implicitly, thus requiring the use of machine learning techniques.

Factor 3. Different mental representations of sensorimotor loops and action, involving discrete and continuous organisation principles, are under debate.

Besides the models aimed at representing bimanual gestures within a motor control framework, much work has been carried out in the past few years to devise building blocks for mental action representation Kelso1984 Hakenetal1985 Mechsneretal2001 SwinnenandWenderoth2004 Sakuradaetal2015 Kodamaetal2015. Assuming a goal-directed cognitive perspective, it has been shown how movements can be represented as a serial and functional combination of goal-related body postures, or goal postures (i.e., key frames), as well as their transitional states. Furthermore, it has been posited that movements can be expressed as incremental changes between goal postures, which reduces the amount of effortful attention needed for their execution Rosenbaumetal2007. On these premises, Basic Action Concepts (BACs) have been proposed as building blocks of mental action representations. BACs represent chunked body postures related to common functions for realising goal-directed actions. Schack and colleagues posit that complex (including bimanual) actions are mentally represented as a combination of executed actions and intended or observed effects Schacketal2014. Furthermore, they argue that the map linking motion and perceptual effects is bi-directional and stored hierarchically in long-term memory, in structures resembling dendrograms Schack2012. This is a specific case of what Bernstein defined as the degrees of freedom problem Bernstein1967. The problem concerns how the various parts of the motor system can be harnessed so as to generate coordinated behaviour when needed. As Bernstein theorised, a key role is played by muscular-articular links (i.e., synergies) in constraining how many degrees of freedom lead to dexterous behaviour. Harrison and Stergiou argue that dexterity and motion robustness are enabled by multi-functional (degenerate) body parts able to assume context-dependent roles. As a consequence, task-specific human-environment interactions can flexibly generate adaptable motor solutions.

We derive two requirements:

  • although motion models are intrinsically continuous, we need to derive a discrete representation able to provide action labels which, in principle, can lead to more complex organisations;

  • models must capture dexterity in everyday environments and robustness to different executions of the same gesture, which leads to models obtained by human demonstration.

On the basis of these requirements, we propose a bimanual wearable system able to detect and classify bimanual gestures using the inertial information provided by two wrist-mounted smartwatches. The system builds upon and significantly extends previous work Bruno12, and adheres to the wearable sensing paradigm, which envisions the use of sensors located on a person's body, either with wearable devices such as smartwatches or with purposely engineered articles of clothing Lara13, to determine a number of important parameters, in our case motion. Since sensors are physically carried around, the monitoring activity can virtually occur in any place, and it is usually focused on the detection of movements and gestures.

The contribution is four-fold: (i) an analysis of two procedures for modelling bimanual gestures, respectively explicitly and implicitly taking the correlation between the two hands/arms into account; (ii) an analysis of two procedures for the classification of run-time data, respectively relying on the probability measure and on the Mahalanobis distance to compute the similarity between run-time data and previously stored models of bimanual gestures; (iii) a performance assessment of the developed techniques with the standard statistical metrics of accuracy, precision and recall over the collected dataset, as well as under real-life conditions; (iv) a dataset of recordings of five bimanual gestures, performed by ten volunteers, to support reproducible research.

The paper is structured as follows. Section 2 describes the theoretical background of the proposed modelling and recognition procedures, as well as related work, in view of the requirements outlined above. Section 3 provides a thorough description of the system's architecture and insights on the five bimanual gestures considered for the analysis; the performance of the system is presented and discussed in Section 4. Conclusions follow.

2 Related Work

Wearable systems for the automatic recognition of human gestures and full-body movements typically rely on inertial information, and accelerometers prove to be the most informative sensors for the task Lester05. To comply with end users' constraints related to the impact of the monitoring system on their appearance and freedom of motion, most solutions adopt a single sensing device, either located at the waist Mathie04 or, as is becoming more and more common, at the wrist Dietrich14.

Due to the similarities in the input data and in the operating conditions, most systems adopt a similar architecture Lara13, sketched in Figure 1. The architecture identifies two stages, namely a training phase (on the left hand side in the Figure) and a testing phase (on the right hand side). The training phase, typically executed off-line without strict computational constraints, is devoted to the creation of a compact representation of a gesture/movement of interest on the basis of an informative set of examples; this also complies with the requirements derived above. The testing phase, which may be subject to real-time and computational constraints, is responsible for the analysis of an input data stream to detect the gesture, among the modelled ones, which most closely matches it, if any, and label it accordingly. Please note that the word "testing" is used here with respect to the data stream to analyse, with no reference to the stage of development of the monitoring system. Specifically, we denote with the term "validation" the development stage in which we assess the performance of the system, and with the term "deployment" the stage in which the system is actually adopted by end users. During validation, the testing phase executes an off-line analysis of labelled gesture recordings, while during deployment the testing phase executes an on-line analysis of a continuous data stream, in unsupervised conditions.

Figure 1: The typical architecture of wearable sensing systems for the recognition of human gestures. The left hand side lists the tasks of the training phase, while the right hand side lists the tasks executed during the testing phase.

During the training phase (see Figure 1, left hand side), it is first necessary to acquire and build the training set of measured attributes for the gestures of interest. Two approaches are possible. The specialised approach envisions the creation of a training set exclusively composed of gesture recordings performed by the person to be monitored during the deployment stage. This approach maximises the recognition accuracy at the expense of a long setup for each new installation. However, it enforces the requirements derived above, in that it allows someone to periodically retrain the system if necessary. Conversely, the generalised approach envisions the creation of a training set composed of a large number of gesture recordings, performed by a number of volunteers (not necessarily including the person to monitor). Using gestures provided by different individuals maximises the likelihood that the model is able to capture a widely varied dexterity, as posited above. This approach, albeit more prone to errors, greatly reduces the setup costs and is to be preferred in the case of Ambient Assisted Living applications, in which the perceived ease of use of the system is crucial for its success Bruno12; Bulling14.

Once the training set is available, it is typically filtered for noise reduction and/or formatted for later processing (data pre-processing stage). Then, the purpose of the feature extraction procedure is to determine relevant information (in the form of features) from raw signals. Features are expected to (i) enhance the differences between gestures while being invariant when extracted from data patterns corresponding to the same gesture, (ii) lead to a compact representation of the gesture and (iii) require limited computational time and resources for their extraction and processing, since these operations are subject to real-time constraints during deployment. The literature discriminates between statistical features, extracted using methods such as the Fourier and the Wavelet transform on the basis of quantitative characteristics of the data, and structural features, aimed at enhancing the interrelationship among data Lara13. Most human activity recognition systems based on inertial measurements make use of statistical features, usually defined in the time or frequency domain Krassnig10. Alternative feature extraction methods include the use of Principal Component Analysis Mashita12 and autoregressive models Lee11. With respect to time-domain features exclusively, which minimise the computational load introduced by the feature extraction procedure, gravity and body acceleration are among the most commonly adopted Karantonis06; Krassnig10; Bruno12. Discriminating between gravity and body acceleration is a non-trivial operation, for the two features overlap both in the time and frequency domains. Two approaches are typically adopted to separate them. The former exploits additional sensors, such as gyroscopes Chul09 or magnetometers Bonnet07, to compute the orientation or attitude of another body part, usually the torso. The latter exploits the known characteristics of gravity and uses either low-pass filters to isolate the gravitational component Karantonis06; Bruno12, or high-pass filters to isolate the body acceleration components Sharma08.

Once the gravity and body acceleration components are isolated, the need arises to model them as features. Six different 1-dimensional sampled signals are available, i.e., the three gravity and the three body acceleration components along the x, y and z axes. Again, two possibilities are discussed in the literature. The first is to assume the signals to be pairwise uncorrelated, which yields six separate 2-dimensional features, each composed of timing information and the corresponding signal value along a given axis. The second is to assume the x, y and z components of gravity and body acceleration to be correlated, which yields two separate 4-dimensional features, each composed of timing information and the corresponding signal values along all three axes. The explicit use of correlation among tri-axial acceleration data has been proved to lead to better results in terms of classification rate and computational time Cho08; Krassnig10; Bruno12. It is noteworthy that, in the case of bimanual gestures, it is possible to explicitly model the correlation among inertial data originating from the two hands/arms, or to consider them as separate signals. In this way, we can comply, at least in part, with the requirements derived above.
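As an illustration of the two layouts, the following sketch (assuming NumPy arrays named t, gravity and body, which are not from the original work) builds both the six pairwise-uncorrelated 2-dimensional features and the two correlated 4-dimensional features:

```python
import numpy as np

def uncorrelated_features(t, gravity, body):
    """Six 2-D features: (time, signal value) for each axis of gravity and body acceleration.

    t: (N,) time stamps; gravity, body: (N, 3) tri-axial streams.
    """
    features = []
    for signal in (gravity, body):
        for axis in range(3):
            features.append(np.column_stack([t, signal[:, axis]]))
    return features  # list of six (N, 2) arrays

def correlated_features(t, gravity, body):
    """Two 4-D features: (time, x, y, z) for gravity and for body acceleration."""
    return [np.column_stack([t, gravity]), np.column_stack([t, body])]

# usage on synthetic data sampled at 40 Hz
t = np.arange(200) / 40.0
gravity = np.tile([0.0, 0.0, 9.81], (200, 1))
body = np.random.normal(0.0, 0.5, size=(200, 3))
six_features = uncorrelated_features(t, gravity, body)
two_features = correlated_features(t, gravity, body)
```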

Finally, the modelling procedure is devoted to the creation of a compact and informative representation of the considered gestures in terms of the available sensory data. Two classes of approaches have traditionally been pursued. In logic-based solutions, each gesture to monitor and recognise is encoded through sound and well-defined rules, which are based on ranges of admissible values for a set of relevant parameters. Recognition is carried out by analysing run-time sensory values to progressively converge towards the encoded gesture most closely matching run-time data. Decision trees, which allow for a fast and simple classification procedure, are the most adopted solution in logic-based approaches Lee02; Mathie04; Karantonis06; Krassnig10. Probability-based solutions assume instead each gesture to be represented by a model encoding relevant moments of the training set, and to be identified using non-ambiguous labels. In this case, recognition is typically performed by comparing run-time sensory data with the stored models through probabilistic distance measures. Commonly adopted techniques include Neural Networks Krassnig10, Hidden Markov Models Minnen05; OlguinOlguin06; Choudhury08 and Gaussian Mixture Models Bruno12. In our work, we exploit probability-based models to comply with the requirements derived above.

During the testing phase (see Figure 1, right hand side), analogously to what happens in the training phase, a number of steps are sequentially executed. The feature extraction step executes the same algorithms as the training phase. Once the testing stream has been processed (typically focusing on a time window), it is possible to evaluate its features against the previously trained models (recognition), generating a predicted label. In the testing phase, we exploit specific distance metrics relating the stored models with the run-time data stream. In this way, we can account for the requirements derived above.

Most wearable sensing systems based on a single sensing point (e.g., the right wrist) focus on the recognition of gestures which, whether one-handed (e.g., bringing a cigarette to the lips to smoke Dietrich14), bimanual (e.g., cutting meat with fork and knife Bruno13) or even full-body (e.g., climbing stairs Bruno12), correspond to a unique and generalised pattern at the considered sensing point. The presented work relaxes this assumption, by evaluating a wearable sensing system based on two sensing points (the left and right wrists) which allows for the modelling and recognition of generic bimanual gestures.

3 System Architecture

Figure 2: System architecture (training phase).

Figure 2 shows a schematic view of the training phase of the proposed system, while Figure 5 details the operations performed during the testing phase. The blocks devoted to data synchronisation and data pre-processing, as well as the feature extraction block, are the same in the two phases. We consider two approaches for the modelling stage: (i) explicit modelling of the correlation between the two hands (see Figure 2, left hand side), presupposing the stress on intrinsic coordinates in motor control studies, and (ii) implicit modelling of the correlation (see Figure 2, right hand side), which assumes the importance of extrinsic coordinates. We also consider two approaches for the comparison of testing data with the available models, respectively based on the probability measure and on the Mahalanobis distance. The former takes into account the likelihood of a model as a whole, whereas the latter gives more weight to the contribution of body degeneracy to robustness and dexterity.

3.1 Pre-processing & Feature extraction

The proposed system relies on the inertial information collected by two tri-axial accelerometers, respectively located at a person's left and right wrists. To properly manage the two data streams, they must be synchronised, and the data synchronisation procedure heavily depends on the adopted hardware. As will be described in Section 4, in the case of the devices we adopt, the synchronisation is guaranteed by the manufacturer.

All the trials of a gesture in the training set (i.e., all the couples of left- and right-wrist data streams associated with a single execution of the gesture) are initially synchronised with each other manually, so that the starting moment of the gesture is the same across all recordings, and trimmed to have equal length. Then, the data pre-processing stage filters each acceleration stream with a median filter to reduce noise.
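A minimal sketch of this pre-processing step, assuming SciPy is available and using an illustrative kernel size (the size used in the original system is not reproduced here):

```python
import numpy as np
from scipy.signal import medfilt

def denoise(stream, kernel_size=3):
    """Median-filter each axis of an (N, 3) acceleration stream to reduce noise.

    kernel_size is an illustrative value, not necessarily the one used in the paper.
    """
    return np.column_stack([medfilt(stream[:, axis], kernel_size)
                            for axis in range(3)])
```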

Let us assume that we have $G$ different bimanual gestures. For each gesture $g$ to learn, with $g = 1, \dots, G$, let us assume that the training set includes $K$ trials and that $T^{g,k}$ is one of them. After the pre-processing stage, all the trials are synchronised and truncated so as to be composed of the same number $N$ of observations. A trial is defined as:

$$ T^{g,k} = \{ a_L^{g,k},\, a_R^{g,k} \} \qquad (1) $$

with:

$$ a_L^{g,k} = \{ GL^{g,k},\, BL^{g,k} \}, \qquad a_R^{g,k} = \{ GR^{g,k},\, BR^{g,k} \} \qquad (2) $$

where $a_L^{g,k}$ and $a_R^{g,k}$ denote, respectively, the acceleration streams provided by the sensing device on the left and on the right wrist, $GL^{g,k}$ includes the time and the x, y and z components of the gravity on the left acceleration stream, $BL^{g,k}$ includes the time and the x, y and z components of the body acceleration on the left acceleration stream, $GR^{g,k}$ includes the time and the x, y and z components of the gravity on the right acceleration stream and $BR^{g,k}$ includes the time and the x, y and z components of the body acceleration on the right acceleration stream. The feature extraction stage separates the $a_L^{g,k}$ and $a_R^{g,k}$ tri-axial acceleration streams provided by the sensing devices into their gravity and body acceleration components Bruno12, by applying a low-pass Chebyshev Type I filter.
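A possible implementation of the gravity/body separation, as a sketch: a low-pass Chebyshev Type I filter isolates the slowly varying gravity component, and the residual is taken as body acceleration. The cut-off frequency, order and ripple below are illustrative assumptions, not the parameters of the original system.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def split_gravity_body(acc, fs=40.0, cutoff_hz=0.5, order=3, ripple_db=0.5):
    """Separate an (N, 3) acceleration stream into gravity and body acceleration.

    The low-pass Chebyshev Type I filter keeps the quasi-static (gravity)
    component; subtracting it from the raw signal yields the body component.
    """
    b, a = cheby1(order, ripple_db, cutoff_hz, btype='low', fs=fs)
    gravity = filtfilt(b, a, acc, axis=0)   # zero-phase filtering
    body = acc - gravity
    return gravity, body
```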

3.2 Modelling

The goal of the modelling stage is to combine the trials in the training set to obtain a generalised version, i.e., a model, of gesture $g$, defined in terms of the two features of gravity and body acceleration. Two approaches are possible.

In the explicit correlation modelling approach (see Figure 2, left hand side), we merge the left and right components of each trial to create 7-dimensional features (time plus the three axes of each hand), i.e.:

$$ T^{g,k} = \{ G^{g,k},\, B^{g,k} \} \qquad (3) $$

with:

$$ G^{g,k} = \{ GL^{g,k},\, GR^{g,k} \}, \qquad B^{g,k} = \{ BL^{g,k},\, BR^{g,k} \} \qquad (4) $$

and the model of gesture $g$ is then defined in terms of $G^g$ and $B^g$.

Conversely, in the implicit correlation modelling approach (see Figure 2, right hand side), we keep the left and right hand streams independent, thus considering the four features defined in (2). The model of gesture $g$ is then defined in terms of $GL^g$, $BL^g$, $GR^g$ and $BR^g$.

The first approach corresponds to assuming that the motion of the two hands is fully constrained by the performed gesture, while the latter leaves to later stages the responsibility of correlating the two data streams. In other words, in the first case we assume the contributions of the left and right hands/arms to be correlated, whereas in the second case we make no such assumption. Albeit introducing an additional step, the possibility of tuning the correlation of the two data streams opens interesting scenarios for the recognition stage, as sketched below. Consider the gesture of rotating a tap's handle with the left hand, which can occur in a number of situations (for example, when filling a glass with tap water, or when washing a dish): by varying the importance given to this hand we can either obtain a more flexible system, able to recognise many situations in light of the common traits in the left hand stream, or a more specialised one, focused on one situation only and able to filter out all the others in light of the differences in the right hand stream.
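The two feature layouts can be made concrete with a short sketch (hypothetical array names, assuming the gravity and body components of each wrist have already been extracted):

```python
import numpy as np

def explicit_features(t, gl, bl, gr, br):
    """Explicit correlation: merge left and right streams sample by sample,
    yielding two 7-dimensional features (time + three axes per hand)."""
    return {"gravity": np.column_stack([t, gl, gr]),
            "body":    np.column_stack([t, bl, br])}

def implicit_features(t, gl, bl, gr, br):
    """Implicit correlation: keep the hands separate, yielding four
    4-dimensional features (time + three axes)."""
    return {"gravity_left":  np.column_stack([t, gl]),
            "body_left":     np.column_stack([t, bl]),
            "gravity_right": np.column_stack([t, gr]),
            "body_right":    np.column_stack([t, br])}
```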

The modelling procedure itself is the same for both approaches and, in particular, it relies on Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR) for the retrieval of the expected curve and covariance matrix of each considered feature, on the basis of a given training set. The procedure was first introduced in the field of Human-Robot Interaction Calinon10 and later used for the purposes of Human Activity Recognition with a wrist-placed inertial device for a single arm Bruno12. We point to the references for its thorough description.

We denote with $F$ the generic feature of interest, i.e., $F$ can correspond either to $G$ or $B$ in the explicit correlation approach, or to one among $GL$, $BL$, $GR$ and $BR$ in the implicit correlation approach. We assume the following definitions.

  • $F_i^{g,k}$ is the $i$-th data point of feature $F$ of trial $k$, defined as:

    $$ F_i^{g,k} = \{ t_i,\, f_i^{g,k} \} \qquad (5) $$

    where $t_i$ stores the time information and $f_i^{g,k}$ includes the acceleration components. The dimension of $f_i^{g,k}$ depends on the modelling approach.

  • $F^g$, with elements indexed by $j = 1, \dots, K \cdot N$, is the ordered set of data points generating the feature curve for all the trials, defined as:

    $$ F^g = \{ F_1^g, \dots, F_{K \cdot N}^g \} \qquad (6) $$

    where

    $$ F_j^g = \{ t_j,\, f_j^g \} \qquad (7) $$

    is a generic data point taken from the whole training set, i.e., obtained by hiding the information about the trial to which it belongs.

The purpose of GMM+GMR is to build the expected version of each feature of gesture $g$, i.e.:

$$ \hat{F}^g = \{ \hat{F}_1^g, \dots, \hat{F}_N^g \} \qquad (8) $$

with:

$$ \hat{F}_i^g = \{ t_i,\, \hat{f}_i^g,\, \Sigma_i^g \} \qquad (9) $$

where $\hat{f}_i^g$ is the conditional expectation of $f^g$ given $t_i$ and $\Sigma_i^g$ is the conditional covariance of $f^g$ given $t_i$. The model of gesture $g$ is then defined as the set of its feature models $\hat{F}^g$. Please note that the number of data points in the expected curve need not be the same as the number of data points in the trials of the training set; the equality is imposed here for clarity of description.
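A sketch of the GMM+GMR step under these definitions, using scikit-learn for the mixture fit and implementing the regression explicitly (this is a generic GMR formulation, not necessarily identical to the implementation of Calinon10 or Bruno12):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_gmr(points, n_components, t_query):
    """Fit a GMM on joint (time, signal) points and regress, for each query
    time, the expected signal value and the conditional covariance.

    points: (M, 1 + d) array whose first column is time.
    Returns means of shape (len(t_query), d) and covariances of shape
    (len(t_query), d, d).
    """
    d = points.shape[1] - 1
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=0).fit(points)
    w = gmm.weights_                            # (K,)
    mu_t = gmm.means_[:, 0]                     # (K,) temporal means
    mu_f = gmm.means_[:, 1:]                    # (K, d) signal means
    s_tt = gmm.covariances_[:, 0, 0]            # (K,)
    s_tf = gmm.covariances_[:, 0, 1:]           # (K, d)
    s_ff = gmm.covariances_[:, 1:, 1:]          # (K, d, d)

    means = np.zeros((len(t_query), d))
    covs = np.zeros((len(t_query), d, d))
    for i, t in enumerate(t_query):
        h = w * norm.pdf(t, loc=mu_t, scale=np.sqrt(s_tt))    # responsibilities
        h /= h.sum()
        cond_mu = mu_f + ((t - mu_t) / s_tt)[:, None] * s_tf  # (K, d)
        cond_S = s_ff - np.einsum('ki,kj->kij', s_tf, s_tf) / s_tt[:, None, None]
        m = h @ cond_mu
        diff = cond_mu - m
        means[i] = m
        covs[i] = (np.einsum('k,kij->ij', h, cond_S)
                   + np.einsum('k,ki,kj->ij', h, diff, diff))
    return means, covs

# usage on a synthetic noisy 1-D signal (five simulated trials)
t = np.tile(np.linspace(0.0, 1.0, 50), 5)
f = np.sin(2 * np.pi * t) + np.random.normal(0.0, 0.1, t.shape)
means, covs = gmm_gmr(np.column_stack([t, f]), n_components=4,
                      t_query=np.linspace(0.0, 1.0, 50))
```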

(a) Left hand
(b) Right hand
Figure 3: Model of the gesture open a wardrobe in the implicit correlation approach, which is defined in terms of: (i) left hand gravity component (a)-left, (ii) left hand body acceleration component (a)-right, (iii) right hand gravity component (b)-left, (iv) right hand body acceleration component (b)-right.
(a) Left hand x, y, z axes
(b) Right hand x, y, z axes
Figure 4: Model of the gesture open and close curtains in the explicit correlation approach, which is defined in terms of: (i) gravity component (a,b)-left, (ii) body acceleration component (a,b)-right.

Figure 3 shows the projections of the model of the gesture open a wardrobe, computed from the full dataset of recordings with the implicit correlation approach. The four modelled features are shown in (a)-left, (a)-right, (b)-left and (b)-right. Figure 4 shows the projections of the model of the gesture open and close curtains, computed from the full dataset of recordings with the explicit correlation approach. The two modelled features are shown in (a,b)-left and (a,b)-right. In both cases, the solid red line represents the projection of the expected curve on one time-acceleration space, while the pink area surrounding it represents the conditional covariance.

The modelling procedure requires as input the number of Gaussian functions to use, which varies both with the gestures and with the features. The modelling parameters estimation stage implements a procedure based on the k-means clustering algorithm and the silhouette clustering quality metric Rousseeuw87 for the estimation of the number of Gaussian functions to use and for their initialisation Bruno12. Other choices are equally legitimate. We again point to the references for the details of the procedure.
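A sketch of how the number of Gaussian components could be estimated with k-means and the silhouette metric (illustrative candidate range; not necessarily the exact procedure of Bruno12):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_n_components(points, candidate_range=range(2, 9)):
    """Run k-means for several candidate numbers of clusters and keep the one
    with the highest silhouette score; the corresponding centres can be used
    to initialise the GMM."""
    best = (None, -1.0, None)                      # (k, score, centres)
    for k in candidate_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
        score = silhouette_score(points, km.labels_)
        if score > best[1]:
            best = (k, score, km.cluster_centers_)
    return best[0], best[2]
```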

3.3 Comparison

Figure 5: System architecture (testing phase).

As shown in Figure 5, the testing phase executes the same procedures for data synchronisation, data pre-processing (noise reduction) and feature extraction as the training phase. Then, in accordance with the chosen modelling approach, the features are expressed either in the form of (2) or of (4).

The recognition procedure is composed of a comparison stage, devoted to ranking the similarity between the testing data and each previously learned model, and a classification stage, responsible for the final labelling of the testing data on the basis of comparison results.

We propose two comparison procedures. The distance-based comparison computes the Mahalanobis distance between the testing features and each model's features, while the probability-based comparison computes the likelihood of the occurrence of the testing features given each model's features. In both cases, the classification stage identifies the best-matching model (lowest distance or highest probability), if any, and labels the testing data accordingly. Both comparison techniques are applied to the two considered modelling approaches, thus yielding four combinations of training and testing procedures. The comparison of their performance is the topic of the experimental evaluation presented in Section 4.

Let us consider a moving horizon window of length $N$ on the testing streams and let us denote with $W$ the set of features extracted from the window in accordance with the chosen modelling approach.

In the distance-based approach, on the basis of the available models, we compute one accumulated distance per model. The Mahalanobis distance is a probabilistic distance measure used to compute the similarity between sets of random variables whose means and variances are known Mahalanobis36. The Mahalanobis distance between a generic model element $\hat{F}_i^g$ defined by (9) and a generic window element $W_i = \{ t_i, w_i \}$ defined in accordance with (7) is computed as:

$$ d_i(W, \hat{F}^g) = \sqrt{ (w_i - \hat{f}_i^g)^{T} (\Sigma_i^g)^{-1} (w_i - \hat{f}_i^g) } \qquad (10) $$

The accumulated distance between $W$ and $\hat{F}^g$ is computed as:

$$ D(W, \hat{F}^g) = \sum_{i=1}^{N} d_i(W, \hat{F}^g) \qquad (11) $$

that is, by integrating the distance between each element in the run-time stream and its corresponding element in the model. The overall distance of the window from a gesture model is computed as a weighted sum of all feature distances; in our experiments we choose the weights to be equal for all features. This approach gives more weight to the precision associated with gesture production, and emphasises the dexterity and robustness of bimanual motions.
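A minimal sketch of the accumulated distance of (10)-(11), assuming the window samples are aligned one-to-one with the expected curve of the feature model:

```python
import numpy as np

def accumulated_mahalanobis(window, exp_curve, exp_covs):
    """Sum of per-sample Mahalanobis distances between the run-time window and
    a feature model (expected curve and conditional covariances)."""
    total = 0.0
    for w_i, mu_i, S_i in zip(window, exp_curve, exp_covs):
        diff = w_i - mu_i
        total += float(np.sqrt(diff @ np.linalg.solve(S_i, diff)))
    return total
```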

In the probability-based approach, the probability of the window feature $W$ being an occurrence of the model $\hat{F}^g$ is computed as:

$$ P(W \mid \hat{F}^g) = \prod_{i=1}^{N} \mathcal{N}\!\left( w_i ;\, \hat{f}_i^g,\, \Sigma_i^g \right) \qquad (12) $$

The overall probability is computed as a weighted sum of all feature probabilities, i.e., a mixture. We again choose the weights to be equal for all features. When we use probabilities, we consider the gesture as a whole, and we therefore account for small variations in the gesture execution speed.
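A corresponding sketch for the probability-based comparison, here using log-likelihoods for numerical stability (the exact combination rule of the original system may differ):

```python
from scipy.stats import multivariate_normal

def window_log_likelihood(window, exp_curve, exp_covs):
    """Log-likelihood of the window samples under the per-instant Gaussians
    defined by the feature model's expected curve and conditional covariances."""
    return sum(multivariate_normal.logpdf(w_i, mean=mu_i, cov=S_i)
               for w_i, mu_i, S_i in zip(window, exp_curve, exp_covs))
```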

4 Experimental Evaluation

4.1 Experimental Setup

In all the experiments, we adopt two LG G Watch R (W110) smartwatches as sensing devices (Android Wear 1.0, quad-core 1.2 GHz Qualcomm Snapdragon 400 CPU, 4 GB storage / 512 MB RAM), each equipped with a tri-axial accelerometer. The sampling frequency is 40 Hz. The smartwatches automatically sync on startup with the smartphone they are paired with. By pairing the two smartwatches with the same smartphone, we ensure that they are synchronised with each other with a precision satisfying the requirements of our application.

Figure 6: Open and close curtains.
Figure 7: Sweep the floor.
Figure 8: Fill a cup with tap water.

We consider five bimanual gestures:

  • Open and close curtains (OCC). Extend and retract lateral-sliding curtains by pulling the connecting cords with an alternated up-and-down movement of the hands. Keep pulling until the curtain is fully closed or opened (see Figure 6).

  • Sweep the floor (SWP). Pull a conventional broom from left to right, to sweep the floor. Three strokes are required (see Figure 7).

  • Fill a cup with tap water (FCOT). With the right hand, take a cup from the sink and hold it below the tap, while rotating the tap’s handle with the left hand to fill the cup with water (see Figure 8).

  • Take a bottle from the fridge (RFF). With the right hand, open the door of a small fridge, then, with the left one, take a bottle from it. Lastly, close the fridge door with the right hand (see Figure 9).

  • Open a wardrobe (WO). Open a small two-door wardrobe by moving the two hands concurrently (see Figure 10).

Figure 9: Take a bottle from the fridge.
Figure 10: Open a wardrobe.

Intuitively, the gestures open and close curtains, sweep the floor and open a wardrobe fully constrain the movement of the two hands, while the gestures fill a cup with tap water and take a bottle from the fridge allow for more freedom in their execution, as far as synchronisation between the arms is concerned. Moreover, the gestures sweep the floor and open a wardrobe imply that the two hands are moved concurrently, while the gestures open and close curtains, fill a cup with tap water and take a bottle from the fridge mostly require the two hands to be moved in sequence. Lastly, the gestures open and close curtains and sweep the floor are recursive, i.e., composed of a number of repetitions of simpler movements, while the gestures open a wardrobe, fill a cup with tap water and take a bottle from the fridge are non-recursive.

Figures 6-10 show pictures of an execution of the gesture on the left hand side and the accelerations measured at the two wrists during the execution of the same gesture on the right hand side. The impact of the aforementioned gesture characteristics (constrained/not constrained, concurrent/sequential, recursive/not recursive) on the accelerations measured at the wrists, and therefore on the considered modelling and comparison procedures, is not known: determining it is one of the goals of the experiments we have conducted.

For each gesture, we collected a dataset of recordings from ten volunteers of varying age. The volunteers, wearing the smartwatches, were asked to autonomously start and stop the recordings and, once an experimenter had described the gesture, to perform it in a natural way. All repetitions were supervised. In addition to this dataset, we asked a number of volunteers to take some recordings in real-life conditions. They were asked to clean a room and, amidst the other activities, to perform the five bimanual gestures of interest. Volunteers could freely choose the timing and sequence of the gestures, and their choices were annotated by an experimenter.

4.2 Performance Analysis

We tested the four combinations of modelling and comparison procedures in terms of the standard statistical measures of accuracy, precision and recall, by using k-fold cross validation on the collected dataset. For each gesture, we split the dataset into groups of equal size and iteratively used all groups but one as training set and the remaining group for validation.
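The cross-validation loop itself reduces to a standard k-fold split; a minimal sketch with hypothetical recordings and a train_and_evaluate placeholder (the number of folds is illustrative, not the one used by the authors):

```python
import numpy as np
from sklearn.model_selection import KFold

recordings = np.arange(20)               # stand-ins for per-gesture recording indices
for train_idx, val_idx in KFold(n_splits=5, shuffle=False).split(recordings):
    train_set, val_set = recordings[train_idx], recordings[val_idx]
    # train_and_evaluate(train_set, val_set)   # placeholder for the GMM+GMR pipeline
```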

Figure 11: Results of k-fold cross validation for the modelling approach and probability-based comparison. The bottom row reports the recall measures, while the rightmost column reports the precision measures. The purple cell reports the overall accuracy of the system.

The results of the k-fold cross validation on the four combinations of modelling and comparison procedures are shown in Figures 11-14. In all figures, the first five rows/columns refer to the gestures open and close curtains (OCC), sweep the floor (SWP), fill a cup with tap water (FCOT), take a bottle from the fridge (RFF) and open a wardrobe (WO). The yellow row/column collectively represents all the gestures which are not among the modelled ones (N.A.). The columns denote the true labels of all validation recordings (i.e., the gestures actually performed during each of them), while the rows denote the labels assigned by the recognition system. Since the validation dataset collectively contains the same number of recordings per modelled gesture, a perfect recognition system would place all recordings in the first five green cells (i.e., with the predicted label matching the true label) and none in the red cells (meaning that the recording of a gesture has been classified as an occurrence of another gesture) or in the yellow cells (meaning that the recording of a gesture has been classified as an occurrence of an unknown gesture).

Figure 12: Results of k-fold cross validation for the modelling approach and distance-based comparison. The bottom row reports the recall measures, while the rightmost column reports the precision measures. The purple cell reports the overall accuracy of the system.

The column relative to the gesture open and close curtains in Figure 11 reports that, of the validation recordings actually referring to that gesture, all but one were correctly labelled as occurrences of that gesture, while one was labelled as an occurrence of the gesture take a bottle from the fridge. This analysis allows for computing the recall performance of the system, which corresponds to the ratio between the number of recordings of a gesture correctly labelled as occurrences of that gesture and the overall number of recordings of that gesture. The recall values for all gestures are listed in the bottom row of the confusion matrix.

Figure 13: Results of k-fold cross validation for the modelling approach and probability-based comparison. The bottom row reports the recall measures, while the rightmost column reports the precision measures. The purple cell reports the overall accuracy of the system.

A similar analysis of the rows of the confusion matrix allows for assessing the precision performance of the system. As an example, the row relative to the gesture sweep the floor in Figure 11 reports that, of the recordings labelled as occurrences of that gesture, all but one were true recordings of that gesture, while one was a recording of the gesture fill a cup with tap water. The precision metric measures the ratio between the number of recordings of a gesture correctly labelled as occurrences of that gesture and the overall number of recordings labelled as occurrences of that gesture. The precision values for all gestures are listed in the rightmost column of the confusion matrix.

Figure 14: Results of k-fold cross validation for the modelling approach and distance-based comparison. The bottom row reports the recall measures, while the rightmost column reports the precision measures. The purple cell reports the overall accuracy of the system.

Lastly, the aggregated analysis of the number of correct labels over the total number of recordings, i.e., the accuracy performance of the system, is reported in the purple cell at the bottom-right corner. As an example, the recognition system adopting the implicit correlation modelling approach and the probability-based comparison, whose confusion matrix is shown in Figure 11, achieves the overall accuracy reported in the corresponding purple cell.
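In terms of confusion-matrix counts, with $TP_g$, $FP_g$ and $FN_g$ denoting respectively the true positives, false positives and false negatives for gesture $g$, the three metrics reduce to the standard definitions:

$$ \mathrm{recall}(g) = \frac{TP_g}{TP_g + FN_g}, \qquad \mathrm{precision}(g) = \frac{TP_g}{TP_g + FP_g}, \qquad \mathrm{accuracy} = \frac{\sum_g TP_g}{\text{total number of recordings}} $$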

4.3 Real-life Conditions

Scenario #1    Scenario #2
SWP            WO
               RFF
SWP            FCOT
               OCC
Table 1: Experiment scripts for the real-life tests.

As an additional test for assessing the system's performance, we asked two volunteers to wear the smartwatches at the wrists while carrying out cleaning chores of their choice in a room. The first volunteer was instructed to perform, amidst the other activities, the gesture sweep the floor twice, while the second volunteer was instructed to perform each of the other gestures once, as summarised in Table 1. Both tests were supervised by an experimenter, who annotated the time when each gesture of interest was performed.

Figure 15: Acceleration streams registered during the execution of the real-life scenario 1. Red circles denote the two executions of the gesture sweep the floor.
Figure 16: Acceleration streams registered during the execution of the real-life scenario 2. Red circles denote, from left to right, the execution of the gestures open a wardrobe, take a bottle from the fridge, fill a cup with tap water and open and close curtains.

Figure 15 shows the accelerations registered at the wrists of the first volunteer while performing real-life scenario 1, while Figure 16 shows the accelerations registered at the wrists of the second volunteer while performing real-life scenario 2. The red circles denote all occurrences of the modelled bimanual gestures. For these tests we used the whole dataset previously described as training set for the creation of the models.

Figure 17: Output of the system for the recording of scenario 1 shown in Figure 15, for one of the two modelling approaches. The left hand graph refers to the probability-based comparison, while the right hand one refers to the distance-based comparison.
Figure 18: Output of the system for the recording of scenario 1 shown in Figure 15, for the other modelling approach. The left hand graph refers to the probability-based comparison, while the right hand one refers to the distance-based comparison.
Figure 19: Output of the system for the recording of scenario 2 shown in Figure 16, for one of the two modelling approaches. The left hand graph refers to the probability-based comparison, while the right hand one refers to the distance-based comparison.
Figure 20: Output of the system for the recording of scenario 2 shown in Figure 16, for the other modelling approach. The left hand graph refers to the probability-based comparison, while the right hand one refers to the distance-based comparison.

Figures 17-20 show the output of the recognition system for the first scenario (Figures 17 and 18) and the second scenario (Figures 19 and 20), for the four combinations of modelling and comparison procedures. In all figures, the x-axis denotes time and the y-axis denotes the probability or inverse distance value of the models. The TP box marks the true positive recognitions, i.e., all the occurrences of a modelled gesture which are correctly recognised. The FP box marks the false positive recognitions, i.e., all the occurrences of gestures which have been assigned the wrong label.

4.4 Discussion

As Figures 11-14 show, the combination allowing for the best recognition performance relies on the implicit correlation modelling approach and the probability-based comparison: this system achieves the highest overall accuracy (see Figure 11) and retains good recognition performance, especially in terms of precision, for all modelled gestures (the gesture with the worst recall is open a wardrobe, while the gesture with the worst precision is fill a cup with tap water). Interestingly enough, the combination resulting in the worst recognition performance relies on the explicit correlation modelling approach and the same probability-based comparison (see Figure 13), thus confirming that there is a tight relation between the modelling and comparison procedures. The same modelling approach, paired with the distance-based comparison procedure, performs significantly better (see Figure 14).

For all four considered combinations, the gestures open and close curtains and sweep the floor consistently achieve the highest precision and recall values, which seems to suggest that recursive motions (i.e., those composed of the repeated execution of simple movements), regardless of whether they involve the concurrent or sequential motion of the two arms, are easier to model and recognise. In other words, repeated gestures produce more stable patterns as far as the detection and classification system is concerned. Conversely, the gestures open a wardrobe and take a bottle from the fridge consistently achieve the lowest precision and recall values, with the latter performing especially poorly with the implicit correlation modelling approach. These results, albeit preliminary, seem to confirm our intuition that the explicit correlation modelling approach is much more sensitive to small differences between run-time streams and the corresponding model than the implicit correlation modelling approach.

In accordance with the k-fold validation analysis, also in the real-life tests the combinations achieving the best performance are the implicit correlation modelling approach with probability-based comparison and the explicit correlation modelling approach with distance-based comparison, which, on the whole, successfully recognise three and four, respectively, of the six gesture executions of interest.

The real-life tests highlight a difference in the pattern of the recognition labels between the probability-based and distance-based comparison procedures. As Figures 17-20 show, in the latter case the labels follow a smoother pattern, which reduces the number of false positive recognitions. The adoption of reasoning techniques analysing label patterns to increase the recognition accuracy has proved effective in the case of a single stream Bruno14b and may lead to significant improvements also in the case of bimanual gestures.

What do these results say about the way we currently understand bimanual gestures in humans? Not surprisingly, the best results we achieve involve implicit correlation with probability-based comparison. Whilst with implicit correlation we do not assume the motion of the two hands/arms to be explicitly correlated, probability-based comparison is more robust with respect to small variations in the phases of the two hands/arms. Had we assumed the two limbs to be explicitly correlated in motor space, i.e., through intrinsic coordinates, we would have constrained the phases of the two hands/arms to be perfectly tuned. This is not what happens in practice, as exemplified by the experiments of Heuer and colleagues Heueretal1998 and Byblow et al. Byblowetal2000. Since, in real-world gesture execution, perfect synchronisation seldom happens, a better result is obtained, on average, when we allow for maximum flexibility in gesture execution. Probability-based measures enforce such flexibility even further, because they do not constrain the distance metric to be tied to a specific time instant of the gesture execution.

Finally, it is noteworthy that the use of Gaussian mixtures allows for determining which parts of the motion are more relevant, since those are characterised by lower amplitudes of the covariance. As a consequence, the covariance matrix associated with each element in the model ensures that each sample is given a proper weight. This allows the recognition phase to take into account the differing importance of the various phases of a gesture, thereby accounting for motion dexterity and robustness.

The combination of the two methods for modelling and recognition allows us to support Bernstein's intuition about motion constraints Bernstein1967, i.e., (i) variations in degrees of freedom affecting motion performance are constrained (in our case, by low covariance values in the intra-arm correlation of inertial data); (ii) variations in degrees of freedom that do not affect task performance can be left unconstrained (again, by larger covariance values in the intra-arm correlation); (iii) co-variations between gesture-relevant degrees of freedom not impacting on performance are permitted (by considering implicit correlations between the two hands/arms).

5 Conclusions

This paper discusses design choices involved in the detection and classification of bimanual gestures in unconstrained environments. The assumption we make is the availability of inertial data originating from two distinct sensing points, conveniently located at the wrists given the availability of such COTS sensors as smartwatches. Our models are grounded in Gaussian Mixture Modelling (GMM) and Gaussian Mixture Regression (GMR), which we use as the basic modelling procedure. On this basis, we compare different modelling techniques (i.e., explicit and implicit correlation between the two hands/arms) and classification techniques (i.e., based on distance and on probability-related considerations), which are inspired by the literature about the representation of bimanual gestures in the brain. Our architecture allows for different combinations of modelling and classification techniques. Furthermore, it can be extended as a framework to support reproducible research.

Experiments show results related to five generic bimanual activities, which have been selected on the basis of three main parameters: (not) constraining the two hands by a physical tool, (not) requiring a specific sequence of single-hand gestures, and being recursive (or not). The best results are obtained when considering an implicit coordination between the two hands/arms (i.e., the two motions are modelled separately) and using a probability-based measure for classification (i.e., the specific timing characteristics of the trajectories are considered only to a limited extent). This seems to confirm a few insights from the literature related to the motor control of bimanual gestures, and opens up a number of interesting research questions for the near future.

References

  • (1) J. Kelso, Phase transitions and critical behaviour in human bimanual coordination, American Journal of Physiology - Regulatory 15 (1984) R1000–R1004.

  • (2) H. Haken, J. Kelso, H. Bunz, A theoretical model of phase transitions in human hand movements, Biological Cybernetics 51 (1985) 347–356.
  • (3) J. Wiles, A. Leibing, N. Guberman, J. Reeve, R. Allen, The meaning of “ageing in place” to older people, The Gerontologist.
  • (4) F. Mechsner, D. Kerzel, G. Knoblich, W. Prinz, Perceptual basis of bimanual coordination, Nature 414 (2001) 69–72.
  • (5) T. Sakurada, K. Ito, H. Gomi, Bimanual motor coordination controlled by cooperative interactions in intrinsic and extrinsic coordinates, European Journal of Neuroscience 43 (2016) 120–130.
  • (6) H. Heuer, W. Spijkers, T. Kleinsorge, H. van der Loo, C. Steglich, The time course of cross-talk during the simultaneous specification of bimanual movement amplitudes, Experimental Brain Research 118 (1998) 381–392.
  • (7) W. Byblow, G. Lewis, J. Stinear, N. Austin, M. Lynch, The subdominant hand increases in the efficacy of voluntary alterations in bimanual coordination, Experimental Brain Research 131 (2000) 366–374.
  • (8) S. Swinnen, N. Wenderoth, Two hands, one brain: cognitive neuroscience of bimanual skill, Trends in Cognitive Science 8 (2004) 18–25.
  • (9) K. Kodama, N. Fukuyama, T. Inamura, Differing dynamics of intrapersonal and interpersonal coordination: two-finger and four-finger tapping experiments, PLOS ONE 10 (6) (2015).
  • (10) P. Baltes, U. Lindenberger, Emergence of a powerful connection between sensory and cognitive functions across the adult life span: a new window to the study of cognitive ageing?, Psychology and Aging 12 (1997) 12–21.
  • (11) R. Sleimen-Malkoun, J.-J. Temprado, S. L. Hong, Aging induced loss of complexity and dedifferentiation: consequences for coordination dynamics within and between brain, muscular and behavioural levels, Frontiers in Ageing Neuroscience 6 (2014) 1–17.
  • (12) D. Rosenbaum, R. Cohen, S. Jax, R. van der Wel, D. Weiss, The problem of serial order in behaviour: Lashley’s legacy, Human Movement Science 26 (2007) 525–554.
  • (13) T. Schack, K. Essig, C. Frank, D. Koester, Mental representation and motor imagery training, Frontiers in Human Neuroscience 8 (2014) 1–10.
  • (14) T. Schack, Measuring mental representations, in: G. Tenenbaum, R. Eklund, A. Kamata (Eds.), Handbook of Measurement in Sport and Exercise Psychology, Human Kinetics, Champaign, IL, USA, 2012.
  • (15) N. Bernstein, The coordination and regulation of movements, Pergamon Press, Oxford, 1967.
  • (16) B. Bruno, F. Mastrogiovanni, A. Sgorbissa, T. Vernazza, R. Zaccaria, Human motion modeling and recognition: a computational approach, in: Proceedings of the 8th IEEE International Conference on Automation Science and Engineering (CASE 2012), Seoul, Korea, 2012.
  • (17) O. Lara, M. Labrador, A survey on human activity recognition using wearable sensors, IEEE Communications Surveys and Tutorials 15 (3) (2013) 1192–1209.
  • (18) J. Lester, T. Choudhury, N. Kern, G. Borriello, B. Hannaford, A hybrid discriminative/generative approach for modeling human activities, in: Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI 2005), Edinburgh, UK, 2005.

  • (19) M. Mathie, B. Celler, N. Lovell, A. Coster, Classification of basic daily movements using a triaxial accelerometer, Medical & Biological Engineering & Computing 42 (2004) 679–687.
  • (20) M. Dietrich, K. von Laerhoven, Recall your actions! using wearable activity recognition to augment the human mind, in: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014), Seattle, Washington, USA, 2014.
  • (21) A. Bulling, U. Blanke, B. Schiele, A tutorial on human activity recognition using body-worn inertial sensors, ACM Computing Surveys 46 (3) (2014) 33:1–33:33.
  • (22) G. Krassnig, D. Tantinger, C. Hofmann, T. Wittenberg, M. Struck, User-friendly system for recognition of activities with an accelerometer, in: Proceedings of the 2010 International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth 2010), Munich, Germany, 2010.
  • (23) T. Mashita, K. Shimatani, M. Iwata, H. Miyamoto, D. Komaki, T. Hara, K. Kiyokawa, H. Takemura, S. Nishio, Human activity recognition for a content search system considering situations of smartphone users, in: Proceedings of the IEEE Workshop on Virtual Reality Short Papers and Posters (VRW 2012), Orange County, California, USA, 2012.
  • (24) M. Lee, A. Khan, T. Kim, A single tri-axial accelerometer-based real-time personal life log system capable of human activity recognition and exercise information generation, Personal and Ubiquitous Computing 15 (8) (2011) 887–898.
  • (25) D. Karantonis, M. Narayanan, M. Mathie, N. Lovell, B. Celler, Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring, IEEE Transactions on Information Technology in Biomedicine 10 (1) (2006) 156–167.
  • (26) H. Chul, P. Kiheon, Estimating accelerated body attitude using an inertial sensor, in: Proceedings of the 2009 International Joint Conference ICROS-SICE (ICROS-SICE 2009), Fukuoka, Japan, 2009.
  • (27) S. Bonnet, R. Heliot, A magnetometer-based approach for studying human movements, IEEE Transactions on Biomedical Engineering 54 (7) (2007) 1353–1355.
  • (28) A. Sharma, A. Purwar, Y. Lee, Y. Lee, W. Chung, Frequency based classification of activities using accelerometer data, in: Proceedings of the 2008 International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), Seoul, Korea, 2008.
  • (29) Y. Cho, Y. Nam, Y. Choi, W. Cho, Smartbuckle: Human activity recognition using a 3-axis accelerometer and a wearable camera, in: Proceedings of the 2nd International Workshop on Systems and Networking Support for Healthcare and Assisted Living Environments (HealthNet 2008), Breckenridge, Colorado, USA, 2008.
  • (30) S. Lee, K. Mase, Activity and location recognition using wearable sensors, IEEE Pervasive Computing 1 (3) (2002) 24–32.
  • (31) D. Minnen, T. Starner, J. Ward, P. Lukowicz, G. Troster, Recognizing and discovering human actions from on-body sensor data, in: Proceedings of the 2005 IEEE International Conference on Multimedia and Expo (ICME 2005), Amsterdam, The Netherlands, 2005.
  • (32) D. Olguin, A. Pentland, Human activity recognition: accuracy across common locations for wearable sensors, in: Proceedings of the 2006 IEEE International Symposium on Wearable Computers (ISWC 2006), Montreux, Switzerland, 2006.
  • (33) T. Choudhury, S. Consolvo, B. Harrison, J. Hightower, A. LaMarca, L. LeGrand, A. Rahimi, A. Rea, G. Borriello, B. Hemingway, P. Klasnja, K. Koscher, J. Landay, J. Lester, D. Wyatt, D. Haehnel, The mobile sensing platform: an embedded activity recognition system, IEEE Pervasive Computing Magazine 7 (2) (2008) 32–41.
  • (34) B. Bruno, F. Mastrogiovanni, A. Sgorbissa, T. Vernazza, R. Zaccaria, Analysis of human behaviour recognition algorithms based on acceleration data, in: Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, 2013.
  • (35) S. Calinon, F. D’Halluin, E. Sauser, D. Caldwell, A. Billard, Learning and reproduction of gestures by imitation, IEEE Robotics & Automation Magazine 6 (2010) 44–54.
  • (36) P. Rousseeuw, Silhouettes: a graphical aid to the interpretation and validation of cluster analysis, Journal of Computational and Applied Mathematics 20 (1987) 53–65.

  • (37) P. Mahalanobis, On the generalized distance in statistics, Proceedings of the National Institute of Sciences of India 2 (1) (1936) 49–55.
  • (38) B. Bruno, F. Mastrogiovanni, A. Saffiotti, A. Sgorbissa, Using fuzzy logic to enhance classification of human motion primitives, in: Proceedings of the 15th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU 2014), Montpellier, France, 2014.