Context-driven Active and Incremental Activity Recognition

06/07/2019 ∙ by Gabriele Civitarese, et al. ∙ Università degli Studi di Milano 0

Human activity recognition based on mobile device sensor data has been an active research area in mobile and pervasive computing for several years. While the majority of the proposed techniques are based on supervised learning, semi-supervised approaches are being considered to significantly reduce the size of the training set required to initialize the recognition model. These approaches usually apply self-training or active learning to incrementally refine the model, but their effectiveness seems to be limited to a restricted set of physical activities. We claim that the context which surrounds the user (e.g., semantic location, proximity to transportation routes, time of the day) combined with common knowledge about the relationship between this context and human activities could be effective in significantly increasing the set of recognized activities including those that are difficult to discriminate only considering inertial sensors, and the ones that are highly context-dependent. In this paper, we propose CAVIAR, a novel hybrid semi-supervised and knowledge-based system for real-time activity recognition. Our method applies semantic reasoning to context data to refine the prediction of a semi-supervised classifier. The context-refined predictions are used as new labeled samples to update the classifier combining self-training and active learning techniques. Results on a real dataset obtained from 26 subjects show the effectiveness of the context-aware approach both on the recognition rates and on the number of queries to the subjects generated by the active learning module. In order to evaluate the impact of context reasoning, we also compare CAVIAR with a purely statistical version, considering features computed on context data as part of the machine learning process.







1 Introduction

The evolution of mobile computing in the last decades has enabled the development of intelligent applications that continuously monitor our daily activities to provide context-aware services lara2013survey . The majority of activity recognition algorithms in the literature rely on supervised machine learning to infer the most likely performed activities by analyzing inertial sensor data kwapisz2011activity . One of the major drawbacks of those solutions is the cost of collecting the amount of labeled data required to reach a high recognition rate. Moreover, standard classifiers are trained once with the available data, and the recognition model cannot evolve over time. To overcome these issues, semi-supervised and incremental approaches for activity recognition have been proposed abdallah2018activity . Those methods only require a small amount of training data to initialize the recognition model, while techniques like co-learning, self-learning, or active learning are used to assign labels to unlabeled sensor data hossain2017active ; abdallah2015adaptive ; longstaff2010improving ; stikic2008exploring .

While the majority of semi-supervised methods have proved effective in classifying a few physical activities (e.g., walking, running, biking), their effectiveness on more complex and context-dependent activities is still unclear. Moreover, discriminating activities with similar motion patterns is still problematic. For instance, walking and taking the stairs, or standing and taking the elevator, are easily confused with each other by purely statistical methods based on inertial sensors.

Considering the context which surrounds the user could provide valuable information to mitigate these issues liao2006location ; RiboniB11 . Indeed, a rich description of the user’s context (e.g., semantic location, weather, traffic conditions, speed) has the potential to enable the recognition of a wide set of activities which are a) highly dependent on the current context and b) difficult to recognize considering only inertial sensors. However, semi-supervised approaches rely on a small set of labeled data, while activities can be performed under a large number of possible context conditions. For this reason, directly using context data as additional features in the machine learning process may not be as effective as expected.

In this paper, we address this problem by proposing CAVIAR, a Context-aware ActiVe and Incremental Activity Recognition system which combines semi-supervised learning and semantic context-aware reasoning. A real-time incremental classifier analyzes inertial sensor data obtained from mobile devices to provide probability distributions over the possible activities. A knowledge-based reasoning engine is then used to exclude from the statistical predictions those activities which are highly unlikely given the context data. The system’s output is the most likely activity after this context refinement.

Following the semi-supervised approach, context-refined predictions are used to update the incremental classifier. When CAVIAR is confident about the refined prediction, it is provided as a new labeled sample to the incremental classifier. On the other hand, when it is not sufficiently confident, CAVIAR asks the user for the ground truth and also uses the answer as a new labeled sample to refine the machine learning model.

In order to evaluate CAVIAR, we acquired a large dataset of inertial sensor data and rich contextual information. Results on this dataset show that context-refinement is effective in improving the recognition rate and, at the same time, triggering a significantly lower number of queries.

The contributions of our work are the following:

  • We propose a novel method to combine context-aware reasoning with semi-supervised learning for activity recognition.

  • We acquired a realistic labeled dataset of activities performed by 26 subjects, collecting both inertial sensor data and data from several sources of context.

  • We performed an extensive evaluation of our approach on this dataset, showing the crucial role of context data and structured knowledge in improving semi-supervised activity recognition.

  • We show that using knowledge-based reasoning on context data not only allows reaching higher recognition rates, but also significantly reduces the number of queries in active learning compared to using context data as additional features in the machine learning process.

The rest of the paper is organized as follows. Section 2 discusses related work. Section 3 describes the overall architecture of CAVIAR. Section 4 presents the CAVIAR method in detail. Section 5 reports the experimental results. Section 6 discusses strengths and limitations of CAVIAR. Finally, Section 7 concludes the paper.

2 Related work

The recognition of physical activities using commonly available mobile devices (e.g., smartphones and smartwatches) is a widely explored research area lara2013survey ; shoaib2015survey . The majority of approaches in the literature rely on supervised methods to infer activities from inertial sensor data kwapisz2011activity ; gyorbiro2009activity ; sun2010activity ; bao2004activity ; BullingBS14 . While these methods reach high recognition rates, the acquisition of a large labeled dataset of activities is costly and often unfeasible CookFK13 .

In order to overcome these issues, a few works have proposed unsupervised learning techniques kwon2014unsupervised ; trabelsi2013unsupervised ; lee2009unsupervised . These methods aim to find activity patterns in unlabeled data using data mining techniques. However, the discovery of those patterns requires the acquisition of a large dataset of unlabeled data. Moreover, a certain amount of labeled data is still required in order to reliably associate each cluster with its corresponding activity class. Knowledge-based methods coupled with unsupervised learning have been proposed to automatically label activity traces chen2014ontology . While this methodology is suitable for smart-home activity recognition using environmental sensors, it is not directly applicable to the recognition of activities using the sensor data of mobile devices.

In order to combine the strengths of both supervised and unsupervised approaches, semi-supervised learning methods for activity recognition have been proposed abdallah2018activity ; stikic2008exploring ; guan2007activity ; longstaff2010improving . Those techniques use a small labeled training set to initialize the model, which is continuously enhanced using unlabeled data. In the literature, the semi-supervised approaches which have been mainly considered for activity recognition are self-learning, co-learning, and active learning. Self-learning methods exploit the initial small training set to classify unlabeled data longstaff2010improving . The most confident predictions are then used to update the classifier. Co-learning involves multiple classifiers trained on different views of the data. Those classifiers collaboratively improve their models by exploiting each other’s prediction confidence lee2014activity ; guan2007activity . Differently from self-learning and co-learning, active learning requires explicit feedback from the users in order to obtain labels for the most informative data (i.e., data for which the classifier is uncertain about the performed activity) hoque2012aalo ; miu2015bootstrapping ; abdallah2015adaptive ; hossain2017active . Active learning has proved particularly effective for semi-supervised activity recognition. However, for the sake of usability, the number of triggered queries should be kept low.

Existing semi-supervised activity recognition methods in mobile computing are mainly based on the analysis of inertial sensor data to recognize a restricted number of physical activities miu2015bootstrapping ; abdallah2015adaptive ; lee2014activity ; huynh2006towards . Differently from those methods, we consider the context which surrounds the user and continuously update an incremental classifier through a combination of self-learning and active learning. This allows us to significantly extend the set of recognized activities and, at the same time, to better discriminate those activities which have similar motion patterns.

Even if context reasoning for activity recognition has mainly been investigated for smart-home environments rodriguez2014survey and computer-vision-based systems akdemir2008ontology , its application to mobile computing is not completely new yurur2016context . The combination of machine learning and context-aware ontological reasoning for activity recognition with mobile devices was first explored in RiboniB11 . In that work, the output of a statistical classifier is refined considering the user’s semantic location. In saguna2013complex , rich contextual information is used to improve activity recognition with a multi-layer approach. Differently from those methods, our system is semi-supervised and hence only requires a small labeled dataset to be initialized. Moreover, our approach takes advantage of context-aware reasoning to continuously update and personalize the activity model.

3 CAVIAR system overview

The general architecture of our system is shown in Figure 1.

Figure 1: Overall architecture of our system

The user’s mobile devices continuously acquire data from different sources. On the one hand, inertial sensors (e.g., accelerometer, magnetometer and gyroscope) continuously stream data about the physical movements of the user. On the other hand, data from other built-in sensors and devices (e.g., GPS) in combination with publicly available web services (e.g., a weather service) are used to obtain data about the user’s context. It is important to note that “context” is a very broad term which in the literature is used to model users’ situations at several levels of abstraction PMCsurvey09 . For instance, even the user’s current activity can be considered high-level contextual information. In this paper, by context data we mainly mean information about the environment which surrounds the user. Examples of such context data are the user’s current semantic location, his/her proximity to transportation routes, the current weather, the time of the day and the day of the week.

Our hybrid semi-supervised and context-aware activity recognition algorithm consists of three main steps. First, the stream of raw inertial sensor readings is processed by the Incremental Activity Recognition module. This module first applies pre-processing methods like data cleaning, segmentation, and feature extraction to the raw inertial sensor data. Then, a semi-supervised classifier associates to each feature vector a probability distribution over the possible activities. It is important to note that the activity model needs to be initialized with a small labeled training set in an offline phase.

The Incremental Activity Recognition module does not take context data into account. The main motivation is that semi-supervised methods rely on a rather small set of labeled data, while activities can be performed in a wide variety of different contexts. While it is feasible for a classifier to discriminate different motion patterns even with few labeled samples, learning their correlations with all the possible context conditions may be problematic. This is especially true when considering a wide set of activities. On the other hand, common-sense knowledge can be used to model the relationships between activities and contexts (e.g., it is unlikely that a user is riding an elevator while she is in the city park). Hence, in the second step of our algorithm, the Semantic Refinement module applies knowledge-based reasoning to context data in order to exclude from the semi-supervised prediction those activities which are not consistent with the current context. In particular, context data need to be pre-processed and translated into high-level facts, which are mapped to an ontology that models activities and contexts. Knowledge-based reasoning is then applied to evaluate which activities are context-consistent. The output of the Semantic Refinement module is a probability distribution over the context-consistent activities (which we will refer to as the refined prediction).

The third and last step of our method consists of using the refined prediction to update the semi-supervised activity model. In particular, the Prediction Confidence Evaluation module evaluates the system’s confidence in the refined prediction. If the confidence is sufficiently high, the refined prediction is provided as a new labeled example to our incremental classifier. Otherwise, a query is triggered to the user in order to obtain the ground truth about the current activity, and the recognition model is updated accordingly.

In our architecture, the semi-supervised activity model is stored on a server and shared between all the participating users, which can collaboratively update it using our context-driven semi-supervised framework.

4 Methodology

In this section, we describe in detail the different modules of our system introduced in Section 3.

4.1 Incremental activity recognition

The Incremental Activity Recognition module relies on a semi-supervised classifier to derive, in real time, a set of candidate activities from inertial sensor data. In particular, the sensor signals are pre-processed and segmented in order to extract feature vectors. For each feature vector, the Incremental Activity Recognition module outputs a probability distribution over the considered activities. As we will explain later in this section, this probability distribution is refined by the Semantic Refinement module and then used by the Prediction Confidence Evaluation module to update the incremental classifier with new labeled samples.

4.1.1 Segmentation, feature extraction and classification

In the following, we describe the data flow of inertial sensor data for activity recognition. Since a user may carry multiple mobile devices (e.g., a smartphone and a smartwatch), it is first necessary to temporally align their raw sensor data streams. In our experimental setup, we considered for each device the data streams from the accelerometer, magnetometer and gyroscope. We apply a median filter to reduce the noise in each signal. Then, we segment the streams of aligned sensor data. Each segment is defined as the set of inertial sensor data acquired during a time window of fixed length. Each segment starts immediately after the previous one ends; hence, segments are contiguous and non-overlapping. The window length is the same for all segments, and it must be chosen carefully according to the complexity of the considered activities banos2014window .
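As a rough sketch (not the authors’ implementation), the contiguous, non-overlapping segmentation step could look like the following Python fragment; the window length and sampling rate are illustrative assumptions, not values from the paper:

```python
WINDOW_SECONDS = 4   # hypothetical window length
SAMPLING_HZ = 50     # hypothetical sampling rate

def segment(stream):
    """Split a time-aligned list of samples into contiguous,
    non-overlapping windows of WINDOW_SECONDS seconds.
    Trailing samples that do not fill a whole window are dropped."""
    win = WINDOW_SECONDS * SAMPLING_HZ
    return [stream[i:i + win] for i in range(0, len(stream) - win + 1, win)]
```

For example, a 450-sample stream at these settings yields two 200-sample segments, with the incomplete tail discarded until enough new samples arrive.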

From each segment, we extract a wide set of statistical features which are well known in the activity recognition literature lara2013survey . In particular, for each axis of each inertial sensor, we extract: average, variance, standard deviation, median, mean squared error, kurtosis, symmetry, zero-crossing rate, number of peaks, energy, and difference between maximum and minimum. Finally, for each inertial sensor, we compute the Pearson correlation for each pair of its axes and the magnitude over all of its axes. These features are computed for every 3-axis inertial sensor equipped on the user’s mobile devices. We also standardize each feature in order to further improve the recognition rate guyon2006introduction .
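A minimal sketch of a few of these per-axis and per-sensor computations, using only the Python standard library (a representative subset; the paper’s full feature list is longer):

```python
import math
import statistics

def axis_features(x):
    """A subset of the per-axis statistical features listed above."""
    return {
        "mean": statistics.fmean(x),
        "variance": statistics.pvariance(x),
        "std": statistics.pstdev(x),
        "median": statistics.median(x),
        "zero_crossings": sum(1 for a, b in zip(x, x[1:]) if a * b < 0),
        "energy": sum(v * v for v in x) / len(x),
        "range": max(x) - min(x),  # difference between max and min
    }

def pearson(x, y):
    """Pearson correlation between two axes of the same sensor."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def magnitude(x, y, z):
    """Per-sample magnitude over the three axes of a sensor."""
    return [math.sqrt(a * a + b * b + c * c) for a, b, c in zip(x, y, z)]
```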

Example 4.1

Consider a user who carries a smartphone and a smartwatch, both equipped with a 3-axis accelerometer, gyroscope and magnetometer. Hence, the overall number of inertial sensors is 6. In this scenario, our feature extraction mechanism computes the corresponding feature vector for each segment.

For each feature vector f_i computed from a segment s_i, the incremental classifier outputs a probability distribution over the set of considered activities A = {A_1, …, A_n}:

P = ⟨p_1, …, p_n⟩

where p_j is the probability that the segment s_i was generated by the activity A_j, with 0 ≤ p_j ≤ 1 and p_1 + … + p_n = 1. The probability distribution P is forwarded to our Semantic Refinement module, which refines it using contextual information.

4.1.2 Activity model bootstrap

A crucial aspect of our semi-supervised framework is the initialization of the activity model. Indeed, without a proper bootstrap mechanism, the semi-supervised model would have to discover each activity “on the fly”, with a negative impact on the recognition rate. Hence, we initialize the semi-supervised model by acquiring t seconds of labeled data for each activity, in order to obtain a balanced labeled dataset. Clearly, the parameter t has a high impact both on the recognition accuracy and on the number of queries triggered to the users. However, as motivated in the introduction, it is unfeasible to obtain a large labeled dataset (i.e., to choose a high value of t). The choice of t mainly depends on the number of considered activities and their complexity. In order to reduce the effort of acquiring and annotating data, it is hence important to use a small value of t which still achieves a reasonable recognition accuracy. We adopt an empirical approach to determine the value of t, as illustrated in Section 5.

4.2 Semantic Refinement

The Semantic Refinement module is in charge of analyzing the context which surrounds the user in order to refine the prediction obtained by the Incremental Activity Recognition module. To achieve this task, the module relies on an ontology which models the relationships between contexts and activities. In particular, ontological reasoning is applied to exclude from the statistical prediction those activities which are unlikely given the current context. In the following, we describe in detail our context-aware semantic reasoning mechanism.

4.2.1 Activity and Context ontology

In order to enable semantic reasoning with context data for activity recognition, we extended the ActivO ontology RiboniB11 . That ontology defines a wide set of activities, semantic locations, artifacts (e.g., used by the user or part of the semantic locations), user postures, time granularities (e.g., day of the week, time of the day) and environmental information (e.g., temperature and light conditions). Details about ActivO’s implementation can be found in RiboniB11 . We took advantage of the Protégé tool to extend ActivO with several new activities, contextual data and their relationships. Examples of those entities are shown in Figure 2. Our ontology considers several sources of context data: the user’s semantic place, the user’s recent route, weather conditions, proximity to public transportation stops and routes, surrounding traffic conditions, the user’s height variations, the user’s speed, surrounding light, the environment’s noise level and temporal context (e.g., time of the day, day of the week, month, etc.). Figure 2(a) shows a portion of those context data modeled in our ontology, while Figure 2(b) focuses on the set of considered semantic locations, including the ones classified by the Google Places API. It is important to note that we distinguish symbolic locations/buildings (and their characteristics) from their use. This allows us to better model activities related to symbolic locations.

(a) An excerpt of context hierarchy
(b) An excerpt of symbolic locations hierarchy
Figure 2: Excerpts of our ontology

Due to the intrinsic open-world assumption of ontological reasoning, we explicitly state the necessary conditions which make activities possible or impossible in a given context. As we will explain later, such constraints are necessary to enable our context-aware refinement, which is based on consistency reasoning. For instance, the activity TakingStairs (Figure 3(a)) should take place at a location which may have stairs, and the person should have a non-negative height variation. Another example is the activity MovingByCar (Figure 3(b)): our ontology enforces that it should take place in an outdoor location which includes a road or a street, and that the car’s speed should be positive.

(a) Definition of the activity “taking stairs”
(b) Definition of the activity “moving by car”
Figure 3: Examples of activity definitions in our ontology

4.2.2 Context reasoning and refinement

For each candidate activity A_j in the prediction P, we use ontological reasoning to determine whether A_j is consistent with the surrounding context. First, CAVIAR adds an axiom to represent an instance of Person which identifies the subject wearing the mobile devices. Then, it is necessary to instantiate the relationships between the individual and the context data. For this reason, the context data collected by the mobile devices need to be mapped to ontological concepts. The majority of context data has a one-to-one mapping with ontological entities; scalar values, however, need to be discretized and mapped to the classes covered by the ontology. Finally, since we want to test the consistency of an activity A_j with respect to the current context, CAVIAR adds an axiom which states that the user is performing A_j. We define an activity as context-consistent when the axioms created from the observed data as described above are consistent with respect to the domain knowledge. Note that the consistency check involves reasoning that is automatically performed in the logic used to specify the ontology (the specific language and reasoner used in CAVIAR are reported in Section 5).

Example 4.2

Bob is using CAVIAR. When the context reasoning task is triggered, Person(Bob) is added as a fact. Then, the context data gathered by the mobile devices is analyzed in order to expand the set of facts. Suppose that some Web service provided the information that Bob is in a park, and that the speed value obtained from the GPS sensor falls in the medium range. First, we instantiate the individuals for the context data: Park(place), MediumSpeed(speed). Note that the raw speed value obtained by the GPS has been discretized in order to be mapped to an ontological concept. Then, the relationships between Bob and the context data are added as facts: hasCurrentSymbolicLocation(Bob, place), hasCurrentSpeed(Bob, speed). Finally, in order to test whether the activity Running is context-consistent, we add the axioms Running(currentActivity) and isPerforming(Bob, currentActivity). The consistency of the set of facts with respect to the domain knowledge determines whether the running activity is consistent with Bob’s current context.

Given the current context C and the probability distribution P = ⟨p_1, …, p_n⟩ obtained by the semi-supervised classifier, the goal of context refinement is to exclude those activities which are not context-consistent according to C. For each activity class A_j such that p_j > 0, we compute its consistency with respect to C as explained above. Each activity which is not context-consistent is removed from the probability vector. The refined vector is finally normalized in order to preserve the properties of a probability distribution. The output is a new refined probability vector P′ = ⟨p′_1, …, p′_m⟩ such that each corresponding activity is context-consistent according to C, and p′_1 + … + p′_m = 1. Note that an activity is usually not context-consistent when the ontology’s necessary conditions discussed in Section 4.2.1 are violated.

Example 4.3

Continuing the example above, suppose that Bob is actually running, and that the Incremental Activity Recognition classifier assigns non-zero probability to the activities cycling, running, walking and standing. Thanks to a dedicated Web service, it is possible to know that Bob is currently in a pedestrian area of the park, where bicycles are not allowed. According to the ontology, cycling is not context-consistent, since it should not be performed in pedestrian areas. Hence, the resulting context-refined probability distribution only covers running, walking and standing.
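The filter-and-renormalize step of this refinement can be sketched as follows. Here the ontological consistency check (performed by an OWL2 reasoner in CAVIAR) is abstracted as a plain set of context-consistent activities; the zero-survivor fallback is our own assumption, not described in the paper:

```python
def refine(prediction, consistent):
    """Remove activities that are not context-consistent and renormalize
    so the result is again a probability distribution.
    `prediction` maps activity name -> probability;
    `consistent` stands in for the ontological consistency check."""
    kept = {a: p for a, p in prediction.items() if a in consistent}
    total = sum(kept.values())
    if total == 0:
        # assumption: fall back to the unrefined prediction if no
        # context-consistent activity survives (not covered by the paper)
        return prediction
    return {a: p / total for a, p in kept.items()}
```

On a toy distribution over {cycling, running, walking, standing} with cycling ruled out by context, the remaining probabilities are rescaled to sum to one.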

4.3 Prediction Confidence Evaluation

The Prediction Confidence Evaluation module is in charge of using context-refined predictions to update the activity model with new labeled samples, combining self-learning and active learning. Moreover, it also applies oversampling methods in real time to further improve the recognition of minority activity classes.

4.3.1 Semi-supervised model update

In order to update the activity model, we apply an uncertainty sampling strategy to identify in real time the confidence level of each refined prediction lewis1994heterogeneous . Given a context-refined prediction P′, we denote by p* the probability value of the most likely activity. If p* exceeds a threshold θ_max, we consider the system very confident about the current classification and we update the semi-supervised activity model with a new labeled sample (self-learning). Otherwise, if p* is below a threshold θ_min, where θ_min < θ_max, we consider the system uncertain about the current activity being performed by the user. In this case, the user is asked to provide the ground truth about the current activity (active learning), in order to update the model accordingly. For the sake of usability, CAVIAR presents the user with a few predefined options picked from the most probable activities. When θ_min ≤ p* ≤ θ_max, we do not consider the current prediction to update the semi-supervised activity model.
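The three-way decision above can be sketched as a small function; the threshold values are illustrative assumptions, not the ones tuned in the paper:

```python
def decide(refined, th_self=0.9, th_query=0.5):
    """Uncertainty-sampling decision sketch over a refined prediction
    (activity name -> probability). Thresholds are illustrative."""
    p_max = max(refined.values())
    if p_max >= th_self:
        return "self-train"   # confident: use prediction as a new label
    if p_max < th_query:
        return "query-user"   # uncertain: trigger active learning
    return "discard"          # in-between: do not update the model
```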

4.3.2 Incremental data balancing

During everyday life, some activities are performed on average less frequently than others (e.g., the amount of time a subject spends in an elevator is usually much less than the time he/she spends walking). Hence, adding new labeled samples to the incremental classifier without taking this aspect into account may lead to an unbalanced classification model and, subsequently, to a poor recognition rate on the “minority” activity classes. For this reason, our ontology also describes (using OWL2 properties) which activity classes are known to be “minority” classes according to common-sense knowledge. We denote by M the set of “minority” activity classes according to the ontology.

We adopt the well-known SMOTE technique chawla2002smote in real time to balance the activity model by generating synthetic samples of the “minority” activity classes. In particular, when a segment labeled with a class in M is provided as a new labeled example to update the model, we create additional synthetic labeled samples using SMOTE to further improve the classifier.
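The core SMOTE idea — interpolating a synthetic sample between a minority feature vector and one of its nearest minority neighbours chawla2002smote — can be sketched as follows; the parameter values are illustrative, not the paper’s:

```python
import math
import random

def smote_samples(minority, k=3, n_new=2, rng=None):
    """Minimal SMOTE-style sketch. `minority` is a list of feature
    vectors of one minority class; each synthetic sample lies on the
    segment between a random vector and one of its k nearest
    minority-class neighbours."""
    rng = rng or random.Random(0)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((v for v in minority if v is not x),
                            key=lambda v: math.dist(x, v))[:k]
        nn = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nn)])
    return synthetic
```

By construction, each synthetic vector stays inside the convex hull spanned by the chosen sample and its neighbour, so the oversampled class is enriched without fabricating outliers.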

5 Experimental evaluation

In order to evaluate our system, we developed a data collection infrastructure to acquire a real labeled dataset consisting of both inertial sensor data and context data. Indeed, to the best of our knowledge, there is no publicly available dataset of labeled activities which incorporates the rich contextual information we need. In this section, we describe our experimental setup, the collected dataset, and the evaluation results of CAVIAR.

5.1 Experimental setup

In our experimental setup, users carry their smartphone in their front trouser pocket and a smartwatch on the wrist of the dominant hand. In our data collection, we used a Nexus 5X smartphone and an LG G Watch R smartwatch. Dedicated applications run on the devices to continuously collect and transmit sensor measurements to a Java server which stores the data in a MongoDB database. Both mobile devices periodically transmit to the server the inertial sensor readings collected from their accelerometer, gyroscope and magnetometer.

Context data is acquired by the smartphone application using built-in sensors as well as publicly available web services. The considered built-in sensors are: the barometer, to get insights about height variations; the luminosity sensor; the microphone, to obtain the environment’s noise level; and the GPS, to obtain the user’s location and speed. The considered web services provide, among other things, the user’s semantic place, weather conditions, proximity to public transportation and surrounding traffic conditions (see Section 4.2.1). The application also collects temporal context such as the moment of the day (e.g., morning, afternoon, evening), the day of the week, and the season. Context data is periodically transmitted to the server.

Besides data acquisition, the mobile applications provide user-friendly interfaces which allow users to annotate data in real time. Examples of such interfaces are shown in Figure 4.

(a) Smartphone interface
(b) Smartwatch interface
Figure 4: Annotation interfaces

During the acquisition, we asked the users to label activities through the smartwatch interface, in order to minimize the time spent on annotation and, at the same time, to make the acquisitions more realistic.

5.2 Dataset description

We acquired a dataset involving 26 volunteers. The activities acquired in this dataset are the following: walking, running, standing, lying, sitting, stairs up, stairs down, elevator up, elevator down, cycling, moving by car, sitting on transport, standing on transport and brushing teeth. Overall, we recorded almost 9 hours of labeled sensor data. Table 1 summarizes how many minutes of data we acquired for each activity.

Activity Minutes
Standing 52
Sitting 56
Lying 40
Walking 96
Running 24
Cycling 24
Brushing teeth 16
Stairs up 16
Stairs down 16
Elevator up 8
Elevator down 16
Sitting on transport 60
Standing on transport 60
Moving by car 40
Overall 524
Table 1: Number of minutes acquired for each activity

Table 1 shows that the dataset is unbalanced. As expected, activities like taking the elevator or brushing teeth were performed for a significantly shorter time than others like walking.

The annotated sensor data has been acquired in different contexts, which include being at the office, going around the city (Milan), driving, using public transportation, cycling and being at home. Even if the volunteers self-annotated their activities using the smartwatch, the execution of the activities was partially supervised: during data acquisition, we stayed close to the volunteers to make sure that there were no technical problems with the acquisition.

5.3 Results

In the following, we present the results of CAVIAR on the dataset described above. As incremental classifier, we use Online Random Forest saffari2009line , taking advantage of the Java implementation proposed in sztyler2017online . The motivation is that Online Random Forest is the incremental version of the well-known Random Forest classifier, which proved to be one of the most effective classifiers for activity recognition sztyler2016onbody . As OWL 2 reasoner we used HermiT glimm2014hermit in combination with the Java OWL API horridge2011owl .
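The incremental update pattern can be illustrated with a toy classifier. The following nearest-centroid sketch is only a stand-in for the Online Random Forest actually used; the class name and the `partial_fit`/`predict_proba` interface are our own illustrative choices, not the authors' Java implementation.

```python
import math

class OnlineNearestCentroid:
    """Toy incremental classifier: keeps a running mean feature vector
    per activity class and predicts the class with the closest centroid.
    A stand-in for the Online Random Forest used in the paper."""

    def __init__(self):
        self.sums = {}    # class -> per-dimension feature sums
        self.counts = {}  # class -> number of samples seen

    def partial_fit(self, x, label):
        # Update the running centroid of `label` with sample `x`.
        if label not in self.sums:
            self.sums[label] = [0.0] * len(x)
            self.counts[label] = 0
        for i, v in enumerate(x):
            self.sums[label][i] += v
        self.counts[label] += 1

    def predict_proba(self, x):
        # Softmax over negative distances gives a rough confidence score.
        dists = {}
        for label, s in self.sums.items():
            centroid = [v / self.counts[label] for v in s]
            dists[label] = math.dist(x, centroid)
        z = {l: math.exp(-d) for l, d in dists.items()}
        total = sum(z.values())
        return {l: v / total for l, v in z.items()}

    def predict(self, x):
        proba = self.predict_proba(x)
        return max(proba, key=proba.get)
```

Each new labeled sample (whether from active learning or self-training) is folded into the model via `partial_fit`, without retraining from scratch.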

In order to evaluate the effectiveness of our technique, we also implemented two additional methods which do not rely on the Semantic Refinement module. The former is called No context, since it only considers inertial sensor data to recognize activities. In particular, it combines the Incremental Activity Recognition module (see Section 4.1) and the Prediction Confidence Evaluation module (see Section 4.3.1) without applying our context-refinement.

The latter method is called Context as features. Similarly to No context, this method does not rely on the Semantic Refinement module to refine activity predictions. However, this method incorporates context data directly in the feature vectors generated by the feature extraction mechanism presented in Section 4.1.1. In particular, this method extracts a) statistical features (average, variance, difference between max and min) from numeric context data like speed or height variations and b) binary features for symbolic context data (i.e., semantic place, weather condition, proximity to transportation routes, etc.).
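As a rough sketch of the Context as features baseline (field names and the exact feature set are illustrative assumptions, not the paper's implementation), the construction of the context part of the feature vector might look like:

```python
def context_feature_vector(numeric_context, symbolic_context, vocab):
    """Build a context feature vector in the style of the baseline:
    a) statistical features (mean, variance, max - min) for each numeric
       context stream (e.g., speed, height variation);
    b) one binary feature per known symbolic context value (e.g.,
       semantic place, weather, proximity to transportation routes).
    `vocab` maps each symbolic field to its list of possible values."""
    features = []
    for name in sorted(numeric_context):
        values = numeric_context[name]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        features += [mean, var, max(values) - min(values)]
    for field in sorted(vocab):
        observed = symbolic_context.get(field)  # None if source unavailable
        features += [1.0 if value == observed else 0.0
                     for value in vocab[field]]
    return features
```

For example, `context_feature_vector({"speed": [1.0, 3.0]}, {"semantic_place": "road"}, {"semantic_place": ["home", "office", "road"]})` yields `[2.0, 1.0, 2.0, 0.0, 0.0, 1.0]`: three statistics for speed followed by a one-hot encoding of the semantic place.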

We used a leave-one-subject-out cross-validation approach to evaluate and compare CAVIAR with these two methods in terms of recognition rate and number of questions asked to the subjects. At each fold, we apply CAVIAR to the remaining 25 subjects to collaboratively update the activity model, which is initialized with one minute of samples for each activity. The data of the held-out subject is used to evaluate the recognition rate and the number of questions asked to the subject.
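The leave-one-subject-out splitting can be sketched as follows (a generic illustration, not the authors' evaluation code):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_subject, train, test) splits: each fold holds out
    every sample of one subject for testing and trains on the rest.
    `samples` is a list of (subject_id, features, label) tuples."""
    subjects = sorted({s[0] for s in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test
```

This guarantees that the evaluated model has never seen data from the test subject, which is important given the inter-subject variability of activity execution.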

We empirically determined the optimal window size and the two confidence thresholds for semi-supervised updates. Table 2 shows the results (in terms of overall F1 score).

Activity No context Context as features CAVIAR
Elevator up 0.00 0.04 0.70
Elevator down 0.02 0.71 0.83
Moving by car 0.85 0.85 0.74
Brushing teeth 0.87 0.91 0.93
Running 0.97 0.97 0.99
Sitting 0.96 0.97 0.97
Going upstairs 0.38 0.69 0.76
Going downstairs 0.65 0.87 0.92
Cycling 0.97 0.90 0.89
Standing 0.86 0.93 0.95
Walking 0.89 0.94 0.95
Sitting transport 0.60 0.78 0.88
Standing transport 0.36 0.95 0.95
Avg F1 0.64 0.81 0.88
Table 2: Recognition rate (F1 score) of CAVIAR compared with alternative approaches

The results clearly show that context data has a significant impact on the overall recognition rate. Indeed, it is evident that activities like going upstairs/downstairs and sitting/standing on transport (which are more difficult to recognize only considering motion patterns) highly benefit from context data. The positive impact of context in reducing confusion between activities is also notable in the confusion matrices reported in Figure 5.

(a) Without context
Figure 5: Comparison of confusion matrices. Activities: Elevator Up, Elevator Down, Moving by Car, Brushing Teeth, Running, Sitting, Going Upstairs, Going Downstairs, Cycling, Standing, Walking, Sitting Transport, Standing Transport.

From the confusion matrix we see, for example, that our approach allows the classifier to recognize the elevator up activity, while the purely statistical methods confuse it with standing, since the two have very similar motion patterns. In general, the fact that CAVIAR outperforms the Context as features approach shows the value of context reasoning with common knowledge over using raw context data in a statistical approach.

On the negative side, we observe that the recognition rate of CAVIAR on the moving by car activity is lower than that obtained by the other approaches. Indeed, as Figure 5 shows, CAVIAR often confuses this activity with cycling. This is because the context data we have to characterize those activities is similar (e.g., both are performed outdoors in city traffic, at variable speed). Hence, the semi-supervised model updates may propagate mispredictions when the classifier has few examples of those activities.

Besides recognition rate, a crucial evaluation parameter is the number of questions triggered by the system, since it has a significant impact on usability. As Figure 6 shows, CAVIAR generates significantly fewer questions than both No context and Context as features.

Figure 6: Percentage of triggered queries of CAVIAR compared with alternative approaches

Indeed, our semantic refinement technique exploits the ontology to prune unlikely activities from the prediction. This increases the confidence of the remaining activities and, at the same time, triggers our semi-supervised technique to update the activity model without bothering the user. The results indicate that the system should provide a much better user experience by limiting the number of times a user is interrupted with a question.
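Conceptually, the refine-then-update policy can be sketched as follows; the function name, the threshold values, and the three-way action outcome are illustrative assumptions rather than the exact CAVIAR implementation:

```python
def refine_and_decide(prediction, possible, high_conf=0.9, low_conf=0.5):
    """Sketch of semantic refinement plus the model-update policy.
    `prediction` maps activities to classifier confidences; `possible`
    is the set of activities the ontology considers consistent with the
    current context. Threshold values are illustrative placeholders."""
    # Prune activities ruled out by context reasoning, then renormalize.
    refined = {a: p for a, p in prediction.items() if a in possible}
    total = sum(refined.values())
    if total == 0:
        return prediction, "ask_user"  # context ruled out everything
    refined = {a: p / total for a, p in refined.items()}
    best = max(refined.values())
    if best >= high_conf:
        action = "self_train"  # confident: use as a new labeled sample
    elif best >= low_conf:
        action = "no_update"   # plausible, but not confident enough
    else:
        action = "ask_user"    # active learning query to the user
    return refined, action
```

Pruning alone can raise the top confidence above the self-training threshold, which is exactly why context refinement reduces the number of active learning queries.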

In order to evaluate how the recognition rate and the number of triggered questions evolve over time, we used the method proposed in gama2013evaluating . First, we initialize the model as described above, with one minute of samples for each activity. Then, for each sample of the dataset (considering all subjects), we use the current recognition model to classify it and, depending on the prediction’s confidence, to update the model. The classification output (i.e., the resulting most likely activity) and the corresponding ground truth are stored to evaluate the recognition rate. In particular, we use an overlapping sliding window over the stored pairs to periodically compute the overall F1 score and the percentage of questions triggered to the users. Figure 7 shows the evolution of the F1 score and the number of questions of CAVIAR with respect to the other two methods. It emerges that, compared to the No context and Context as features approaches, CAVIAR quickly reaches high recognition rates and a significantly lower number of questions.
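This prequential (test-then-train) evaluation can be sketched as follows, assuming a stream of already-produced (prediction, ground truth) pairs; the window and step sizes here are placeholders, not the values used in the experiments:

```python
from collections import deque

def macro_f1(pairs, labels):
    """Macro-averaged F1 over (prediction, truth) pairs."""
    scores = []
    for lab in labels:
        tp = sum(1 for p, t in pairs if p == lab and t == lab)
        fp = sum(1 for p, t in pairs if p == lab and t != lab)
        fn = sum(1 for p, t in pairs if p != lab and t == lab)
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)

def prequential_f1(stream, window=100, step=50):
    """Sliding-window F1 curve in the spirit of gama2013evaluating:
    each prediction is scored against ground truth before the model is
    updated, and F1 is recomputed periodically over recent pairs."""
    buf = deque(maxlen=window)
    labels = sorted({t for _, t in stream})
    curve = []
    for i, (pred, truth) in enumerate(stream, 1):
        buf.append((pred, truth))
        if len(buf) == window and i % step == 0:
            curve.append(macro_f1(list(buf), labels))
    return curve
```

The resulting curve is what plots like Figure 7 show: recognition quality as a function of how many samples the incremental model has processed.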

(a) F1 score
(b) percentage of questions
Figure 7: Evolution of the recognition model over time. Considered activities: Running, Sitting, Cycling, Standing, Walking, Elevator up, Elevator down, Going Upstairs, Going Downstairs, Brushing Teeth, Moving by car, Sitting transport, Standing transport

In order to further show the impact of context reasoning on activity recognition, we also evaluated our system considering different sets of activities. Figure 8 shows how our system performs on a restricted set of simple physical activities which are considered in the majority of related works. We observe that those activities are poorly characterized by the context which surrounds the user, while their motion patterns can be easily discriminated by purely statistical models.

(a) F1 score
(b) percentage of questions
Figure 8: Evolution of the recognition model over time. Considered activities: Running, Sitting, Cycling, Standing, Walking

Indeed, the results show that the recognition rate and the number of questions reached by CAVIAR are similar to those obtained by the No context approach. It is also worth noting that the Context as features method learns more slowly, since the additional context features add complexity to the semi-supervised activity model.

Finally, we also considered an “intermediate” set of activities. That set includes more context-dependent activities, like moving by car, brushing teeth, elevator and stairs. The results are shown in Figure 9.

(a) F1 score
(b) percentage of questions
Figure 9: Evolution of the recognition model over time. Considered activities: Running, Sitting, Cycling, Standing, Walking, Elevator, Stairs, Brushing Teeth, Moving by car

From the plots it emerges that context data has a high impact both on the recognition rate and on the number of questions. Indeed, the No context method struggles to reach the performance of the context-based solutions. Moreover, CAVIAR reaches high recognition rates and a low number of questions much more quickly than the Context as features approach. Hence, considering context data makes it possible to significantly expand the set of recognized activities.

Combining these results with those shown in Figure 7, it emerges that CAVIAR maintains the same trend independently of the considered set of activities. On the other hand, the performance of the Context as features method degrades as the complexity of the considered activities increases.

These results confirm that CAVIAR is scalable with respect to the set of considered activities and that it is effective in significantly reducing the number of triggered questions.

6 Discussion

6.1 Context data as features in machine learning versus knowledge-based context reasoning

From the results, it emerges that ontological reasoning on context data is effective in improving the recognition rate of a semi-supervised classifier. However, one may point out that the performance in terms of F1 score is not far from that obtained using context data directly as features. It is important to note that the main advantage of our knowledge-based method is the drastically reduced number of queries needed to reach those high recognition rates, an aspect with a significant impact on usability in a realistic scenario.

We also believe that our approach is significantly more scalable. One aspect to consider is that not every context source may be continuously available at the same time. Using context as features may lead to missing values in the feature vectors used to update the classifier, which in turn may negatively impact the recognition rate. Another aspect is whether it becomes necessary to include additional context data while the system is running. When context is used as features, the model must be re-trained from scratch with new labeled data; in our method, considering new context data simply means extending the ontology, which can also be periodically revised and improved.

6.2 Knowledge-engineering effort

The knowledge-engineering effort required to design a comprehensive ontology is very high. Indeed, this task needs to be performed by a team of domain experts and knowledge engineers. However, the problem is mitigated by re-using existing ontologies; for instance, in this work we extended the ontology presented in RiboniB11 .

We also point out that human modeling of knowledge about contexts is likely to be incomplete, since it is hard to model all the possible contexts in which activities are executed. To this end, a promising research direction is to exploit semi-supervised methods to continuously refine the ontology and, at the same time, learn personalized contexts. Some preliminary results in this direction have been obtained for smart-home ADL recognition nectar . However, applying this concept to CAVIAR is still an open and challenging issue.

7 Conclusion and future work

In this paper we proposed a novel method based on the combination of context-aware reasoning and semi-supervised learning for activity recognition. Our approach relies on ontological reasoning over context data and activities to continuously improve an incremental classifier in a semi-supervised fashion. We evaluated our method on a novel and rich dataset, showing the positive impact of context reasoning both on the recognition and on the number of queries triggered to the users.

A major limitation of our approach is the rigid formalism used for semantic reasoning, which does not take into account the intrinsic uncertainty and incompleteness of common knowledge and sensor-based systems. Hence, in future work we plan to evaluate alternative probabilistic formalisms to model and reason about context. Another interesting extension is to consider temporal sequences of activities and contexts to enhance the semantic refinement method; this implies introducing a form of temporal reasoning.

Finally, CAVIAR could be extended to create a personalized activity model for each user. Indeed, different subjects may have different physical characteristics and habits. This implies that there is a high variance of activity execution modalities and contexts among different users. The extension requires studying how to personalize the recognition model for each user and, at the same time, how to learn personalized contexts in order to further enhance the recognition rate.



  • (1) O. D. Lara, M. A. Labrador, et al., A survey on human activity recognition using wearable sensors., IEEE Communications Surveys and Tutorials 15 (3) (2013) 1192–1209.
  • (2) J. R. Kwapisz, G. M. Weiss, S. A. Moore, Activity recognition using cell phone accelerometers, ACM SigKDD Explorations Newsletter 12 (2) (2011) 74–82.
  • (3) Z. S. Abdallah, M. M. Gaber, B. Srinivasan, S. Krishnaswamy, Activity recognition with evolving data streams: A review, ACM Computing Surveys (CSUR) 51 (4) (2018) 71.
  • (4) H. S. Hossain, M. A. A. H. Khan, N. Roy, Active learning enabled activity recognition, Pervasive and Mobile Computing 38 (2017) 312–330.
  • (5) Z. S. Abdallah, M. M. Gaber, B. Srinivasan, S. Krishnaswamy, Adaptive mobile activity recognition system with evolving data streams, Neurocomputing 150 (2015) 304–317.
  • (6) B. Longstaff, S. Reddy, D. Estrin, Improving activity classification for health applications on mobile devices using active and semi-supervised learning, in: Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2010 4th International Conference on Pervasive Computing Technologies for Healthcare, IEEE, 2010, pp. 1–7.
  • (7) M. Stikic, K. Van Laerhoven, B. Schiele, Exploring semi-supervised and active learning for activity recognition, in: 2008 12th IEEE International Symposium on Wearable Computers, IEEE, 2008, pp. 81–88.
  • (8) L. Liao, D. Fox, H. Kautz, Location-based activity recognition, in: Advances in Neural Information Processing Systems, 2006, pp. 787–794.
  • (9) D. Riboni, C. Bettini, COSAR: Hybrid reasoning for context-aware activity recognition, Personal and Ubiquitous Computing 15 (3) (2011) 271–289.
  • (10) M. Shoaib, S. Bosch, O. D. Incel, H. Scholten, P. J. Havinga, A survey of online activity recognition using mobile phones, Sensors 15 (1) (2015) 2059–2085.
  • (11) N. Györbíró, Á. Fábián, G. Hományi, An activity recognition system for mobile phones, Mobile Networks and Applications 14 (1) (2009) 82–91.
  • (12) L. Sun, D. Zhang, B. Li, B. Guo, S. Li, Activity recognition on an accelerometer embedded mobile phone with varying positions and orientations, in: International conference on ubiquitous intelligence and computing, Springer, 2010, pp. 548–562.
  • (13) L. Bao, S. S. Intille, Activity recognition from user-annotated acceleration data, in: Pervasive Computing: Second International Conference, PERVASIVE 2004, Linz/Vienna, Austria, April 21-23, 2004. Proceedings, Springer, Berlin, Heidelberg, 2004, pp. 1–17.
  • (14) A. Bulling, U. Blanke, B. Schiele, A tutorial on human activity recognition using body-worn inertial sensors, ACM Computing Surveys 46 (3) (2014) 33:1–33:33.
  • (15) D. J. Cook, K. D. Feuz, N. C. Krishnan, Transfer learning for activity recognition: A survey, Knowledge and Information Systems 36 (3) (2013) 537–556.
  • (16) Y. Kwon, K. Kang, C. Bae, Unsupervised learning for human activity recognition using smartphone sensors, Expert Systems with Applications 41 (14) (2014) 6067–6074.
  • (17) D. Trabelsi, S. Mohammed, F. Chamroukhi, L. Oukhellou, Y. Amirat, An unsupervised approach for automatic activity recognition based on hidden Markov model regression, IEEE Transactions on Automation Science and Engineering 10 (3) (2013) 829–835.
  • (18) M.-S. Lee, J.-G. Lim, K.-R. Park, D.-S. Kwon, Unsupervised clustering for abnormality detection based on the tri-axial accelerometer, ICCAS-SICE 2009 (2009) 134–137.
  • (19) L. Chen, C. D. Nugent, G. Okeyo, An ontology-based hybrid approach to activity modeling for smart homes., IEEE Trans. Human-Machine Systems 44 (1) (2014) 92–105.
  • (20) D. Guan, W. Yuan, Y.-K. Lee, A. Gavrilov, S. Lee, Activity recognition based on semi-supervised learning, in: Embedded and Real-Time Computing Systems and Applications, 2007. RTCSA 2007. 13th IEEE International Conference on, IEEE, 2007, pp. 469–475.
  • (21) Y.-S. Lee, S.-B. Cho, Activity recognition with android phone using mixture-of-experts co-trained with labeled and unlabeled data, Neurocomputing 126 (2014) 106–115.
  • (22) E. Hoque, J. Stankovic, Aalo: Activity recognition in smart homes using active learning in the presence of overlapped activities, in: Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2012 6th International Conference on, IEEE, 2012, pp. 139–146.
  • (23) T. Miu, P. Missier, T. Plötz, Bootstrapping personalised human activity recognition models using online active learning, in: 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, IEEE, 2015, pp. 1138–1147.
  • (24) T. Huynh, B. Schiele, Towards less supervision in activity recognition from wearable sensors, in: 2006 10th IEEE International Symposium on Wearable Computers, Citeseer, 2006, pp. 3–10.
  • (25) N. D. Rodríguez, M. P. Cuéllar, J. Lilius, M. D. Calvo-Flores, A survey on ontologies for human behavior recognition, ACM Computing Surveys (CSUR) 46 (4) (2014) 43.
  • (26) U. Akdemir, P. Turaga, R. Chellappa, An ontology based approach for activity recognition from video, in: Proceedings of the 16th ACM international conference on Multimedia, ACM, 2008, pp. 709–712.
  • (27) Ö. Yürür, C. H. Liu, Z. Sheng, V. C. Leung, W. Moreno, K. K. Leung, Context-awareness for mobile sensing: A survey and future directions, IEEE Communications Surveys & Tutorials 18 (1) (2016) 68–93.
  • (28) S. Saguna, A. Zaslavsky, D. Chakraborty, Complex activity recognition using context-driven activity theory and activity signatures, ACM Transactions on Computer-Human Interaction (TOCHI) 20 (6) (2013) 32.
  • (29) C. Bettini, O. Brdiczka, K. Henricksen, J. Indulska, D. Nicklas, A. Ranganathan, D. Riboni, A survey of context modelling and reasoning techniques, Pervasive and Mobile Computing 6 (2) (2010) 161–180.
  • (30) O. Banos, J.-M. Galvez, M. Damas, H. Pomares, I. Rojas, Window size impact in human activity recognition, Sensors 14 (4) (2014) 6474–6499.
  • (31) I. Guyon, A. Elisseeff, An introduction to feature extraction, in: Feature extraction, Springer, 2006, pp. 1–25.
  • (32) D. D. Lewis, J. Catlett, Heterogeneous uncertainty sampling for supervised learning, in: Machine Learning Proceedings 1994, Elsevier, 1994, pp. 148–156.
  • (33) N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer, SMOTE: Synthetic minority over-sampling technique, Journal of Artificial Intelligence Research 16 (2002) 321–357.
  • (34) A. Saffari, C. Leistner, J. Santner, M. Godec, H. Bischof, On-line random forests, in: Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on, IEEE, 2009, pp. 1393–1400.
  • (35) T. Sztyler, H. Stuckenschmidt, Online personalization of cross-subjects based activity recognition models on wearable devices, in: Pervasive Computing and Communications (PerCom), 2017 IEEE International Conference on, IEEE, 2017, pp. 180–189.
  • (36) T. Sztyler, H. Stuckenschmidt, On-body localization of wearable devices: An investigation of position-aware activity recognition, in: 2016 IEEE International Conference on Pervasive Computing and Communications (PerCom), IEEE Computer Society, Washington, D.C., 2016, pp. 1–9.
  • (37) B. Glimm, I. Horrocks, B. Motik, G. Stoilos, Z. Wang, HermiT: An OWL 2 reasoner, Journal of Automated Reasoning 53 (3) (2014) 245–269.
  • (38) M. Horridge, S. Bechhofer, The OWL API: A Java API for OWL ontologies, Semantic Web 2 (1) (2011) 11–21.
  • (39) J. Gama, R. Sebastião, P. P. Rodrigues, On evaluating stream learning algorithms, Machine learning 90 (3) (2013) 317–346.
  • (40) G. Civitarese, D. Riboni, C. Bettini, Z. H. Janjua, R. Helaoui, Nectar: Knowledge-based collaborative active learning for activity recognition, in: Pervasive Computing and Communications (PerCom), 2018 IEEE International Conference on, IEEE, 2018.