Multi-task Self-Supervised Learning for Human Activity Detection

07/27/2019
by Aaqib Saeed, et al.

Deep learning methods are successfully used in applications pertaining to ubiquitous computing, health, and well-being. In particular, the area of human activity recognition (HAR) has been transformed by convolutional and recurrent neural networks, thanks to their ability to learn semantic representations from raw input. However, extracting generalizable features requires massive amounts of well-curated data, a notoriously challenging task hindered by privacy concerns and annotation costs. Unsupervised representation learning is therefore of prime importance for leveraging the vast amount of unlabeled data produced by smart devices. In this work, we propose a novel self-supervised technique for learning features from sensory data that does not require access to any form of semantic labels. We train a multi-task temporal convolutional network to recognize transformations applied to an input signal. By exploiting these transformations, we demonstrate that simple binary-classification auxiliary tasks provide a strong supervisory signal for extracting features useful for the downstream task. We extensively evaluate the proposed approach on several publicly available datasets for smartphone-based HAR in unsupervised, semi-supervised, and transfer learning settings. Our method achieves performance superior to or comparable with fully-supervised networks, and it performs significantly better than autoencoders. Notably, in the semi-supervised case, the self-supervised features substantially boost the detection rate, attaining a kappa score between 0.7 and 0.8 with only 10 labeled examples per class. We obtain similarly strong performance even when the features are transferred from a different data source. While this paper focuses on HAR as the application domain, the proposed technique is general and could be applied to a wide variety of problems in other areas.
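The pretext task described above — recognizing which transformation was applied to a raw signal — can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: the specific transformation set, parameter values, and helper names (`add_noise`, `permute`, `make_pretext_batch`) are assumptions chosen to show the idea of turning each transformation into a binary "applied / not applied" classification task.

```python
import numpy as np

# Illustrative signal transformations for self-supervised pretext tasks.
# Each one yields a binary task: did this transformation get applied?

def add_noise(x, sigma=0.05):
    """Additive Gaussian noise (sigma is an assumed default)."""
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, factor=1.5):
    """Amplitude scaling by a fixed factor."""
    return x * factor

def negate(x):
    """Sign inversion of the signal."""
    return -x

def time_flip(x):
    """Reverse the signal along the time axis."""
    return x[::-1].copy()

def permute(x, n_segments=4):
    """Split into segments and shuffle their order."""
    segments = np.array_split(x, n_segments)
    order = np.random.permutation(n_segments)
    return np.concatenate([segments[i] for i in order])

TRANSFORMS = [add_noise, scale, negate, time_flip, permute]

def make_pretext_batch(signal):
    """For each transformation, emit (window, task_index, label) triples:
    label 1 if the transformation was applied, 0 for the original window.
    A multi-task network would have one binary output head per task."""
    batch = []
    for task_idx, transform in enumerate(TRANSFORMS):
        batch.append((signal, task_idx, 0))             # original
        batch.append((transform(signal), task_idx, 1))  # transformed
    return batch

# Toy 1-D "accelerometer" window of 128 samples.
x = np.sin(np.linspace(0.0, 10.0, 128))
batch = make_pretext_batch(x)
```

A shared temporal convolutional trunk with one small binary head per transformation would then be trained on such triples; the trunk's representation is what gets reused for downstream activity recognition.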


