Self-Supervised Human Activity Recognition by Augmenting Generative Adversarial Networks
This article proposes a novel approach for augmenting a generative adversarial network (GAN) with a self-supervised task in order to improve its ability to encode video representations that are useful in downstream tasks such as human activity recognition. In the proposed method, input video frames are randomly transformed by spatial transformations, such as rotation, translation, and shearing, or by temporal transformations, such as shuffling the temporal order of frames. The discriminator is then encouraged to predict the applied transformation through an auxiliary loss. Experimental results demonstrate that the proposed method outperforms baseline methods at providing useful video representations for human activity recognition, evaluated on the KTH, UCF101, and Ball-Drop datasets. Ball-Drop is a dataset specifically designed for measuring executive functions in children through physically and cognitively demanding tasks. Using features from the proposed method instead of the baseline methods increased top-1 classification accuracy by more than 4%. The article also studies the contribution of the different transformations to the downstream task.
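The pretext task described above boils down to a labeled-transformation pipeline: each clip is randomly transformed, and the transformation index becomes the target for an auxiliary head on the discriminator. The sketch below is a simplified, hypothetical illustration of that data pipeline only (not the authors' implementation): frames are plain 2D lists, and the transform set uses 90-degree rotation, circular translation, and temporal reversal as stand-ins for the rotation/translation/shearing and frame-shuffling transforms named in the abstract.

```python
import random

# Hypothetical pretext-task pipeline: apply a randomly chosen transformation
# to a clip (a list of H x W frames) and return the transformed clip together
# with an integer label that the auxiliary discriminator head would predict.

def rotate90(frame):
    # Rotate a 2D frame 90 degrees clockwise.
    return [list(row) for row in zip(*frame[::-1])]

def translate(frame, shift=1):
    # Circularly shift each row to the right by `shift` pixels.
    return [row[-shift:] + row[:-shift] for row in frame]

TRANSFORMS = {
    0: lambda clip: clip,                         # identity (no transform)
    1: lambda clip: [rotate90(f) for f in clip],  # spatial: rotation
    2: lambda clip: [translate(f) for f in clip], # spatial: translation
    3: lambda clip: clip[::-1],                   # temporal: reversed frame order
}

def make_pretext_sample(clip, rng=random):
    """Pick a transformation at random; its index is the pretext label."""
    label = rng.randrange(len(TRANSFORMS))
    return TRANSFORMS[label](clip), label
```

During training, the auxiliary loss would then be a standard cross-entropy between the discriminator's transformation-prediction head and these labels, added to the usual adversarial loss.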