Improving Human Activity Recognition Through Ranking and Re-ranking
We propose two well-motivated ranking-based methods to enhance the performance of current state-of-the-art human activity recognition systems. First, as an improvement over the classic power normalization method, we propose a parameter-free ranking technique called rank normalization (RaN). RaN normalizes each dimension of the video features to address the sparse and bursty distribution problems of Fisher Vectors and VLAD. Second, inspired by curriculum learning, we introduce a training-free re-ranking technique called multi-class iterative re-ranking (MIR). MIR captures relationships among action classes by separating easy and typical videos from difficult ones and re-ranking the prediction scores of classifiers accordingly. We demonstrate that our methods significantly improve the performance of state-of-the-art motion features on six real-world datasets.
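The rank normalization (RaN) idea can be illustrated with a small sketch. The abstract only says that RaN normalizes each dimension of the video features in a parameter-free, ranking-based way; the sketch below assumes this means replacing each value in a feature dimension with its rank among all samples, scaled to [0, 1]. The function name `rank_normalize`, the tie handling, and the scaling are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def rank_normalize(X):
    """Sketch of rank normalization (RaN).

    For each feature dimension (column), replace every value with its
    rank across the samples, scaled to [0, 1]. Ranking flattens the
    sparse, bursty value distributions typical of Fisher Vectors and
    VLAD, with no tunable parameter (unlike power normalization).

    Assumption: ties are broken by sample order; the paper's exact
    tie handling and scaling may differ.
    """
    n = X.shape[0]
    if n < 2:
        return np.zeros_like(X, dtype=float)
    # Double argsort gives, per column, each value's rank from 0 to n-1.
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    return ranks / (n - 1)

# Example: three videos, two feature dimensions.
X = np.array([[0.0, 10.0],
              [5.0,  0.0],
              [1.0,  1.0]])
print(rank_normalize(X))
# Each column now holds ranks 0, 0.5, 1 in the order of its values.
```

Note that the transform depends only on the ordering of values within each dimension, so extreme "bursty" magnitudes are mapped to the same evenly spaced grid as moderate ones.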