
The benefits of synthetic data for action categorization

by Mohamad Ballout et al.

In this paper, we study the value of using synthetically produced videos as training data for neural networks used for action categorization. Motivated by the fact that the texture and background of a video play little to no significant role in optical flow, we generated simplified texture-less and background-less videos and used the synthetic data to train a Temporal Segment Network (TSN). The results demonstrated that augmenting TSN with simplified synthetic data improved the original network accuracy (68.5%) […] when adding 4,000 videos and 72.4% […]. Training on the simplified synthetic videos alone over 25 classes of UCF-101 achieved 30.71% […]. Finally, results showed that when reducing the number of real videos of UCF-25 to 10% […] 85.41% […].
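The claim that texture and background contribute little to motion estimates can be illustrated with a toy experiment. The sketch below is not the paper's pipeline (which uses a real optical-flow algorithm and TSN); it is a minimal NumPy block-matching motion estimator, with hypothetical helper names (`block_motion`, `make_frame`), showing that a moving square yields the same estimated displacement whether the static background is blank or textured.

```python
import numpy as np

def block_motion(frame1, frame2, block, search):
    """Estimate per-block displacement by exhaustive SSD search
    (a toy stand-in for dense optical flow)."""
    h, w = frame1.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = frame1[y:y + block, x:x + block].astype(float)
            best, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = frame2[yy:yy + block, xx:xx + block]
                    ssd = np.sum((patch - cand) ** 2)
                    if best is None or ssd < best:
                        best, best_d = ssd, (dy, dx)
            flow[by, bx] = best_d  # (dy, dx) of the best match
    return flow

def make_frame(square_xy, textured, size=64, side=8, seed=0):
    """A frame with a bright square; optionally a static noisy background."""
    rng = np.random.default_rng(seed)  # same seed -> identical static texture
    bg = rng.integers(0, 50, (size, size)) if textured else np.zeros((size, size))
    frame = bg.astype(np.uint8).copy()
    x, y = square_xy
    frame[y:y + side, x:x + side] = 255
    return frame

# The square moves 4 px to the right; the background (if any) is static.
for textured in (False, True):
    f1 = make_frame((24, 24), textured)
    f2 = make_frame((28, 24), textured)
    flow = block_motion(f1, f2, block=8, search=5)
    # Block (3, 3) contains the square; its displacement is (0, 4) in both cases.
    print(textured, flow[3, 3])
```

The recovered displacement of the moving square is identical with and without background texture, which is the intuition behind training on texture-less, background-less synthetic videos.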


Related research:
- Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models
- Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture
- Temporal Interpolation as an Unsupervised Pretraining Task for Optical Flow Estimation
- From Third Person to First Person: Dataset and Baselines for Synthesis and Retrieval
- IFOR: Iterative Flow Minimization for Robotic Object Rearrangement
- The Devil in the Details: Simple and Effective Optical Flow Synthetic Data Generation
- Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks