Unsupervised Feature Learning from Temporal Data

04/09/2015
by Ross Goroshin et al.

Current state-of-the-art classification and detection algorithms rely on supervised training. In this work we study unsupervised feature learning in the context of temporally coherent video data. We focus on feature learning from unlabeled video, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning, and show that the trained encoder can be used to define a more temporally and semantically coherent metric.

Related research

- Unsupervised Learning of Spatiotemporally Coherent Metrics (12/18/2014)
- Learning Temporal Embeddings for Complex Video Analysis (05/02/2015)
- A Theory of Feature Learning (04/01/2015)
- Temporally-Coherent Surface Reconstruction via Metric-Consistent Atlases (04/14/2021)
- VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples (03/10/2021)
- Multitask training with unlabeled data for end-to-end sign language fingerspelling recognition (10/09/2017)
- Implicit Label Augmentation on Partially Annotated Clips via Temporally-Adaptive Features Learning (05/24/2019)
