Long-Term Feature Banks for Detailed Video Understanding

12/12/2018
by Chao-Yuan Wu, et al.

To understand the world, we humans constantly need to relate the present to the past, and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank---supportive information extracted over the entire span of a video---to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades.
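As a rough illustration of the idea, the long-term feature bank can be read as a set of clip-level features computed over the whole video, which a short-clip model then draws context from. The sketch below is a minimal, assumed rendering of that pattern (the function name, shapes, and the attention-style pooling are illustrative, not the paper's actual implementation): a clip's features attend over the bank and are fused with the pooled long-term context.

```python
import numpy as np

def feature_bank_operator(short_term, long_term):
    """Fuse short-clip features with context pooled from a long-term bank.

    A hypothetical sketch: `short_term` holds N feature vectors from the
    current 2-5 second clip, `long_term` holds M feature vectors extracted
    over the entire video. Both have dimensionality d.
    """
    d = short_term.shape[-1]
    # Similarity between current-clip features and every bank entry.
    logits = short_term @ long_term.T / np.sqrt(d)          # (N, M)
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Long-term context pooled for each short-term feature.
    context = weights @ long_term                           # (N, d)
    # Concatenate short-term features with their long-term context.
    return np.concatenate([short_term, context], axis=1)    # (N, 2d)

# Hypothetical shapes: 4 clip features, a bank of 60 clip-level features.
S = np.random.randn(4, 256)
L = np.random.randn(60, 256)
out = feature_bank_operator(S, L)
print(out.shape)  # (4, 512)
```

The key design point the abstract describes is that the bank spans the whole video, so the model can relate the present clip to events far outside its 2-5 second window.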


Related research:

04/24/2018 · ECO: Efficient Convolutional Network for Online Video Understanding
The state of the art in video understanding suffers from two problems: (...

06/21/2021 · Towards Long-Form Video Understanding
Our world offers a never-ending stream of visual stimuli, yet today's vi...

05/24/2021 · Taylor saves for later: disentanglement for video prediction using Taylor representation
Video prediction is a challenging task with wide application prospects i...

01/20/2022 · MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition
While today's video recognition systems parse snapshots or short clips a...

08/11/2016 · Sequence Graph Transform (SGT): A Feature Extraction Function for Sequence Data Mining (Extended Version)
The ubiquitous presence of sequence data across fields such as the web, ...

03/10/2020 · Super-reflective Data: Speculative Imaginings of a World Where Data Works for People
It's the year 2020, and every space and place on- and off-line has been ...

07/24/2023 · Multiscale Video Pretraining for Long-Term Activity Forecasting
Long-term activity forecasting is an especially challenging research pro...
