Long-Term Feature Banks for Detailed Video Understanding

12/12/2018
by Chao-Yuan Wu, et al.

To understand the world, we humans constantly need to relate the present to the past, and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank---supportive information extracted over the entire span of a video---to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades.
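The core idea is to fuse features computed from the current short clip with a bank of "long-term" features precomputed over the entire video. As a rough, hedged illustration of that idea (not the authors' implementation), the PyTorch sketch below attends from short-term clip features to a long-term feature bank and adds the result back as context; the class name FeatureBankOperator, the feature dimension, and the choice of multi-head attention as the fusion operator are assumptions for illustration only.

# Minimal sketch: fusing short-term clip features with a precomputed
# long-term feature bank via attention. Names and dimensions are hypothetical.
import torch
import torch.nn as nn


class FeatureBankOperator(nn.Module):
    """Attends from short-term clip features to a bank of long-term video features."""

    def __init__(self, dim=512, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, short_term, long_term_bank):
        # short_term:     (batch, n_short, dim) features from the current 2-5 s clip
        # long_term_bank: (batch, n_bank,  dim) features precomputed over the whole video
        context, _ = self.attn(query=short_term, key=long_term_bank, value=long_term_bank)
        # Residual connection: keep the short-term signal, add long-term context.
        return self.norm(short_term + context)


if __name__ == "__main__":
    fbo = FeatureBankOperator(dim=512)
    clip_feats = torch.randn(2, 8, 512)    # features for the current short clip
    bank_feats = torch.randn(2, 100, 512)  # features sampled across the full video
    fused = fbo(clip_feats, bank_feats)
    print(fused.shape)  # torch.Size([2, 8, 512])

Per the abstract, the bank holds supportive information extracted over the entire span of the video, so the short-clip model can relate the present clip to the rest of the video before making predictions.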

Related research

04/24/2018 · ECO: Efficient Convolutional Network for Online Video Understanding
The state of the art in video understanding suffers from two problems: (...

06/21/2021 · Towards Long-Form Video Understanding
Our world offers a never-ending stream of visual stimuli, yet today's vi...

05/24/2021 · Taylor saves for later: disentanglement for video prediction using Taylor representation
Video prediction is a challenging task with wide application prospects i...

01/20/2022 · MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Long-Term Video Recognition
While today's video recognition systems parse snapshots or short clips a...

08/11/2016 · Sequence Graph Transform (SGT): A Feature Extraction Function for Sequence Data Mining (Extended Version)
The ubiquitous presence of sequence data across fields such as the web, ...

03/10/2020 · Super-reflective Data: Speculative Imaginings of a World Where Data Works for People
It's the year 2020, and every space and place on- and off-line has been ...

07/15/2021 · Beyond Goldfish Memory: Long-Term Open-Domain Conversation
Despite recent improvements in open-domain dialogue models, state of the...