AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

05/30/2019
by   Michael S. Ryoo, et al.

Learning to represent videos is a very challenging task both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to a third dimension (using a limited number of space-time modules such as 3D convolutions) or by introducing a handcrafted two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream space-time convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin.
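The core idea above, blocks that start out overly connected, combine their inputs with learnable connection weights, and are then pruned or mutated based on those weights, can be illustrated with a minimal sketch. The class name, the softmax weighting, and the pruning threshold below are illustrative assumptions, not the paper's actual implementation; real blocks would be space-time convolutional modules rather than simple weighted sums.

```python
import numpy as np

def softmax(w):
    # numerically stable softmax over a 1-D weight vector
    e = np.exp(w - np.max(w))
    return e / e.sum()

class OverlyConnectedBlock:
    """Hypothetical sketch of one block in an overly-connected
    architecture: it receives features from every earlier block,
    mixes them with learnable connection weights, and can report
    which incoming connections are weak enough to prune during
    an evolutionary search step."""

    def __init__(self, num_inputs, seed=0):
        rng = np.random.default_rng(seed)
        # one learnable scalar weight per incoming connection
        self.w = rng.normal(size=num_inputs)

    def combine(self, inputs):
        # inputs: list of feature arrays with identical shapes;
        # softmax keeps the mixture weights positive and summing to 1
        alphas = softmax(self.w)
        return sum(a * x for a, x in zip(alphas, inputs))

    def weak_connections(self, threshold=0.1):
        # indices of connections whose learned weight fell below
        # the threshold; candidates for removal by the search
        return [i for i, a in enumerate(softmax(self.w)) if a < threshold]
```

In this sketch, because the softmax weights sum to one, a block fed identical inputs simply reproduces them, and connections that training drives toward zero weight surface via `weak_connections`, which is the sense in which weight learning can guide the evolution of connectivity.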


Related research

- 11/26/2018, Evolving Space-Time Neural Architectures for Videos: In this paper, we present a new method for evolving video CNN models to ...
- 04/10/2017, ActionVLAD: Learning spatio-temporal aggregation for action classification: In this work, we introduce a new video representation for action classif...
- 03/11/2019, Investigation on Combining 3D Convolution of Image Data and Optical Flow to Generate Temporal Action Proposals: In this paper, a novel two-stream architecture for the task of temporal ...
- 11/24/2018, Self-Supervised Video Representation Learning with Space-Time Cubic Puzzles: Self-supervised tasks such as colorization, inpainting and jigsaw puzzle...
- 03/17/2023, Leaping Into Memories: Space-Time Deep Feature Synthesis: The success of deep learning models has led to their adaptation and adop...
- 12/30/2017, A Unified Method for First and Third Person Action Recognition: In this paper, a new video classification methodology is proposed which ...
- 04/12/2017, Predictive-Corrective Networks for Action Detection: While deep feature learning has revolutionized techniques for static-ima...
