Two-Stream AMTnet for Action Detection

04/03/2020
by   Suman Saha, et al.

In this paper, we propose a new deep neural network architecture for online action detection, termed Two-Stream AMTnet, which leverages recent advances in video-based action representation [1] and incremental action tube generation [2]. The majority of current action detectors follow a frame-based representation and a late-fusion step, followed by offline action tube building. These choices are sub-optimal: frame-based features barely encode temporal relations; late fusion prevents the network from learning robust spatiotemporal features; and offline action tube generation is unsuitable for many real-world problems, such as autonomous driving and human-robot interaction. The key contributions of this work are: (1) combining AMTnet's 3D proposal architecture with an online action tube generation technique, which allows the model to learn the stronger temporal features needed for accurate action detection and facilitates online inference; (2) an efficient fusion technique that allows the deep network to learn strong spatiotemporal action representations. This is achieved by augmenting the previous Action Micro-Tube (AMTnet) action detection framework in three distinct ways: (1) by adding a parallel motion stream to the original appearance one in AMTnet; (2) in opposition to state-of-the-art action detectors, which train appearance and motion streams separately and fuse RGB and flow cues with a test-time late-fusion scheme, by jointly training both streams in an end-to-end fashion and merging RGB and optical-flow features at training time; (3) by introducing an online action tube generation algorithm which works at video level and in real time (when exploiting only appearance features). Two-Stream AMTnet exhibits superior action detection performance over state-of-the-art approaches on the standard action detection benchmarks.
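The online tube generation idea can be illustrated with a minimal sketch: per-frame detections are greedily linked to existing tubes by spatial IoU overlap as each frame arrives, so no future frames are needed. This is only a simplified illustration of incremental tube linking, not the paper's actual algorithm; all names (`Tube`, `link_frame`, `iou_thresh`) are hypothetical.

```python
# Minimal sketch of online (incremental) action tube construction:
# detections from each new frame are greedily attached to the tube whose
# last box overlaps them most (above a threshold); leftovers start new tubes.
# Illustrative only -- not the algorithm from the Two-Stream AMTnet paper.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class Tube:
    """A growing action tube: one box (and score) per linked frame."""
    def __init__(self, box, score):
        self.boxes = [box]
        self.scores = [score]

    @property
    def score(self):
        # Mean detection score over the tube's length.
        return sum(self.scores) / len(self.scores)

def link_frame(tubes, detections, iou_thresh=0.3):
    """Extend tubes with one frame's detections, processed online.

    detections: list of ((x1, y1, x2, y2), score) pairs for this frame.
    Higher-scoring tubes pick their best-overlapping detection first;
    unmatched detections seed new tubes.
    """
    unused = list(range(len(detections)))
    for tube in sorted(tubes, key=lambda t: -t.score):
        best, best_iou = None, iou_thresh
        for i in unused:
            box, _ = detections[i]
            overlap = iou(tube.boxes[-1], box)
            if overlap > best_iou:
                best, best_iou = i, overlap
        if best is not None:
            box, score = detections[best]
            tube.boxes.append(box)
            tube.scores.append(score)
            unused.remove(best)
    for i in unused:
        tubes.append(Tube(*detections[i]))
    return tubes

# Two frames processed incrementally: the nearby box extends the first
# tube, the far-away box starts a second one.
tubes = []
tubes = link_frame(tubes, [((0, 0, 10, 10), 0.9)])
tubes = link_frame(tubes, [((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)])
```

Because each frame is consumed as it arrives, such a linker can run alongside the detector at inference time, which is what makes video-level, real-time tube output possible.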

