Hierarchical Recurrent Filtering for Fully Convolutional DenseNets

10/05/2018, by Jörg Wagner, et al.

Generating a robust representation of the environment is a crucial ability of learning agents. Deep-learning-based methods have greatly improved perception systems but still fail in challenging situations. These failures often cannot be resolved from a single image. In this work, we present a parameter-efficient temporal filtering concept which extends an existing single-frame segmentation model to work with multiple frames. The resulting recurrent architecture temporally filters representations on all abstraction levels in a hierarchical manner, while decoupling temporal dependencies from the scene representation. Using a synthetic dataset, we show the ability of our model to cope with data perturbations and highlight the importance of recurrent and hierarchical filtering.
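The core idea can be sketched in a few lines: attach one small recurrent filter to each abstraction level of a (frozen) single-frame encoder, so that temporal smoothing lives in the filter cells while the encoder keeps the scene representation. The sketch below is a minimal, hypothetical numpy illustration of that decoupling; the class names, per-channel scalar gates, and shapes are assumptions for clarity, not the paper's actual convolutional architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TemporalFilterCell:
    """Gated recurrent filter for one abstraction level (hypothetical sketch).

    Blends the current feature map x_t with the filtered state h_{t-1}
    via a learned per-channel gate. A real model would use convolutions
    in place of the per-channel scalar gate used here.
    """
    def __init__(self, channels, rng):
        self.w_x = rng.standard_normal(channels) * 0.1  # gate weight on input
        self.w_h = rng.standard_normal(channels) * 0.1  # gate weight on state
        self.b = np.zeros(channels)                     # gate bias
        self.h = None                                   # filtered state

    def step(self, x):
        # x: feature map of shape (channels, height, width)
        if self.h is None:
            self.h = x.copy()
        # Per-channel update gate from mean activations.
        g = sigmoid(self.w_x * x.mean(axis=(1, 2))
                    + self.w_h * self.h.mean(axis=(1, 2)) + self.b)
        g = g[:, None, None]
        # Convex blend of new input and old state = temporal filtering.
        self.h = g * x + (1.0 - g) * self.h
        return self.h

class HierarchicalRecurrentFilter:
    """One TemporalFilterCell per abstraction level of the encoder."""
    def __init__(self, channels_per_level, rng):
        self.cells = [TemporalFilterCell(c, rng) for c in channels_per_level]

    def step(self, features_per_level):
        # Filter every level of the feature hierarchy independently.
        return [cell.step(f) for cell, f in zip(self.cells, features_per_level)]
```

Feeding a sequence of per-level feature maps through `HierarchicalRecurrentFilter.step` yields temporally filtered features of the same shapes, which a decoder could then consume; because each cell only sees its own level, the temporal dependencies stay separate from the spatial scene representation.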

