Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation

by Sina Honari et al.

Deep neural networks with alternating convolutional, max-pooling and decimation layers are widely used in state-of-the-art architectures for computer vision. Max-pooling purposefully discards precise spatial information in order to create features that are more robust, typically organized as lower-resolution spatial feature maps. On some tasks, such as whole-image classification, max-pooling-derived features are well suited; however, for tasks requiring precise localization, such as pixel-level prediction and segmentation, max-pooling destroys exactly the information required to perform well. Precise localization may be preserved by shallow convnets without pooling, but at the expense of robustness. Can we have our max-pooled multi-layered cake and eat it too? Several papers have proposed summation- and concatenation-based methods for combining upsampled coarse, abstract features with finer features to produce robust pixel-level predictions. Here we introduce another model, dubbed Recombinator Networks, in which coarse features inform finer features early in their formation, so that the finer features can make use of several layers of computation in deciding how to use the coarse features. The model is trained once, end-to-end, and performs better than summation-based architectures, reducing the error of the previous state of the art on two facial keypoint datasets, AFW and AFLW, by 30%, and beating the current state of the art on 300W without using extra data. We improve performance even further by adding a denoising prediction model based on a novel convnet formulation.
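The core architectural difference the abstract describes can be illustrated with a minimal NumPy sketch (all function names here are hypothetical, not from the paper's code): a coarse, low-resolution feature map is upsampled and then either summed with the finer map, which fixes the channel count, or concatenated along channels as in the Recombinator merge, which leaves subsequent convolutional layers free to learn how the two resolutions should combine.

```python
import numpy as np

def upsample_nn(x, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def merge_sum(fine, coarse, factor=2):
    """Summation-based merge: upsampled coarse features are added
    to the fine features, so channel counts must match."""
    return fine + upsample_nn(coarse, factor)

def merge_concat(fine, coarse, factor=2):
    """Recombinator-style merge: upsampled coarse features are
    concatenated along the channel axis, so later conv layers can
    learn how to use the coarse information."""
    return np.concatenate([fine, upsample_nn(coarse, factor)], axis=0)

fine = np.random.randn(16, 32, 32)    # finer branch: 16 channels at 32x32
coarse = np.random.randn(16, 16, 16)  # coarser branch: 16 channels at 16x16

print(merge_sum(fine, coarse).shape)     # (16, 32, 32)
print(merge_concat(fine, coarse).shape)  # (32, 32, 32)
```

The concatenated output doubles the channel count, which is the price paid for deferring the coarse/fine combination to the following layers instead of fixing it as an element-wise sum.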




Related papers:

- A New Multiple Max-pooling Integration Module and Cross Multiscale Deconvolution Network Based on Image Semantic Segmentation
- Hypercolumns for Object Segmentation and Fine-grained Localization
- Fractional Max-Pooling
- LIP: Local Importance-based Pooling
- Auto-pooling: Learning to Improve Invariance of Image Features from Image Sequences
- Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks
- FeatGeNN: Improving Model Performance for Tabular Data with Correlation-based Feature Extraction
