What Happened to My Dog in That Network: Unraveling Top-down Generators in Convolutional Neural Networks

11/23/2015
by Patrick W. Gallagher, et al.

Top-down information plays a central role in human perception, yet contributes relatively little to many current state-of-the-art deep networks, such as Convolutional Neural Networks (CNNs). This work explores a path by which top-down information can have a direct impact within current deep networks. We do so by learning and using "generators" corresponding to the network-internal effects of three types of transformation (each a restriction of a general affine transformation): rotation, scaling, and translation. We demonstrate how these learned generators can transfer top-down information to novel settings, as mediated by the "feature flows" that the transformations (and their associated generators) correspond to inside the network. Specifically, we explore three aspects:

1) Synthesizing transformed images: given a previously unseen image, produce versions of that image corresponding to one or more specified transformations.

2) "Zero-shot learning": when provided with a feature flow corresponding to the effect of a transformation of unknown amount, leverage the learned generators to accurately categorize the amount of transformation, even for amounts never observed during training.

3) (Inside-CNN) "data augmentation": improve the classification performance of an existing network by using the learned generators to directly provide additional training "inside the CNN".
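To make the idea of a learned generator concrete, the following is a minimal sketch (not the paper's actual method) of one plausible formulation: treat the generator as a linear map `G` over feature vectors and fit it by least squares so that `G` carries features of original images to features of their transformed (e.g. rotated) counterparts. All names and the use of random vectors as stand-ins for CNN features are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: learn a linear "generator" G that maps
# CNN features of original images to features of transformed versions,
# i.e. solve min_G ||G @ F_orig - F_trans||_F by least squares.
rng = np.random.default_rng(0)

d, n = 64, 500                      # feature dimension, number of examples
F_orig = rng.normal(size=(d, n))    # stand-in for features of original images
G_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # unknown "feature flow"
F_trans = G_true @ F_orig           # stand-in for features of transformed images

# lstsq solves A x = b; transpose so each feature pair is one equation row:
# F_orig.T @ G.T ~= F_trans.T
G_hat = np.linalg.lstsq(F_orig.T, F_trans.T, rcond=None)[0].T

# Apply the learned generator to a previously unseen feature vector,
# predicting the transformed-image feature without transforming any pixels.
f_new = rng.normal(size=d)
f_new_trans = G_hat @ f_new

print(np.allclose(G_hat, G_true, atol=1e-6))  # should print True here
```

With more examples than feature dimensions (n > d) and well-conditioned features, the least-squares fit recovers the flow; the same learned `G_hat` can then be applied to novel features, which is the mechanism the abstract's three applications (synthesis, zero-shot categorization, inside-CNN augmentation) all rely on.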


Related research

11/25/2021
Quantised Transforming Auto-Encoders: Achieving Equivariance to Arbitrary Transformations in Deep Networks
In this work we investigate how to achieve equivariance to input transfo...

12/06/2017
Top-down Flow Transformer Networks
We study the deformation fields of feature maps across convolutional net...

06/19/2019
Learning Generalized Transformation Equivariant Representations via Autoencoding Transformations
Learning Transformation Equivariant Representations (TERs) seeks to capt...

07/22/2021
Geometric Data Augmentation Based on Feature Map Ensemble
Deep convolutional networks have become the mainstream in computer visio...

05/17/2020
FuCiTNet: Improving the generalization of deep learning networks by the fusion of learned class-inherent transformations
It is widely known that very small datasets produce overfitting in Deep ...

06/13/2023
Effects of Data Enrichment with Image Transformations on the Performance of Deep Networks
Images cannot always be expected to come in a certain standard format an...

04/26/2021
Invariant polynomials and machine learning
We present an application of invariant polynomials in machine learning. ...
