Learning Canonical Transformations

11/17/2020
by Zachary Dulberg, et al.

Humans understand a set of canonical geometric transformations (such as translation and rotation) that support generalization by being untethered to any specific object. We explore inductive biases that could help a neural network model learn these transformations in pixel space in a way that can generalize out-of-domain. Specifically, we find that high training set diversity is sufficient for the extrapolation of translation to unseen shapes and scales, and that an iterative training scheme achieves significant extrapolation in the case of rotation in time.
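The iterative training scheme is only named in the abstract, so as a rough illustration, here is a minimal PyTorch sketch of one plausible reading: a network is supervised to apply a single small rotation step in pixel space, and larger rotations are reached at test time by applying the network repeatedly. The StepNet architecture, the 10-degree step size, and the random training images are assumptions made for this sketch, not details taken from the paper.

```python
# Minimal sketch: learn ONE small rotation step in pixel space, then
# compose it iteratively to reach larger rotations. Illustrative only;
# the architecture, step size, and data are assumptions, not the
# authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class StepNet(nn.Module):
    """Conv net mapping an image to the same image after one rotation step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def rotate(imgs, degrees):
    """Differentiable rotation of a batch of images via an affine grid."""
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    theta = torch.tensor([[c, -s, 0.0], [s, c, 0.0]])
    theta = theta.unsqueeze(0).expand(imgs.size(0), -1, -1)
    grid = F.affine_grid(theta, list(imgs.shape), align_corners=False)
    return F.grid_sample(imgs, grid, align_corners=False)

model = StepNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
STEP = 10.0  # assumed size of a single rotation step, in degrees

for _ in range(1000):
    x = torch.rand(16, 1, 28, 28)                 # stand-in for a diverse shape dataset
    loss = F.mse_loss(model(x), rotate(x, STEP))  # supervise one step only
    opt.zero_grad()
    loss.backward()
    opt.step()

# Iterative application at test time: composing the learned single step
# reaches angles never presented as one-shot targets during training.
with torch.no_grad():
    y = torch.rand(1, 1, 28, 28)
    for _ in range(9):  # 9 steps of 10 degrees = 90 degrees total
        y = model(y)
```

Under this reading, "extrapolation of rotation in time" would mean the composed trajectory runs for more steps, and hence larger total angles, than the network was ever supervised on.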


Related Research

07/12/2018 · HyperNets and their application to learning spatial transformations
In this paper we propose a conceptual framework for higher-order artific...

04/24/2014 · A General Homogeneous Matrix Formulation to 3D Rotation Geometric Transformations
We present algebraic projective geometry definitions of 3D rotations so ...

06/30/2020 · Is Robustness To Transformations Driven by Invariant Neural Representations?
Deep Convolutional Neural Networks (DCNNs) have demonstrated impressive ...

11/18/2019 · Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring In Data
Equivariance is a nice property to have as it produces much more paramet...

12/03/2019 · Learning Spatially Structured Image Transformations Using Planar Neural Networks
Learning image transformations is essential to the idea of mental simula...

06/27/2020 · On the generalization of learning-based 3D reconstruction
State-of-the-art learning-based monocular 3D reconstruction methods lear...

04/12/2020 · Learning Spatial Relationships between Samples of Image Shapes
Many applications including image based classification and retrieval of ...