Learning Local Invariant Mahalanobis Distances

02/04/2015
by Ethan Fetaya, et al.

For many tasks and data types, there are natural transformations to which the data should be invariant or insensitive. For instance, in visual recognition, natural images should be insensitive to rotation and translation. This requirement and its implications have been important in many machine learning applications, and tolerance for image transformations has primarily been achieved by using robust feature vectors. In this paper we propose a novel and computationally efficient way to learn a local Mahalanobis metric per datum, and show how we can learn a local metric that is invariant to any given transformation in order to improve performance.
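To make the central object concrete, here is a minimal NumPy sketch of a Mahalanobis distance under a positive semidefinite (PSD) matrix M, together with one generic way to make such a metric insensitive to given transformation directions by projecting them out. The function names, the tangent-projection construction, and the example data are illustrative assumptions; this is not the optimization procedure proposed in the paper.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) for a PSD matrix M."""
    d = x - y
    # Clamp at zero to guard against tiny negative values from floating-point rounding.
    return float(np.sqrt(max(d @ M @ d, 0.0)))

def invariant_psd_metric(M, tangents):
    """Make a PSD metric insensitive to the given transformation directions.

    `tangents` is a (k, n) array whose rows span the local tangent directions of the
    transformations (e.g. finite-difference image translations or rotations around a datum).
    Projecting M onto the orthogonal complement of that span yields a metric that assigns
    (near-)zero length to movements along those directions. This is a generic illustration
    of local invariance, not the learning algorithm from the paper.
    """
    T = np.atleast_2d(tangents)
    Q, _ = np.linalg.qr(T.T)                 # orthonormal basis of the tangent span
    P = np.eye(M.shape[0]) - Q @ Q.T         # projector onto its orthogonal complement
    return P.T @ M @ P                       # still PSD, but blind to the tangent directions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    M = A.T @ A                              # an arbitrary PSD metric
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    t = rng.standard_normal(5)               # hypothetical transformation tangent at x
    M_inv = invariant_psd_metric(M, t)
    # The second distance is (numerically) zero: moving along t costs nothing under M_inv.
    print(mahalanobis(x, y, M), mahalanobis(x, x + 0.1 * t, M_inv))
```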

