Complex-Valued Autoencoders

08/20/2011
by Pierre Baldi, et al.

Autoencoders are unsupervised machine learning circuits whose learning goal is to minimize a distortion measure between inputs and outputs. Linear autoencoders can be defined over any field, yet only real-valued linear autoencoders have been studied so far. Here we study complex-valued linear autoencoders, where the components of the training vectors and adjustable matrices are defined over the complex field with the L_2 norm. We provide simpler and more general proofs that unify the real-valued and complex-valued cases, showing that in both cases the landscape of the error function is invariant under certain groups of transformations. The landscape has no local minima, a family of global minima associated with Principal Component Analysis, and many families of saddle points associated with orthogonal projections onto subspaces spanned by sub-optimal subsets of eigenvectors of the covariance matrix. The theory yields several iterative, convergent learning algorithms and a clear understanding of the generalization properties of the trained autoencoders, and it applies equally to the hetero-associative case when external targets are provided. Partial results on deep architectures as well as the differential geometry of autoencoders are also presented. The general framework described here is useful for classifying autoencoders and identifying general common properties that ought to be investigated for each class, illuminating some of the connections between information theory, unsupervised learning, clustering, Hebbian learning, and autoencoders.
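The PCA characterization of the global minima can be checked numerically: for complex data with the L_2 norm, projecting onto the top eigenvectors of the Hermitian covariance matrix attains the minimal rank-p reconstruction error, while a projection onto a sub-optimal eigenvector subset (a saddle point of the landscape) does worse. The following is a minimal sketch of this fact, not code from the paper; the data, dimensions, and variable names are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): the optimal rank-p complex
# linear autoencoder is the orthogonal projection onto the top-p
# eigenvectors of the covariance matrix (the PCA solution).
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 8, 500, 3                   # input dim, samples, hidden size

# Complex data; columns are samples, with distinct variances per coordinate
# so the covariance eigenvalues are well separated.
X = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
X *= np.arange(1, d + 1)[:, None]

C = X @ X.conj().T / n                # Hermitian covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order

U = eigvecs[:, -p:]                   # top-p eigenvectors (PCA basis)
P_top = U @ U.conj().T                # decoder @ encoder: A = U, B = U^H

V = eigvecs[:, :p]                    # a sub-optimal eigenvector subset
P_bot = V @ V.conj().T                # -> a saddle point of the landscape

err_top = np.linalg.norm(X - P_top @ X) ** 2 / n
err_bot = np.linalg.norm(X - P_bot @ X) ** 2 / n

# The PCA projection attains the minimal distortion, which equals the sum
# of the discarded eigenvalues (up to floating-point error).
print(err_top, eigvals[:-p].sum())
print(err_bot, eigvals[p:].sum())
```

Running this shows `err_top` matching the sum of the d-p smallest eigenvalues and lying well below `err_bot`, consistent with the landscape result that every sub-optimal eigenvector subset yields a strictly worse (saddle-point) distortion.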


