Derivation of Symmetric PCA Learning Rules from a Novel Objective Function

05/24/2020
by   Ralf Möller, et al.

Neural learning rules for principal component / subspace analysis (PCA / PSA) can be derived by maximizing an objective function (the summed variance of the projections onto the subspace axes) under an orthonormality constraint. For a subspace with a single axis, the optimization produces the principal eigenvector of the data covariance matrix. Hierarchical learning rules with deflation procedures can then be used to extract multiple eigenvectors. However, for a subspace with multiple axes, the optimization leads to PSA learning rules which only converge to axes spanning the principal subspace but not to the principal eigenvectors. A modified objective function with distinct weight factors had to be introduced to produce PCA learning rules. Optimization of this modified objective function for multiple axes leads to symmetric learning rules which do not require deflation procedures. For the PCA case, the estimated principal eigenvectors are ordered (with respect to the corresponding eigenvalues) according to the order of the weight factors. Here we introduce an alternative objective function where it is not necessary to introduce fixed weight factors; instead, the alternative objective function uses squared summands. Optimization leads to symmetric PCA learning rules which converge to the principal eigenvectors, but without imposing an order. In place of the diagonal matrices with fixed weight factors, variable diagonal matrices appear in the learning rules. We analyze this alternative approach by determining the fixed points of the constrained optimization. The behavior of the constrained objective function at the fixed points is analyzed, confirming both the PCA behavior and the fact that no order is imposed. Different ways to derive learning rules from the optimization of the objective function are presented, and the role of the terms appearing in these learning rules is explored.
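To make the contrast between the objective functions concrete, the following sketch summarizes one plausible reading of the abstract (the covariance matrix C, the orthonormality constraint W^T W = I, the weight factors d_i, and the exact form of the squared-summand objective are assumptions based on the wording above, not equations quoted from the paper):

    Summed-variance objective (leads to PSA rules; only the spanned subspace is determined):
        J_{PSA}(W) = \sum_{i=1}^{m} w_i^{\top} C \, w_i ,  subject to  W^{\top} W = I

    Weighted objective with distinct fixed weight factors d_1 > d_2 > ... > d_m > 0 (leads to PCA rules with eigenvectors ordered by the weights):
        J_{wPCA}(W) = \sum_{i=1}^{m} d_i \, w_i^{\top} C \, w_i ,  subject to  W^{\top} W = I

    Alternative objective with squared summands (leads to symmetric PCA rules without an imposed order):
        J_{sq}(W) = \sum_{i=1}^{m} \left( w_i^{\top} C \, w_i \right)^2 ,  subject to  W^{\top} W = I

Under these assumptions, and for a positive semi-definite C, the constrained maximum of J_{sq} is attained when the w_i are the m principal eigenvectors: for a fixed subspace, the sum of squared diagonal entries of W^{\top} C W is largest when the w_i diagonalize the restriction of C to that subspace, and the resulting sum of squared eigenvalues is largest for the principal subspace. Since permuting the columns of W leaves J_{sq} unchanged, the eigenvectors can appear in any order, consistent with the claim that no order is imposed.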

