Improved Auto-Encoding using Deterministic Projected Belief Networks

09/14/2023
by Paul M. Baggenstoss, et al.

In this paper, we exploit the unique properties of a deterministic projected belief network (D-PBN) to take full advantage of trainable compound activation functions (TCAs). A D-PBN is a type of auto-encoder that operates by "backing up" through a feed-forward neural network. TCAs are activation functions with complex monotonically increasing shapes that change the distribution of the data so that the linear transformation that follows is more effective. Because a D-PBN operates by "backing up", the TCAs are inverted in the reconstruction process, restoring the original distribution of the data and thus exploiting a given TCA in both analysis and reconstruction. We show that a D-PBN auto-encoder with TCAs can significantly outperform standard auto-encoders, including variational auto-encoders.
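To make the inversion idea concrete, below is a minimal numerical sketch of a monotonic compound activation and its inverse. The parameterization (the identity plus a sum of scaled, shifted sigmoids, monotonically increasing whenever the weights and slopes are non-negative) and the bisection-based inverse are illustrative assumptions, not the paper's exact formulation; the names tca and tca_inverse are hypothetical.

    import numpy as np

    def tca(x, a, b, c):
        # Hypothetical TCA form: f(x) = x + sum_k a_k * sigmoid(b_k * x + c_k).
        # With a_k >= 0 and b_k >= 0, f is monotonically increasing, hence
        # invertible -- the property the D-PBN relies on when "backing up".
        s = 1.0 / (1.0 + np.exp(-(np.outer(b, x) + c[:, None])))
        return x + (a[:, None] * s).sum(axis=0)

    def tca_inverse(y, a, b, c, lo=-30.0, hi=30.0, iters=60):
        # Invert the monotonic TCA by bisection; in reconstruction this
        # restores the distribution that the TCA reshaped during analysis.
        lo = np.full_like(y, lo, dtype=float)
        hi = np.full_like(y, hi, dtype=float)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            below = tca(mid, a, b, c) < y  # root lies above mid where True
            lo = np.where(below, mid, lo)
            hi = np.where(below, hi, mid)
        return 0.5 * (lo + hi)

    # Round trip: inversion recovers the input up to bisection tolerance.
    a = np.array([1.0, 2.0]); b = np.array([1.0, 0.5]); c = np.array([0.0, -1.0])
    x = np.linspace(-3.0, 3.0, 7)
    assert np.allclose(tca_inverse(tca(x, a, b, c), a, b, c), x, atol=1e-8)

Because the sketched activation is strictly increasing, the round trip is exact up to numerical tolerance, which mirrors how the D-PBN can undo each TCA when reconstructing the data.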

Related research:

- 04/25/2022: Trainable Compound Activation Functions for Machine Learning
  Activation functions (AF) are necessary components of neural networks th...

- 04/13/2021: Maximum Entropy Auto-Encoding
  In this paper, it is shown that an auto-encoder using optimal reconstruc...

- 02/18/2020: A Neural Network Based on First Principles
  In this paper, a Neural network is derived from first principles, assumi...

- 01/16/2013: Saturating Auto-Encoders
  We introduce a simple new regularizer for auto-encoders whose hidden-uni...

- 04/25/2022: Using the Projected Belief Network at High Dimensions
  The projected belief network (PBN) is a layered generative network (LGN)...

- 02/10/2016: A Theory of Generative ConvNet
  We show that a generative random field model, which we call generative C...

- 04/10/2021: Latent Code-Based Fusion: A Volterra Neural Network Approach
  We propose a deep structure encoder using the recently introduced Volter...
