Autoencoder Node Saliency: Selecting Relevant Latent Representations

11/21/2017
by Ya Ju Fan, et al., Lawrence Livermore National Laboratory

The autoencoder is an artificial neural network model that learns hidden representations of unlabeled data. With a linear transfer function, it is closely related to principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder offers no counterpart to the eigenvalues that PCA pairs with its eigenvectors to indicate the importance of each component. We propose a novel supervised node saliency (SNS) method that ranks the hidden nodes by comparing the class distributions of their latent representations against a fixed reference distribution. The latent representations of a hidden node can be described by a one-dimensional histogram. We apply the normalized entropy difference (NED) to measure the "interestingness" of these histograms, and derive a property of NED values that identifies a good classifying node. By applying our methods to real data sets, we demonstrate the ability of SNS to explain what the trained autoencoders have learned.
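As a concrete illustration of the two quantities named in the abstract, the Python sketch below computes NED for a single hidden node's activation histogram, and an SNS-style score that compares the per-bin class mix against a fixed 50/50 reference via cross entropy. The exact formulas (the normalization of NED, the choice of reference distribution, and the direction of the cross entropy) are assumptions made for illustration, not the paper's verbatim definitions; the `encoder.predict` call in the usage comment is likewise a hypothetical Keras-style encoder.

    import numpy as np

    def normalized_entropy_difference(activations, num_bins=10):
        # NED of the node's activation histogram; assumes
        # NED = (log2(m) - H) / log2(m), with H the histogram entropy.
        # Values near 1: activations concentrate in a few bins ("interesting");
        # values near 0: activations are spread almost uniformly.
        counts, _ = np.histogram(activations, bins=num_bins)
        p = counts / counts.sum()
        p = p[p > 0]  # empty bins contribute nothing to the entropy
        entropy = -np.sum(p * np.log2(p))
        return (np.log2(num_bins) - entropy) / np.log2(num_bins)

    def sns_score(activations, labels, num_bins=10):
        # SNS-style score for a binary-labeled node (assumed formulation):
        # average, over non-empty bins, of the cross entropy between a fixed
        # 50/50 reference distribution and the observed class mix in the bin.
        edges = np.histogram_bin_edges(activations, bins=num_bins)
        bin_ids = np.clip(np.digitize(activations, edges[1:-1]), 0, num_bins - 1)
        total, used = 0.0, 0
        for b in range(num_bins):
            in_bin = bin_ids == b
            if not in_bin.any():
                continue
            q = np.clip(labels[in_bin].mean(), 1e-12, 1 - 1e-12)  # class-1 fraction
            total += -(0.5 * np.log2(q) + 0.5 * np.log2(1 - q))
            used += 1
        return total / used

    # Hypothetical usage with a trained encoder (labels y as a 0/1 array):
    #   latent = encoder.predict(X)                  # (n_samples, n_hidden)
    #   scores = [sns_score(latent[:, k], y) for k in range(latent.shape[1])]
    #   ranking = np.argsort(scores)                 # rank hidden nodes by saliency

Under this reading, NED flags nodes whose histograms deviate strongly from a uniform spread, while the SNS ranking depends on the reference distribution chosen; the paper's fixed reference and ranking direction should be taken from the full text.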

Related research:

01/30/2019 · Distinguishing between Normal and Cancer Cells Using Autoencoder Node Saliency
  Gene expression profiles have been widely used to characterize patterns ...

01/28/2022 · Geometric instability of out of distribution data across autoencoder architecture
  We study the map learned by a family of autoencoders trained on MNIST, a...

06/16/2017 · Self-adaptive node-based PCA encodings
  In this paper we propose an algorithm, Simple Hebbian PCA, and prove tha...

04/26/2018 · From Principal Subspaces to Principal Components with Linear Autoencoders
  The autoencoder is an effective unsupervised learning model which is wid...

11/04/2021 · Symmetry-Aware Autoencoders: s-PCA and s-nlPCA
  Nonlinear principal component analysis (nlPCA) via autoencoders has attr...