Decoding Stacked Denoising Autoencoders

05/10/2016
by Sho Sonoda, et al.

Data representation in a stacked denoising autoencoder is investigated. Decoding is a simple technique for translating a stacked denoising autoencoder into a composition of denoising autoencoders in the ground space. In the infinitesimal limit, a composition of denoising autoencoders reduces to a continuous denoising autoencoder, which is rich in analytic properties and geometric interpretation. For example, the continuous denoising autoencoder solves the backward heat equation and transports each data point so as to decrease the entropy of the data distribution. Together with ridgelet analysis, an integral representation of a stacked denoising autoencoder is derived.
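
As a rough illustration of the transport described in the abstract, the sketch below (not from the paper; the bandwidth t, the layer count, and every function name are illustrative assumptions) composes many small denoising steps of the form x -> x + t * d/dx log p_t(x), which is the well-known optimal denoiser under Gaussian noise of variance t. Composing such steps drives points toward high-density regions, so a plug-in entropy estimate of the data distribution tends to decrease, mirroring the backward-heat-like behaviour of the continuous denoising autoencoder.

```python
# Minimal sketch, assuming the optimal Gaussian-noise denoiser x + t * score(x).
# All parameter values and helper names are illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D two-component Gaussian mixture.
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

def smoothed_score(q, data, t):
    """d/dq log p_t(q) for a Gaussian kernel density estimate with bandwidth sqrt(t)."""
    d = q[:, None] - data[None, :]          # pairwise differences
    w = np.exp(-d**2 / (2.0 * t))           # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)
    return -(w * d).sum(axis=1) / t         # score of the kernel density estimate

def plugin_entropy(q, t):
    """Rough plug-in entropy estimate from the same kernel density (monitoring only)."""
    d = q[:, None] - q[None, :]
    p = np.exp(-d**2 / (2.0 * t)).mean(axis=1) / np.sqrt(2.0 * np.pi * t)
    return -np.mean(np.log(p))

t = 0.05        # noise variance of each denoising-autoencoder layer (assumed)
layers = 20     # number of composed denoising maps (assumed)

for k in range(layers):
    x = x + t * smoothed_score(x, x, t)     # one denoising-autoencoder step
    if k % 5 == 0:
        print(f"layer {k:2d}: entropy estimate {plugin_entropy(x, t):.3f}")
# The printed estimate typically decreases as points contract toward the mixture
# modes, illustrating the entropy-decreasing transport described above.
```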


Related research

Training Stacked Denoising Autoencoders for Representation Learning (02/16/2021)
We implement stacked denoising autoencoders, a class of neural networks ...

Stacked autoencoders based machine learning for noise reduction and signal reconstruction in geophysical data (07/07/2019)
Autoencoders are neural network formulations where the input and output ...

SMS Spam Filtering using Probabilistic Topic Modelling and Stacked Denoising Autoencoder (06/17/2016)
In this paper we present a novel approach to spam filtering and demonstr...

Blind Denoising Autoencoder (12/11/2019)
The term blind denoising refers to the fact that the basis used for deno...

High-dimensional Asymptotics of Denoising Autoencoders (05/18/2023)
We address the problem of denoising data from a Gaussian mixture using a...

Soft-Autoencoder and Its Wavelet Shrinkage Interpretation (12/31/2018)
Deep learning is a main focus of artificial intelligence and has greatly...

Distributed Evolution of Deep Autoencoders (04/16/2020)
Autoencoders have seen wide success in domains ranging from feature sele...
