On denoising autoencoders trained to minimise binary cross-entropy

08/28/2017
by Antonia Creswell, et al.

Denoising autoencoders (DAEs) are powerful deep learning models used for feature extraction, data generation and network pre-training. A DAE consists of an encoder and a decoder that are trained jointly to minimise a loss between an input and the reconstruction of a corrupted version of that input. Two loss functions are commonly used to train autoencoders: the mean-squared error (MSE) and the binary cross-entropy (BCE). When training autoencoders on image data, BCE is a natural choice, since pixel values may be normalised to lie in [0,1] and the decoder may be designed to generate outputs in (0,1). We show theoretically that DAEs trained to minimise BCE may be used to take gradient steps in the data space towards regions of high probability under the data-generating distribution; previously, this had been shown only for DAEs trained using MSE. A consequence of the theory is that iterative application of a trained DAE moves a data sample from regions of low probability towards regions of higher probability under the data-generating distribution. We first validate the theory by showing that novel data samples, consistent with the training data, may be synthesised when the initial samples are random noise. We then motivate the theory's practical use by showing that initial samples synthesised by other methods may be improved through iterative application of a trained DAE.
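The result suggests a simple sampling procedure. Below is a minimal PyTorch sketch, not the authors' code: the architecture, corruption noise level and step count are illustrative assumptions. It shows a DAE whose sigmoid decoder keeps reconstructions in (0,1), trained with BCE against [0,1]-normalised inputs, followed by the iterative application described above, starting from random noise.

    import torch
    import torch.nn as nn

    class DAE(nn.Module):
        """Toy denoising autoencoder; the sigmoid keeps outputs in (0, 1)."""
        def __init__(self, dim=784, hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_step(dae, opt, x, noise_std=0.3):
        # Corrupt the input, reconstruct it, and minimise BCE against
        # the *clean* input.
        x_noisy = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
        loss = nn.functional.binary_cross_entropy(dae(x_noisy), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    @torch.no_grad()
    def iterate_dae(dae, x, n_steps=50):
        # Each pass moves x towards higher probability under the
        # data-generating distribution, so iterating from random noise
        # yields novel, data-like samples.
        for _ in range(n_steps):
            x = dae(x)
        return x

    dae = DAE()
    opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
    # ... train with train_step(dae, opt, batch) on [0,1]-normalised data ...
    samples = iterate_dae(dae, torch.rand(16, 784))  # synthesise from noise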


research · 04/26/2022
Hybridised Loss Functions for Improved Neural Network Generalisation
Loss functions play an important role in the training of artificial neur...

research · 05/16/2023
Outage Performance and Novel Loss Function for an ML-Assisted Resource Allocation: An Exact Analytical Framework
Machine Learning (ML) is a popular tool that will be pivotal in enabling...

research · 08/31/2021
A manifold learning perspective on representation learning: Learning decoder and representations without an encoder
Autoencoders are commonly used in representation learning. They consist ...

research · 02/14/2023
Cauchy Loss Function: Robustness Under Gaussian and Cauchy Noise
In supervised machine learning, the choice of loss function implicitly a...

research · 07/08/2019
Residual Entropy
We describe an approach to improving model fitting and model generalizat...

research · 06/06/2018
Spatial Frequency Loss for Learning Convolutional Autoencoders
This paper presents a learning method for convolutional autoencoders (CA...

research · 08/12/2018
Denoising of 3-D Magnetic Resonance Images Using a Residual Encoder-Decoder Wasserstein Generative Adversarial Network
Structure-preserved denoising of 3-D magnetic resonance images (MRI) is ...
