Relational Autoencoder for Feature Extraction

02/09/2018 ∙ by Qinxue Meng, et al. ∙ University of Technology Sydney ∙ Queen's University ∙ New South Wales Government

Feature extraction becomes increasingly important as data grows high-dimensional. The autoencoder, a neural-network-based feature extraction method, has achieved great success in generating abstract features of high-dimensional data. However, it fails to consider the relationships among data samples, which may affect the results of experiments that use the original and the new features. In this paper, we propose a Relational Autoencoder model that considers both data features and their relationships. We also extend it to work with other major autoencoder models, including the Sparse Autoencoder, the Denoising Autoencoder and the Variational Autoencoder. The proposed relational autoencoder models are evaluated on a set of benchmark datasets, and the experimental results show that considering data relationships generates more robust features which achieve lower reconstruction loss and, in turn, lower error rates in subsequent classification compared with the other variants of autoencoders.


I Introduction

As data becomes increasingly high-dimensional, as with genomic information, images, videos and text, reducing dimensionality to generate a high-level representation is considered not only an important but also a necessary preprocessing step. Although machine learning models should, in theory, be able to work with any number of features, high-dimensional datasets bring a series of problems, including over-fitting, high computational complexity and overly complicated models, collectively known as the curse of dimensionality [1]. Another reason for reducing dimensionality is that high-level representations can help people better understand the intrinsic structure of data.

Various methods of dimensionality reduction [2, 3, 4, 5, 6] have been proposed, and they can be roughly divided into two categories: feature selection and feature extraction. The major difference between them lies in using part or all of the input features. For example, given a dataset X with a feature set F, feature selection finds a subset of features F_s from all features F (F_s ⊆ F), where the number of selected features is smaller than the original (|F_s| < |F|), while feature extraction generates a new set of features F_e which are combinations of the original ones F. Generally the new features are different from the original features (F_e ∩ F = ∅), and the number of new features is, in most cases, smaller than the number of original features (|F_e| < |F|). Although feature selection methods, such as Subset Selection [7] and Random Forest [8], are effective in filtering out unnecessary features, when all input features contribute to the final results to some extent, feature selection is more likely than feature extraction to cause information loss, because it fails to use all of them. Therefore, the research focus of dimensionality reduction has been moving, though not entirely, towards feature extraction.

The initial feature extraction methods [9] were based on projection, mapping input features from the original high-dimensional space to a new low-dimensional space while minimizing information loss. The most famous projection methods are Principal Component Analysis (PCA) [10] and Linear Discriminant Analysis (LDA) [11]. The former is an unsupervised method that projects the original data onto its principal directions by maximizing variance. The latter is a supervised method that finds a linear subspace optimized for discriminating data from different classes. The major drawback of these methods is that they project linearly. Subsequent studies [12, 13, 14, 15, 16] overcome this issue by employing non-linear kernel functions.

However, projecting from a high-dimensional space to a low-dimensional one makes it hard to maintain the relationships among data samples; changing these relationships leads to different results when using the original versus the new features. Therefore, exploiting data relationships has recently been a major focus of dimensionality reduction research. Multidimensional Scaling (MDS) [17] considers the relationships among data samples by transforming distances into similarities for visualization. ISOMAP [18] learns low-dimensional features by retaining the geodesic distances among data samples. Locally Linear Embedding (LLE) [19] preserves data relationships by embedding the local neighbourhood when mapping to the low-dimensional space. Laplacian Eigenmaps (LE) [20] minimize the pairwise distances in the projected space, weighted by the corresponding distances in the original space. One issue with these methods is that they generally have a fixed or predefined way of capturing local data relationships in the high-dimensional space, which may not be accurate or valid in the low-dimensional space. Another issue is that these studies map data from the high-dimensional space to the low-dimensional space by extracting features once, instead of stacking extractions to gradually generate deeper levels of representation.

Autoencoders [21] use artificial neural networks to reduce dimensionality by minimizing reconstruction loss, which makes them easy to stack by adding hidden layers. Because of this, autoencoders and their extensions, such as Sparse Autoencoders [22], Denoising Autoencoders [23], Contractive Autoencoders [24] and Variational Autoencoders [25], demonstrate a promising ability to extract meaningful features, especially in image processing and natural language processing. Yet these methods only consider data reconstruction and do not explicitly model data relationships. A recent study [26] proposes a Generalized Autoencoder (GAE), which modifies the autoencoder to consider data relationships by adding a weighted relational function; specifically, it minimizes the weighted distances between reconstructed instances and the original ones. Although subsequent applications [27, 28, 29, 30, 31, 32] confirm that considering data relationships can improve the performance of autoencoders in dimensionality reduction, the Generalized Autoencoder model has some drawbacks. Firstly, determining the weight for each pairwise distance is very challenging, and this practice risks turning GAE into a supervised model when some relationships are over-emphasized and others are neglected. Meanwhile, focusing on minimizing the loss of data relationships while ignoring the loss of the data itself is very likely to lose information. For example, some data attributes contribute more to data relationships than others; focusing on reconstructing data relationships may emphasize those attributes and ignore the rest.

To deal with the above issues, we propose a Relational Autoencoder (RAE) for dimensionality reduction. Our contributions are summarized as follows:

  1. RAE considers both data features and their relationships by minimizing the reconstruction loss of both.

  2. To alleviate the increased computational complexity of considering data relationships, the proposed Relational Autoencoder model can filter out weak and trivial relationships by adding activation functions.

  3. The relationship model has been extended to work with other baseline autoencoder models including Sparse Autoencoder (SAE), Denoising Autoencoder (DAE) and Variational Autoencoder (VAE).

In this paper, we comprehensively evaluate the proposed Relational Autoencoder on a set of benchmark datasets. Specifically, we first compare the performance of our proposed model with a recently published relationship-based autoencoder model (GAE). Then we compare the performance of the baseline autoencoder models with their extended versions.

The rest of the paper begins by reviewing related work on autoencoders in Section II, followed by the problem definition and basic autoencoder models in Section III. Section IV presents the proposed Relational Autoencoder model and its extensions of the other baseline autoencoder models. The experimental datasets and results are covered in Sections V and VI respectively. Finally, we discuss the proposed method and conclude the paper in Section VII.

II Literature Review

The autoencoder was initially introduced in the late 1980s [33] as a linear feature extraction method. Unlike latent space approaches which map data into a high-dimensional space, the autoencoder aims to learn a simpler representation of data by mapping the original data into a low-dimensional space. Its main principle follows from the name: "auto" indicates that the method is unsupervised, and "encoder" indicates that it learns another representation of the data. Specifically, the autoencoder learns an encoded representation by minimizing the loss between the original data and the data decoded from this representation. In 1989, Baldi and Hornik [34] investigated the relationship between a one-layer autoencoder and principal component analysis (PCA) and found that the new compressed features learnt by a linear autoencoder are similar to principal components. However, the computational complexity of training an autoencoder is much higher than that of PCA, whose main cost is matrix decomposition. This limits the application of linear autoencoders.

Later, with the introduction of non-linear activation functions, autoencoders became non-linear and capable of learning more useful features [35] than linear feature extraction methods. Non-linear autoencoders hold no advantage over other non-linear feature extraction methods, however, as they take a long time to train.

The recent revival of interest in autoencoders stems from the success of effectively training deep architectures, as traditional gradient-based optimization strategies are not effective when hidden layers with non-linearities are stacked multiple times. In 2006, Hinton [36] empirically trained a deep network architecture by sequentially and greedily optimizing the weights of each layer in a Restricted Boltzmann Machine (RBM) model to achieve global gradient-based optimization. Bengio [37] achieved success in training a stacked autoencoder on the MNIST dataset with a similar greedy layerwise approach. This training approach overcomes the problematic non-convex optimization which had prevented deep network structures. Subsequent studies show that stacked autoencoder models can learn meaningful, abstract features and thus achieve better classification results on high-dimensional data such as images and texts [38, 39, 40, 41]. The training efficiency of stacked autoencoders was further improved by changing the layer weight initialization [42].

As training efficiency improved, autoencoders became increasingly popular. It was soon found that as layers are stacked, the weights of deeper layers grow sharply through matrix multiplication, so that these weights come to matter more than the initial input features. This overfitting issue means that the representations of deep layers are more likely to depend on the network structure than on the initial input features. Poultney et al. [43] presented an idea to increase the sparsity of the network structure so as to limit the growth of weights. Goodfellow et al. [44] added a regularization term to the loss function of the autoencoder to impose a penalty on large weights. Vincent et al. [45] proposed the Denoising Autoencoder (DAE), which addresses this issue by adding noise to the input: the DAE model aims to reconstruct the original input from a corrupted version, typically corrupted by adding Gaussian noise. These studies exert no control over the distribution of the hidden-layer representations, so the Variational Autoencoder (VAE) [25] was proposed to generate desired distributions of representations in hidden layers.

Autoencoders perform an unsupervised dimensionality reduction, and the reduced high-level features generally retain the major information of the original data. This makes autoencoders insensitive to slight variations. To make them sensitive to slight variations, Rifai et al. [24] proposed the Contractive Autoencoder (CAE). In fact, maintaining the mutual relationships among data samples is an effective way of making autoencoders sensitive to slight variations. Because of this, Wang et al. [26] proposed the Generalized Autoencoder (GAE), which targets reconstructing the data relationships instead of the data features. A series of applications of GAE [27, 28, 29, 31, 32] confirm that maintaining data relationships can achieve better results, but the results depend heavily on how the distance weights are defined and chosen. The practice of using sophisticated, pre-defined distance weights risks converting GAE into a supervised model, because assigning high weights to selected relationships is very subjective and may be biased. As a result, we propose a Relational Autoencoder (RAE) for dimensionality reduction that minimizes the loss of both data features and data relationships.

III Preliminaries

This section begins with a definition of feature extraction followed by a description of basic autoencoder models before presenting our proposed Relational Autoencoder (RAE) model.

Fig. 1: The traditional autoencoder generates a new feature set Y by minimizing the reconstruction loss between X and X'.

III-A Feature Extraction

Feature extraction transforms data from the original high-dimensional space to a relatively low-dimensional space; this transformation can be linear or non-linear. Specifically, given a dataset X with n samples and m features, the original feature set is denoted F, and a feature extraction function φ generates a new feature set F' = φ(F), where |F'| < |F|. Generally, the objective functions of feature extraction methods minimize the difference between the original space and the new space, and changing the objective function can convert feature extraction from an unsupervised method into a supervised one, such as Linear Discriminant Analysis (LDA).

Fig. 2: The relational autoencoder generates a new feature set Y by minimizing both the loss of reconstructing X and the loss of maintaining the relationships among X.

III-B Basic Autoencoder

Simply put, an autoencoder (AE) is composed of two parts: an encoder and a decoder. Given a dataset X with n samples and m features, the output of the encoder Y represents the reduced representation of X, and the decoder is tuned to reconstruct the original dataset X from the encoder's representation Y by minimizing the difference between X and the reconstruction X', as illustrated in Fig. 1. Specifically, the encoder is a function f that maps an input X to a hidden representation Y. The process is formulated as

Y = f(X) = s_f(WX + b_X)    (1)

where s_f is a nonlinear activation function; if it is the identity function, the autoencoder performs linear projection. The encoder is parameterized by a weight matrix W and a bias vector b_X.

The decoder function g maps the hidden representation Y back to a reconstruction X':

X' = g(Y) = s_g(W'Y + b_Y)    (2)

where s_g is the decoder's activation function, typically either the identity (yielding linear reconstruction) or a sigmoid. The decoder's parameters are a bias vector b_Y and a weight matrix W'. In this paper we only explore the tied-weights case where W' = W^T.

Training an autoencoder involves finding parameters θ = (W, b_X, b_Y) that minimize the reconstruction loss on the given dataset X, and the objective function is given as

θ = arg min_θ L(X, X') = arg min_θ L(X, g(f(X)))    (3)

For linear reconstruction, the reconstruction loss L is generally the squared error:

L(X, X') = Σ_{i=1}^{n} ||x_i − x'_i||^2    (4)

For nonlinear reconstruction, the reconstruction loss L is generally the cross-entropy:

L(X, X') = −Σ_{i=1}^{n} [ x_i log(x'_i) + (1 − x_i) log(1 − x'_i) ]    (5)

where x'_i = g(f(x_i)), and x_i, x'_i ∈ [0, 1].
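
For concreteness, a minimal PyTorch sketch of this basic autoencoder with tied weights (Eqs. 1-4) might look as follows; the class and variable names are illustrative and not the implementation used in the experiments.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BasicAutoencoder(nn.Module):
        """Single-layer autoencoder with tied weights W' = W^T (Eqs. 1-2)."""
        def __init__(self, m, k):  # m input features, k hidden neurons
            super().__init__()
            self.W = nn.Parameter(torch.empty(k, m))   # shared encoder/decoder weight
            nn.init.xavier_uniform_(self.W)
            self.b_enc = nn.Parameter(torch.zeros(k))  # encoder bias b_X
            self.b_dec = nn.Parameter(torch.zeros(m))  # decoder bias b_Y

        def encode(self, x):
            # Y = s_f(W X + b_X), Eq. (1), with sigmoid as s_f
            return torch.sigmoid(F.linear(x, self.W, self.b_enc))

        def decode(self, y):
            # X' = s_g(W^T Y + b_Y), Eq. (2), tied weights
            return torch.sigmoid(F.linear(y, self.W.t(), self.b_dec))

        def forward(self, x):
            return self.decode(self.encode(x))

    def reconstruction_loss(x, x_rec):
        # Squared-error loss of Eq. (4)
        return ((x - x_rec) ** 2).sum(dim=1).mean()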

III-C Stacked Autoencoder

As a neural-network-based feature extraction method, a major advantage of the autoencoder is that it is easy to stack, generating different levels of new features to represent the original ones by adding hidden layers. For an n-layer stacked autoencoder, the encoding process is

Y = f_n( ... f_2( f_1(X) ) )    (6)

where f_i is the encoding function of layer i. The corresponding decoding process is

X' = g_1( g_2( ... g_n(Y) ) )    (7)

where g_i is the decoding function of layer i, and the stacked autoencoder can be trained by a greedy layerwise feed-forward approach.
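
Continuing the sketch above, stacking amounts to composing the layer-wise encode and decode functions of Eqs. (6) and (7); joint training is shown here for brevity, though the layers can equally be trained greedily one at a time.

    class StackedAutoencoder(nn.Module):
        """Stack of tied-weight layers; encode applies f_1..f_n,
        decode applies g_n first and g_1 last (Eqs. 6-7)."""
        def __init__(self, sizes):  # e.g. sizes = [784, 392, 196]
            super().__init__()
            self.layers = nn.ModuleList(
                [BasicAutoencoder(m, k) for m, k in zip(sizes, sizes[1:])])

        def encode(self, x):
            for layer in self.layers:            # Y = f_n(... f_1(X))
                x = layer.encode(x)
            return x

        def decode(self, y):
            for layer in reversed(self.layers):  # X' = g_1(... g_n(Y))
                y = layer.decode(y)
            return y

        def forward(self, x):
            return self.decode(self.encode(x))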

IV Relational Autoencoder (RAE)

The traditional autoencoder generates new features by minimizing the reconstruction loss of the data. This motivates us to propose a Relational Autoencoder (RAE) that minimizes the reconstruction loss of both the data and their relationships. The objective function of RAE is defined as

Θ = arg min_θ α L(X, X') + (1 − α) L(R(X), R(X'))    (8)

where R(X) represents the relationships among the data samples in X and R(X') represents the relationships among the data samples in X', as illustrated in Fig. 2. The parameter α is a scale parameter that controls the weights of the data reconstruction loss and the relationship reconstruction loss, and θ is the neural network parameter of the autoencoder.

Data relationships can be modelled in multiple ways. In this paper, we represent data relationships by sample similarities, taking R(X) = XX^T, the product of X and its transpose. The objective function then becomes

Θ = arg min_θ α L(X, X') + (1 − α) L(XX^T, X'X'^T)    (9)

To improve computational efficiency and filter out unnecessary relationships, we use an activation function to control the weights of the similarities; in this paper, we use the rectifier function [46]:

τ_t(x) = x if x ≥ t, 0 otherwise    (10)

where t is a threshold that filters out weak and trivial relationships. The objective function of RAE is then defined as

Θ = arg min_θ α L(X, X') + (1 − α) L(τ_t(XX^T), τ_t(X'X'^T))    (11)

In this paper, we choose the squared error as the loss function L.
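
A sketch of this objective, reusing the PyTorch setup above, is given below; it averages rather than sums the squared errors, which only rescales Eq. (11), and alpha and t stand for the scale parameter α and the rectifier threshold.

    def rectified_similarity(x, t):
        # R(X) = X X^T with weak similarities (below t) zeroed out, Eq. (10)
        r = x @ x.t()
        return torch.where(r >= t, r, torch.zeros_like(r))

    def rae_loss(model, x, alpha, t):
        # Eq. (11): alpha * data reconstruction loss
        #           + (1 - alpha) * relationship reconstruction loss
        x_rec = model(x)
        data_loss = ((x - x_rec) ** 2).mean()
        rel_loss = ((rectified_similarity(x, t)
                     - rectified_similarity(x_rec, t)) ** 2).mean()
        return alpha * data_loss + (1 - alpha) * rel_loss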

The pseudo-code of the proposed Relational Autoencoder (RAE) is given in Algorithm 1. The input parameters are the input dataset X, the threshold t of the rectifier function, the regularization weight α, the numbers of hidden neurons in the layers N, the activation function s, and a threshold ε used to determine whether the loss has converged. Among them, N is a vector and N_i is the number of neurons in the i-th layer, where i runs from 1 to n. The proposed RAE model starts by defining the loss function L (Equation 11). It then initializes a neural network model and iteratively adds layers to it based on N and s. The network is trained by stochastic gradient descent (SGD), updating the parameters θ of the loss function L until the difference between the current-loop loss (loss_c) and the previous-loop loss (loss_p) is smaller than the predefined threshold ε, which indicates convergence.

1:function RAE(X, t, α, N, s, ε)
2:     L = Equ. 11;
3:     loss_p = 0;
4:     loss_c = 0;
5:     Model = NN(); ▷ initialize a neural network model
6:     for i = 1 to n do
7:         Model.addLayer(N_i, s)
8:     end for
9:     while True do
10:         loss_c = Model.train(L, SGD);
11:         if |loss_c − loss_p| < ε then
12:              Break;
13:         else
14:              loss_p = loss_c;
15:         end if
16:     end while
17:     return Model
18:end function
Algorithm 1 Iterative learning procedure of RAE
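
A direct Python rendering of Algorithm 1's convergence-controlled training loop, reusing rae_loss from the sketch above; the hyper-parameter defaults are illustrative only.

    def train_rae(model, x, alpha=0.5, t=0.1, lr=0.01, eps=1e-4, max_epochs=400):
        # Train by SGD until the loss change falls below eps (Algorithm 1)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_p = float('inf')
        for _ in range(max_epochs):
            opt.zero_grad()
            loss_c = rae_loss(model, x, alpha, t)
            loss_c.backward()
            opt.step()
            if abs(loss_p - loss_c.item()) < eps:  # converged
                break
            loss_p = loss_c.item()
        return model

    # Usage sketch: model = StackedAutoencoder([784, 392, 196]); train_rae(model, x)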

IV-A Extension to Sparse Autoencoder (SAE)

The objective function of the autoencoder reconstructs the input. During training, the weights of hidden neurons combine those of the previous layers, and these weights grow as the layers get deeper. High hidden-layer weights make the generated features depend more on the network structure than on the input. To avoid this, the Sparse Autoencoder (SAE) imposes weight-decay regularization to keep the neuron weights small. The objective function of SAE is

Θ = arg min_θ L(X, X') + β ||W||^2    (12)

where ||W||^2 is a weight-decay regularization term that keeps the elements of the weight matrix W small, and β is a hyper-parameter that controls the strength of the regularization. We extend the Sparse Autoencoder (SAE) to a Relational Sparse Autoencoder (RSAE) model by considering data relationships. The objective function is defined as

Θ = arg min_θ α L(X, X') + (1 − α) L(τ_t(XX^T), τ_t(X'X'^T)) + β ||W||^2    (13)
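
Continuing the sketches above, Eq. (13) adds only a weight-decay term to the RAE loss; a largely equivalent effect could be achieved with the weight_decay argument of PyTorch's SGD optimizer.

    def rsae_loss(model, x, alpha, t, beta):
        # Eq. (13): RAE loss plus a weight-decay penalty beta * ||W||^2,
        # applied to the (tied) weight matrices named 'W' in the sketch above
        weight_penalty = sum((p ** 2).sum()
                             for name, p in model.named_parameters()
                             if name.endswith('W'))
        return rae_loss(model, x, alpha, t) + beta * weight_penalty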

IV-B Extension to Denoising Autoencoder (DAE)

The Denoising Autoencoder (DAE) generates better features by imposing an alternative form of regularization. Its main principle is to corrupt a part of the input features of a given dataset X before sending it to the autoencoder, train the network to reconstruct the destroyed input, and minimize the loss between the reconstruction and the original X. The objective function of DAE is

Θ = arg min_θ L(X, g(f(X̃)))    (14)

where X̃ is corrupted from X by a stochastic corruption process, and the objective function is optimized by stochastic gradient descent. Here we extend the proposed Relational Autoencoder model to a Relational Denoising Autoencoder (RDAE) model by considering data relationships. The model minimizes both the loss between the data X and the reconstruction from the corrupted data X̃, and the loss between the data relationships R(X) and the relationships reconstructed from the corrupted data. The objective function is defined as

Θ = arg min_θ α L(X, g(f(X̃))) + (1 − α) L(τ_t(XX^T), τ_t(g(f(X̃)) g(f(X̃))^T))    (15)

In this paper, we consider corruption by additive isotropic Gaussian noise: X̃ = X + Δ, where Δ ∼ N(0, δ²I) and δ is the standard deviation of X.
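
A sketch of the RDAE objective under this corruption model, reusing the helpers above; sigma defaults to the standard deviation of X, matching the text.

    def rdae_loss(model, x, alpha, t, sigma=None):
        # Eq. (15): reconstruct the clean X (and its relationships)
        # from an input corrupted with additive isotropic Gaussian noise
        sigma = x.std() if sigma is None else sigma
        x_tilde = x + sigma * torch.randn_like(x)
        x_rec = model(x_tilde)
        data_loss = ((x - x_rec) ** 2).mean()
        rel_loss = ((rectified_similarity(x, t)
                     - rectified_similarity(x_rec, t)) ** 2).mean()
        return alpha * data_loss + (1 - alpha) * rel_loss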

IV-C Extension to Variational Autoencoder (VAE)

The Variational Autoencoder (VAE) differs from the other autoencoder models: it makes a strong assumption about the distribution of the latent neurons and tries to minimize the difference between a posterior distribution and the distribution of the latent neurons, with the difference measured by the Kullback-Leibler divergence [47]. The objective function of VAE is

L(X, X') = D_KL(q(Y|X) || p(Y)) − E_{q(Y|X)}[ log p(X|Y) ]    (16)

where q(Y|X) is the encoding process, which computes the probability of the hidden representation Y given the input X, while p(X|Y) is the decoding process that reconstructs X. Generally p(Y) is a predefined Gaussian distribution, such as N(0, I). The extended Relational Variational Autoencoder (RVAE) is therefore defined as

Θ = arg min_θ α ( D_KL(q(Y|X) || p(Y)) − E_{q(Y|X)}[ log p(X|Y) ] ) + (1 − α) L(τ_t(XX^T), τ_t(X'X'^T))    (17)
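
One way to realize Eq. (17), under our reading that the relationship term of Eq. (11) is weighted against the standard VAE loss, is sketched below; the Gaussian encoder and a plain linear decoder (e.g. decoder = nn.Linear(k, m)) are illustrative choices, not the paper's architecture.

    class GaussianEncoder(nn.Module):
        # q(Y|X) = N(mu(X), diag(exp(logvar(X)))), reparameterization trick
        def __init__(self, m, k):
            super().__init__()
            self.mu = nn.Linear(m, k)
            self.logvar = nn.Linear(m, k)

        def forward(self, x):
            mu, logvar = self.mu(x), self.logvar(x)
            y = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return y, mu, logvar

    def rvae_loss(encoder, decoder, x, alpha, t):
        # Eq. (17) sketch: VAE loss (reconstruction + KL to the N(0, I) prior)
        # weighted by alpha, plus the relationship term of Eq. (11)
        y, mu, logvar = encoder(x)
        x_rec = torch.sigmoid(decoder(y))
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
        vae_loss = ((x - x_rec) ** 2).mean() + kl
        rel_loss = ((rectified_similarity(x, t)
                     - rectified_similarity(x_rec, t)) ** 2).mean()
        return alpha * vae_loss + (1 - alpha) * rel_loss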

V Datasets

The datasets used in this paper to evaluate the proposed Relational Autoencoder are two image datasets, MNIST (http://yann.lecun.com/exdb/mnist/) and CIFAR-10 (https://www.cs.toronto.edu/~kriz/cifar.html). The MNIST dataset is a well-known database of handwritten digits which contains a training set of 60,000 examples and a test set of 10,000 examples. The CIFAR-10 dataset contains 60,000 images labelled with 10 classes, each class having 6,000 images; its training set has 50,000 samples and its test set has 10,000 samples.

VI Experiment

The experiment first compares the performance of the proposed Relational Autoencoder (RAE) against the basic autoencoder (BAE) and the Generalized Autoencoder (GAE) in terms of reconstruction loss and classification accuracy [48, 49]. We then compare the performance of the extended Relational Sparse Autoencoder (RSAE), Relational Denoising Autoencoder (RDAE) and Relational Variational Autoencoder (RVAE) with their corresponding original versions to estimate, in depth, the effect of considering relationships.

Fig. 3: Comparative reconstruction loss on (a) MNIST and (b) CIFAR-10 for the Relational Autoencoder (RAE), the basic autoencoder (BAE) and the Generalized Autoencoder (GAE), showing how information loss changes as the scale parameter α changes.

VI-A Experiment Settings

All autoencoder models were tested with the same configuration on the same dataset. Specifically, we use tied weights (W' = W^T) in building the network structure. The activation function of each layer is the sigmoid for both the encoder and the decoder. The loss functions being optimized are listed and described in Section IV, and the models are trained by stochastic gradient descent (SGD) for 400 epochs. The number of neurons in each layer is determined as

m_{l+1} = ⌈ m_l / 2 ⌉    (18)

where m_l is the number of neurons in layer l and m_{l+1} is the number of neurons in layer l + 1. As the network goes deeper, the number of neurons gradually decreases to a predefined threshold. To increase training efficiency, we use the Xavier method [50] to initialize the layer weights:

W^{(l)} ∼ U( −1/√(m_{l−1}), 1/√(m_{l−1}) )    (19)

where W^{(l)} is the weight matrix of layer l, U( −1/√(m_{l−1}), 1/√(m_{l−1}) ) is the uniform distribution on that interval, and m_{l−1} is the number of neurons in the previous layer.
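
The two rules above can be sketched as follows; the halving in layer_sizes is our reading of Eq. (18), and xavier_uniform follows the 1/sqrt(fan_in) form described in the text.

    def layer_sizes(m, threshold):
        # Halve the layer width until it would drop below the predefined
        # threshold (the halving rule is our reading of Eq. 18)
        sizes = [m]
        while sizes[-1] // 2 >= threshold:
            sizes.append(sizes[-1] // 2)
        return sizes

    def xavier_uniform(fan_in, fan_out):
        # Eq. (19): W ~ U(-1/sqrt(m_{l-1}), 1/sqrt(m_{l-1})),
        # where fan_in is the number of neurons in the previous layer
        bound = fan_in ** -0.5
        return (2 * torch.rand(fan_out, fan_in) - 1) * bound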

All autoencoders are trained with their respective loss functions, but to compare their feature extraction performance, the reconstruction loss is measured by the Mean Squared Error (MSE). Classification is done by softmax regression on the features extracted by the autoencoder models. To estimate the generalization of the models, we use 10-fold cross-validation in both unsupervised feature extraction and classification.
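
A sketch of this evaluation protocol using scikit-learn (an assumption; the paper does not name its tooling), where features are the encodings extracted by an autoencoder and labels are the class labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    def classification_error(features, labels):
        # Softmax regression on the extracted features, 10-fold cross-validation
        errors = []
        for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                         random_state=0).split(features):
            clf = LogisticRegression(max_iter=1000)  # multinomial (softmax)
            clf.fit(features[train_idx], labels[train_idx])
            errors.append(1.0 - clf.score(features[test_idx], labels[test_idx]))
        return float(np.mean(errors))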

VI-B Comparison with BAE and GAE

We first compare the performance of our proposed Relational Autoencoder (RAE) with the basic autoencoder (BAE) and the Generalized Autoencoder (GAE) in terms of reconstruction on MNIST and CIFAR-10. For GAE, we set the similarity to be the weight of each pairwise relationship. In the experiment, we explore different values of the scaling parameter α, which determines the weights of reconstructing the data and the relationships (Equation 11); the value of α ranges from 0 to 1 in steps of 0.02. Because BAE and GAE have no such parameter, their reconstruction loss does not change as α changes. The results are illustrated in Fig. 3.

We observe that the reconstruction loss of GAE is generally lower than that of BAE, which confirms that considering data relationships can reduce information loss in the process of encoding and decoding. The performance of the proposed RAE autoencoder model changes with the scaling parameter α. When α is small, RAE focuses on reconstructing the data rather than the relationships and generates results similar to BAE. As α increases, the reconstruction loss of RAE continuously decreases until it reaches a trough; as α keeps increasing beyond that point, the information loss rises again because the model begins to over-emphasize relationship reconstruction. Interestingly, even when the proposed RAE model considers relationships only, it still performs better than GAE, because RAE uses the activation function to filter out weak relationships. Thus the performance of the proposed RAE autoencoder model is determined by the scaling parameter α, and the best value of α depends on the dataset.

Another interesting finding is that the proposed RAE model achieves better results on CIFAR-10 than on MNIST. This may be because CIFAR-10 contains more complex images than MNIST, and for complex datasets maintaining data relationships matters more. For classification on MNIST, RAE achieves the lowest error rate (3.8%), followed by GAE (5.7%) and BAE (8.9%).

Model     MNIST             CIFAR-10
          Loss     Error    Loss     Error
RAE       0.677    3.8%     0.281    12.7%
BAE       0.813    8.9%     0.682    15.6%
GAE       0.782    5.7%     0.574    14.9%
RSAE      0.296    1.8%     0.292    13.4%
SAE       0.312    2.2%     0.331    14.2%
RDAE      0.217    1.1%     0.216    10.5%
DAE       0.269    1.6%     0.229    11.7%
RVAE      0.183    0.9%     0.417    17.3%
VAE       0.201    1.2%     0.552    21.2%
TABLE I: Experimental results of autoencoder models

VI-C Comparing the Extended Autoencoder Variants to Their Original Versions

In this experiment, we compare the performance of the extended variants of the autoencoders to their original versions; detailed results are listed in Table I. We observe that considering data relationships decreases the reconstruction loss, suggesting that the autoencoders generate more robust and meaningful features with less information loss, and these features are the key to achieving good classification results.

VII Conclusion

In this paper, we propose a Relational Autoencoder model which can extract high-level features based on both the data itself and the relationships among data samples. We extend this principle to other major autoencoder models, including the Sparse Autoencoder, the Denoising Autoencoder and the Variational Autoencoder, so as to enable them to consider data relationships as well. The proposed relational autoencoder models are evaluated on the MNIST and CIFAR-10 datasets, and the experimental results show that considering data relationships decreases reconstruction loss and therefore generates more robust features. Better features contribute to improved classification results.

References

  • [1] P. Indyk and R. Motwani, “Approximate nearest neighbors: towards removing the curse of dimensionality,” in Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing.   ACM, 1998, pp. 604–613.
  • [2] F. Wang and J. Sun, “Survey on distance metric learning and dimensionality reduction in data mining,” Data Mining and Knowledge Discovery, vol. 29, no. 2, pp. 534–564, 2015.
  • [3] J. P. Cunningham and Z. Ghahramani, “Linear dimensionality reduction: Survey, insights, and generalizations,” Journal of Machine Learning Research, vol. 16, pp. 2859–2900, 2015.
  • [4] N. Akkarapatty, A. Muralidharan, N. S. Raj, and P. Vinod, “Dimensionality reduction techniques for text mining,” Collaborative Filtering Using Data Mining and Analysis, p. 49, 2016.
  • [5] J. Wu, Z. Hong, S. Pan, X. Zhu, Z. Cai, and C. Zhang, “Multi-graph-view learning for graph classification,” in 2014 IEEE International Conference on Data Mining, 2014, pp. 590–599.
  • [6] J. Wu, Z. Cai, S. Zeng, and X. Zhu, “Artificial immune system for attribute weighted naive bayes classification,” in The 2013 International Joint Conference on Neural Networks (IJCNN), 2013, pp. 1–8.
  • [7] G. H. John, R. Kohavi, K. Pfleger et al., “Irrelevant features and the subset selection problem,” in Machine learning: proceedings of the eleventh international conference, 1994, pp. 121–129.
  • [8] A. Liaw and M. Wiener, “Classification and regression by randomforest,” R news, vol. 2, no. 3, pp. 18–22, 2002.
  • [9] S. Khalid, T. Khalil, and S. Nasreen, “A survey of feature selection and feature extraction techniques in machine learning,” in Science and Information Conference (SAI), 2014.   IEEE, 2014, pp. 372–378.
  • [10] U. Demšar, P. Harris, C. Brunsdon, A. S. Fotheringham, and S. McLoone, “Principal component analysis on spatial data: an overview,” Annals of the Association of American Geographers, vol. 103, no. 1, pp. 106–128, 2013.
  • [11] A. Sharma and K. K. Paliwal, “Linear discriminant analysis for the small sample size problem: an overview,” International Journal of Machine Learning and Cybernetics, vol. 6, no. 3, pp. 443–454, 2015.
  • [12] B. Schölkopf, A. Smola, and K.-R. Müller, “Kernel principal component analysis,” in International Conference on Artificial Neural Networks.   Springer, 1997, pp. 583–588.
  • [13] J.-M. Lee, C. Yoo, S. W. Choi, P. A. Vanrolleghem, and I.-B. Lee, “Nonlinear process monitoring using kernel principal component analysis,” Chemical Engineering Science, vol. 59, no. 1, pp. 223–234, 2004.
  • [14] P. Honeine, “Online kernel principal component analysis: A reduced-order model,” IEEE transactions on pattern analysis and machine intelligence, vol. 34, no. 9, pp. 1814–1826, 2012.
  • [15] L. Lebart, “Correspondence analysis,” in Data Science, Classification, and Related Methods: Proceedings of the Fifth Conference of the International Federation of Classification Societies (IFCS-96), Kobe, Japan, March 27–30, 1996.   Springer Science & Business Media, 2013, p. 423.
  • [16] D. Lopez-Paz, S. Sra, A. J. Smola, Z. Ghahramani, and B. Schölkopf, “Randomized nonlinear component analysis.” in ICML, 2014, pp. 1359–1367.
  • [17] F. W. Young, Multidimensional scaling: History, theory, and applications.   Psychology Press, 2013.
  • [18] J. B. Tenenbaum, V. De Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” science, vol. 290, no. 5500, pp. 2319–2323, 2000.
  • [19] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
  • [20] M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural computation, vol. 15, no. 6, pp. 1373–1396, 2003.
  • [21] D. E. Rumelhart and J. L. McClelland, Learning Internal Representations by Error Propagation.   MIT Press, 1986, pp. 318–362.
  • [22] J. Deng, Z. Zhang, E. Marchi, and B. Schuller, “Sparse autoencoder-based feature transfer learning for speech emotion recognition,” in Affective Computing and Intelligent Interaction (ACII), 2013 Humaine Association Conference on.   IEEE, 2013, pp. 511–516.
  • [23] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting and composing robust features with denoising autoencoders,” in Proceedings of the 25th international conference on Machine learning.   ACM, 2008, pp. 1096–1103.
  • [24] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive auto-encoders: Explicit invariance during feature extraction,” in Proceedings of the 28th international conference on machine learning (ICML-11), 2011, pp. 833–840.
  • [25] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
  • [26] W. Wang, Y. Huang, Y. Wang, and L. Wang, “Generalized autoencoder: a neural network framework for dimensionality reduction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 490–497.
  • [27] Z. Camlica, H. Tizhoosh, and F. Khalvati, “Autoencoding the retrieval relevance of medical images,” in Image Processing Theory, Tools and Applications (IPTA), 2015 International Conference on.   IEEE, 2015, pp. 550–555.
  • [28] S. Stober, A. Sternin, A. M. Owen, and J. A. Grahn, “Deep feature learning for eeg recordings,” arXiv preprint arXiv:1511.04306, 2015.
  • [29] L. Gao, J. Song, X. Liu, J. Shao, J. Liu, and J. Shao, “Learning in high-dimensional multimedia data: the state of the art,” Multimedia Systems, pp. 1–11, 2015.
  • [30] J. Wu and Z. Cai, “A naive bayes probability estimation model based on self-adaptive differential evolution,” J. Intell. Inf. Syst., vol. 42, no. 3, pp. 671–694, 2014.
  • [31] Y. Wang, H. Yao, S. Zhao, and Y. Zheng, “Dimensionality reduction strategy based on auto-encoder,” in Proceedings of the 7th International Conference on Internet Multimedia Computing and Service.   ACM, 2015, p. 63.
  • [32] L. Meng, S. Ding, and Y. Xue, “Research on denoising sparse autoencoder,” International Journal of Machine Learning and Cybernetics, pp. 1–11, 2016.
  • [33] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, pp. 533–536, 1986.
  • [34] P. Baldi and K. Hornik, “Neural networks and principal component analysis: Learning from examples without local minima,” Neural networks, vol. 2, no. 1, pp. 53–58, 1989.
  • [35] N. Japkowicz, S. J. Hanson, and M. A. Gluck, “Nonlinear autoassociation is not equivalent to pca,” Neural computation, vol. 12, no. 3, pp. 531–545, 2000.
  • [36] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
  • [37] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio, “An empirical evaluation of deep architectures on problems with many factors of variation,” in Proceedings of the 24th international conference on Machine learning.   ACM, 2007, pp. 473–480.
  • [38] K. Jarrett, K. Kavukcuoglu, Y. Lecun et al., “What is the best multi-stage architecture for object recognition?” in 2009 IEEE 12th International Conference on Computer Vision.   IEEE, 2009, pp. 2146–2153.
  • [39] J.-L. Vincent, J. Rello, J. Marshall, E. Silva, A. Anzueto, C. D. Martin, R. Moreno, J. Lipman, C. Gomersall, Y. Sakr et al., “International study of the prevalence and outcomes of infection in intensive care units,” Jama, vol. 302, no. 21, pp. 2323–2329, 2009.
  • [40] W. W. Ng, G. Zeng, J. Zhang, D. S. Yeung, and W. Pedrycz, “Dual autoencoders features for imbalance classification problem,” Pattern Recognition, vol. 60, pp. 875–889, 2016.
  • [41] H.-C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach, “Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4d patient data,” IEEE transactions on pattern analysis and machine intelligence, vol. 35, no. 8, pp. 1930–1943, 2013.
  • [42] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio, “Why does unsupervised pre-training help deep learning?” Journal of Machine Learning Research, vol. 11, no. Feb, pp. 625–660, 2010.
  • [43] C. Poultney, S. Chopra, Y. L. Cun et al., “Efficient learning of sparse representations with an energy-based model,” in Advances in Neural Information Processing Systems, 2006, pp. 1137–1144.
  • [44] I. Goodfellow, H. Lee, Q. V. Le, A. Saxe, and A. Y. Ng, “Measuring invariances in deep networks,” in Advances in neural information processing systems, 2009, pp. 646–654.
  • [45] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion,” The Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
  • [46] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [47] S. Kullback and R. A. Leibler, “On information and sufficiency,” The annals of mathematical statistics, vol. 22, no. 1, pp. 79–86, 1951.
  • [48] J. Wu, Z. Cai, and X. Zhu, “Self-adaptive probability estimation for naive bayes classification,” in The 2013 International Joint Conference on Neural Networks (IJCNN), 2013, pp. 1–8.
  • [49] J. Wu, S. Pan, X. Zhu, C. Zhang, and P. S. Yu, “Multiple structure-view learning for graph classification,” IEEE Transactions on Neural Networks and Learning Systems, vol. PP, no. 99, pp. 1–16, 2018.
  • [50] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks.” in Aistats, vol. 9, 2010, pp. 249–256.