Stochastic Bottleneck: Rateless Auto-Encoder for Flexible Dimensionality Reduction

05/06/2020 · by Toshiaki Koike-Akino, et al. · MERL

We propose a new concept of rateless auto-encoders (RL-AEs) that enable a flexible latent dimensionality, which can be seamlessly adjusted for varying distortion and dimensionality requirements. In the proposed RL-AEs, instead of a deterministic bottleneck architecture, we use an over-complete representation that is stochastically regularized with weighted dropouts, in a manner analogous to sparse AE (SAE). Unlike SAEs, our RL-AEs employ monotonically increasing dropout rates across the latent representation nodes such that the latent variables become sorted by importance, as in principal component analysis (PCA). This is motivated by the rateless property of conventional PCA, where the least important principal components can be discarded to realize variable-rate dimensionality reduction that gracefully degrades the distortion. In contrast, since the latent variables of conventional AEs are equally important for data reconstruction, they cannot be simply discarded to further reduce the dimensionality after the AE model is trained. Our proposed stochastic bottleneck framework enables seamless rate adaptation with high reconstruction performance, without requiring a predetermined latent dimensionality at training. We experimentally demonstrate that the proposed RL-AEs achieve variable-rate dimensionality reduction with low distortion compared to conventional AEs.


1 Introduction

In many real-world applications, the raw data measurements (e.g., audio/speech, images, video, biological signals) often have very high dimensionality. Adequately handling high-dimensional data often requires the application of dimensionality reduction techniques Maaten:2009 that transform the original data into meaningful feature representations of reduced dimensionality. Such feature representations should reduce the dimensionality to the minimum number required to capture the salient properties of the data. Dimensionality reduction is vital in many machine learning applications, since one needs to mitigate the so-called "curse of dimensionality" Jimnez:1997 . In the past few decades, latent representation learning based on auto-encoders (AEs) Hinton:2006 ; Schloz:2008 ; Kramer:1991 ; DeMers:1993 ; Ng:2011 ; Vincent:2010 ; Doersch:2016 ; Sonderby:2016 has been widely used for dimensionality reduction, since this nonlinear technique has shown superior real-world performance compared to classical linear counterparts, such as principal component analysis (PCA).

One of the challenges in dimensionality reduction is to determine the optimal latent dimensionality that can sufficiently capture the data features required for particular applications. Although some regularization techniques, such as the sparse AE (SAE) Ng:2011 and the rate-distortion AE Giraldo:2013 , may be useful to self-adjust the effective dimensionality, no existing method provides a rateless property MacKay:2005 that allows the latent dimensionality to be seamlessly adjusted to the varying distortion requirements of different downstream applications, without modification of the trained AE model. However, realizing a rateless AE is not straightforward, since traditional AEs typically learn nonlinear manifolds where the latent variables are equally important, unlike the linear manifold models used for PCA.

In this paper, we introduce a novel AE framework that universally achieves flexible dimensionality reduction with high performance. Motivated by the fact that traditional PCA is readily adaptable to any dimension by simply appending or dropping sorted principal components, we propose a stochastic bottleneck architecture that associates the upper latent variables with higher-principal nonlinear features, so that the user can freely discard the least-principal latent variables if desired. Our contributions are summarized below:

  • We introduce a new concept of rateless AEs designed for flexible dimensionality reduction.

  • A stochastic bottleneck framework is proposed to prioritize the latent space non-uniformly.

  • An extended regularization technique called TailDrop is considered to realize rateless AEs.

  • We discuss dropout distribution optimization under the principle of multi-objective learning.

  • We demonstrate that the proposed AEs achieve excellent distortion performance over a wide range of dimensionalities on the standard MNIST and CIFAR-10 image datasets.

  • We evaluate AE models trained for a perceptual distortion measure based on structural similarity (SSIM) Wang:2004 as well as the traditional mean-square error (MSE) metric.

2 Rateless auto-encoder (RL-AE)

2.1 Dimensionality reduction

Figure 1: Auto-encoder architectures: (a) conventional bottleneck network, (b) sparse AE regularized by dropout, having a probabilistically lower-dimensional representation, (c) stochastic bottleneck, having two-dimensional regularization with non-identical dropout rates in both the depth and width directions to realize the rateless property via ordered-principal latent variables.

Due to the curse of dimensionality, representation learning to reduce the dimensionality is often of great importance when handling high-dimensional datasets in machine learning. To date, many algorithms have been developed for dimensionality reduction Maaten:2009 , e.g., PCA, kernel PCA, Isomap, maximum variance unfolding, diffusion maps, locally linear embedding, Laplacian eigenmaps, local tangent space analysis, Sammon mapping, locally linear coordination, and manifold charting, along with the AE. Among these, the AE Hinton:2006 ; Schloz:2008 ; Kramer:1991 ; DeMers:1993 ; Ng:2011 ; Vincent:2010 ; Doersch:2016 ; Sonderby:2016 has shown high potential to learn the lower-dimensional latent variables of the nonlinear manifold underlying the datasets. The AE is an artificial neural network with a bottleneck architecture, as illustrated in Fig. 1(a), where $N$-dimensional data is transformed into an $M$-dimensional latent representation (for $M \le N$) via an encoder network. The latent variables should contain sufficient features for reconstructing the original data through a decoder network.

From the original data $\mathbf{x} \in \mathbb{R}^{N}$, the corresponding latent representation $\mathbf{z} \in \mathbb{R}^{M}$, with a reduced dimensionality $M \le N$, is generated by the encoder network as $\mathbf{z} = f_{\boldsymbol{\theta}}(\mathbf{x})$, where $\boldsymbol{\theta}$ denotes the encoder network parameters. The latent variables should adequately capture the statistical geometry of the data manifold, such that the decoder network can reconstruct the data as $\hat{\mathbf{x}} = g_{\boldsymbol{\phi}}(\mathbf{z})$, where $\boldsymbol{\phi}$ denotes the decoder network parameters and $\hat{\mathbf{x}} \in \mathbb{R}^{N}$. The encoder and decoder pair are jointly trained to minimize the reconstruction loss (i.e., distortion), as given by:

$$\min_{\boldsymbol{\theta}, \boldsymbol{\phi}} \; \mathbb{E}_{\mathbf{x}} \big[ \mathcal{L}\big(\mathbf{x}, \, g_{\boldsymbol{\phi}}(f_{\boldsymbol{\theta}}(\mathbf{x}))\big) \big], \qquad (1)$$

where the loss function $\mathcal{L}(\cdot,\cdot)$ is chosen to quantify the distortion (e.g., MSE) between $\mathbf{x}$ and $\hat{\mathbf{x}}$.
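To make the encoder/decoder notation concrete, the following minimal sketch implements a generic fully-connected AE and one gradient step on the MSE distortion of Eq. (1). It uses PyTorch, whereas the paper's experiments use Chainer, and the layer widths and latent size are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of the encoder f_theta, decoder g_phi, and the MSE objective of
# Eq. (1). NOT the authors' exact model: PyTorch and the sizes below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    def __init__(self, data_dim: int = 784, latent_dim: int = 64, hidden: int = 256):
        super().__init__()
        # Encoder f_theta: R^N -> R^M
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder g_phi: R^M -> R^N
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, data_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)      # latent representation z
        return self.decoder(z)   # reconstruction x_hat

# One training step minimizing the MSE distortion E[L(x, x_hat)].
model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)          # stand-in mini-batch of flattened images
optimizer.zero_grad()
loss = F.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```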

2.2 Motivation: rateless property

AEs are also known as nonlinear PCA (NLPCA) Schloz:2008 ; Kramer:1991 ; DeMers:1993 . If we consider a simplified case where there is no nonlinear activation in the AE model, the encoder and decoder functions reduce to simple affine transformations. Specifically, the encoder becomes $\mathbf{z} = \mathbf{W}\mathbf{x} + \mathbf{b}$, where the trainable parameters are the linear weight $\mathbf{W} \in \mathbb{R}^{M \times N}$ and the bias $\mathbf{b} \in \mathbb{R}^{M}$. Likewise, the decoder becomes $\hat{\mathbf{x}} = \mathbf{W}'\mathbf{z} + \mathbf{b}'$ with $\mathbf{W}' \in \mathbb{R}^{N \times M}$ and $\mathbf{b}' \in \mathbb{R}^{N}$. If the distortion measure is MSE, then the optimal linear AE coincides with classical PCA when the data follows a multivariate Gaussian distribution, according to the Karhunen–Loève theorem.

To illustrate, assume Gaussian data $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{C})$ with mean $\boldsymbol{\mu}$ and covariance $\mathbf{C}$, which has the eigen-decomposition $\mathbf{C} = \boldsymbol{\Phi} \boldsymbol{\Lambda} \boldsymbol{\Phi}^{\mathrm{T}}$, where $\boldsymbol{\Phi}$ is the unitary eigenvector matrix and $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$ is the diagonal matrix of ordered eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N \ge 0$. For PCA, the encoder uses the $M$ principal eigenvectors to project the data onto an $M$-dimensional latent subspace with $\mathbf{W} = \mathbf{I}_M \boldsymbol{\Phi}^{\mathrm{T}}$ and $\mathbf{b} = -\mathbf{I}_M \boldsymbol{\Phi}^{\mathrm{T}} \boldsymbol{\mu}$, where $\mathbf{I}_M \in \mathbb{R}^{M \times N}$ denotes the incomplete identity matrix with $M$ diagonal elements equal to one and zero elsewhere. The decoder uses the transposed projection with $\mathbf{W}' = \boldsymbol{\Phi} \mathbf{I}_M^{\mathrm{T}}$ and $\mathbf{b}' = \boldsymbol{\mu}$. The MSE distortion is given by

$$\mathbb{E}\big[ \|\mathbf{x} - \hat{\mathbf{x}}\|^2 \big] = \sum_{n=M+1}^{N} \lambda_n. \qquad (2)$$

Since the eigenvalues are sorted, the distortion gracefully degrades as principal components are removed in the corresponding order. Of course, the MSE would be considerably worse if an improper ordering (e.g., reversed) were used.

One of the benefits of classical PCA is its graceful rateless property due to the ordering of the principal components. Similar to rateless channel codes such as fountain codes MacKay:2005 , PCA does not require a pre-determined compression ratio (given by $R = M/N$) for dimensionality reduction, and the latent dimensionality can later be freely adjusted depending on the downstream application. More specifically, the PCA encoder and decoder learned for a dimensionality of $M$ can be universally reused for any lower-dimensional PCA of latent size $m \le M$ without any modification of the PCA model, by simply dropping the least-principal components in $\mathbf{z}$, i.e., nullifying the tail variables as $z_j = 0$ for all $j > m$.
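This rateless behavior of PCA can be reproduced with a few lines of NumPy, as in the sketch below. The synthetic Gaussian data, dimensions, and truncation points are illustrative assumptions; the printed MSE degrades gracefully as the least-principal components are zeroed, consistent with Eq. (2).

```python
# Sketch of the rateless property of PCA on synthetic correlated Gaussian data:
# a single projection learned for M components is reused for any m <= M by
# zeroing the trailing (least-principal) latent variables. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, M, num_samples = 32, 16, 10_000
X = rng.standard_normal((num_samples, N)) @ rng.standard_normal((N, N))  # correlated data

mu = X.mean(axis=0)
C = np.cov(X - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)       # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]          # re-sort descending (principal first)
Phi = eigvecs[:, order[:M]]                # top-M principal eigenvectors (N x M)

Z = (X - mu) @ Phi                         # encoder: z = Phi^T (x - mu)
for m in (M, M // 2, M // 4):
    Z_trunc = Z.copy()
    Z_trunc[:, m:] = 0.0                   # drop the least-principal components
    X_hat = Z_trunc @ Phi.T + mu           # decoder: x_hat = Phi z + mu
    mse = np.mean((X - X_hat) ** 2)
    print(f"m = {m:2d}: MSE = {mse:.4f}")  # distortion degrades gracefully with m
```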

The rateless property is greatly beneficial in practical applications since the optimal latent dimensionality is often not known beforehand. Instead of training multiple encoder and decoder pairs for different compression rates, one common PCA model can cover all rates $m \le M$ by simply dropping trailing components, while still attaining the good performance given by (2). For example, a medical institute could release a massively high-dimensional magnetic resonance imaging (MRI) dataset alongside a trained PCA model with a reduced dimensionality of $M$ targeted for a specific diagnostic application. However, under various other applications (e.g., different analysis or diagnostic contexts), an even further reduced dimensionality may suffice and/or improve learning performance for the ultimate task. Even for end-users that require fewer latent variables, the excellent rate-distortion tradeoff (under Gaussian data assumptions) is still attained, without updating the PCA model, by simply discarding the least-principal components.

Nevertheless, traditional PCA often underperforms nonlinear dimensionality reduction techniques on real-world datasets. By exploiting nonlinear activation functions such as the rectified linear unit (ReLU), AEs can better learn the inherent nonlinearities of the latent representations underlying the data. However, existing AEs do not readily achieve the rateless property, because the latent variables are generally learned to be equally important. Hence, multiple AEs would need to be trained and deployed for different target dimensionalities. This drawback also holds for the progressive dimensionality reduction approaches employed by stacked AEs Hinton:2006 and hierarchical AEs Schloz:2008 , which require multiple rounds of training and re-tuning for different dimensionalities. In this paper, we propose a simple and effective technique, a stochastic bottleneck, to realize rateless AEs that are adaptable to any compression rate.

2.3 StochasticWidth bottleneck

Several variants of AE have been proposed, e.g., sparse AE (SAE) Ng:2011 , variational AE (VAE) Vincent:2010 ; Doersch:2016 ; Sonderby:2016 , rate-distortion AE Giraldo:2013 , and compressive AE Theis:2017 . We introduce a new AE family which has no fixed bottleneck architecture to realize the rateless property for seamless dimensionality reduction. Our method can be viewed as an extended version of SAE, similar in its over-complete architecture, but also employing a varying dropout distribution across the width of the network. This aspect of our approach is key for achieving good reconstruction performance while allowing a flexibly varying compression rate for the dimensionality reduction.

Unlike a conventional AE with a deterministic bottleneck architecture, as shown in Fig. 1(a), the SAE employs a probabilistic bottleneck with an effective dimensionality that is stochastically reduced by dropout, as depicted in Fig. 1(b). For example, the SAE encoder generates $M$-dimensional latent variables which are randomly dropped out at a probability of $p$, resulting in an effective latent dimensionality of $(1-p)M$. Although the SAE adapts better than a deterministic AE to further dimensionality reduction by dropping latent variables, the latent variables are still trained to be equally important for reconstructing the data, and thus it is limited in achieving flexible ratelessness.

Our AE employs a stochastic bottleneck that imposes a specific dropout rate distribution that varies across both the width and depth of the network, as shown in Fig. 1(c). In particular, our StochasticWidth technique employs a monotonically increasing dropout rate from the head (upper) latent variable nodes to the tail (lower) nodes in order to encourage the latent variables to be ordered by importance, in a manner analogous to PCA. By concentrating more important features in the head nodes, we hope to enable adequate data reconstruction even when some of the least important dimensions (analogous to least-principal components) are later discarded.
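A minimal sketch of such a StochasticWidth mask is shown below. The linear ramp of dropout rates is an illustrative assumption (the paper instead shapes the distribution as discussed in Sections 2.4 and 2.5), and no 1/(1-p) rescaling is applied in this simplified version.

```python
# Sketch of StochasticWidth-style regularization: each latent node j is dropped
# independently with a rate p_j that increases monotonically from head to tail.
# The linear rate schedule is an illustrative assumption, not the paper's choice.
import torch

def stochastic_width_dropout(z: torch.Tensor, max_rate: float = 0.9,
                             training: bool = True) -> torch.Tensor:
    """Independent dropout with monotonically increasing rates across the width."""
    if not training:
        return z
    latent_dim = z.shape[-1]
    # Dropout rate grows from ~0 at the head node to max_rate at the tail node.
    rates = torch.linspace(0.0, max_rate, latent_dim, device=z.device)
    keep = (torch.rand_like(z) >= rates).float()   # per-node Bernoulli keep mask
    return z * keep                                # note: no 1/(1-p) rescaling here

z = torch.randn(8, 16)                 # batch of 16-dimensional latent variables
z_reg = stochastic_width_dropout(z)    # head nodes survive far more often than tail nodes
```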

This non-uniform dropout rate may also offer another benefit for gradient optimization. For existing AEs, the distortion is invariant to node permutations (with correspondingly permuted weights and biases), which implies that there is a large number of equivalent global solutions minimizing the loss function. This plurality of solutions may hinder stochastic gradient optimization, whereas non-uniform dropout rates assign a distinct priority to every node and thus prevent the permutation ambiguity.

2.4 TailDrop regularization

Dropout Hinton:2012 ; Srivastava:2014 has been widely used to regularize over-parameterized deep neural networks. The role of dropout is to improve generalization performance by preventing activations from becoming strongly correlated, which would otherwise lead to over-training. In the standard dropout implementation, network activations are discarded (by zeroing the activation of that neuron) during training (and, in some cases, testing) with an independent probability $p$. A recent theory Gal:2016 provides a viable interpretation of dropout as a Bayesian inference approximation.

There are many related regularization methods proposed in literature; e.g., DropConnect Wan:2013 , DropBlock Wu:2018 , StochasticDepth Huang:2016 , DropPath Larsson:2016 , ShakeDrop Yamada:2018 , SpatialDrop Tompson:2015 , ZoneOut Krueger:2016 , Shake-Shake regularization Gastaldi:2017 , and data-driven drop Huang:2017 . In order to facilitate the rateless property for stochastic bottleneck AE architectures, we introduce an additional regularization mechanism referred to as TailDrop, as one realization of StochasticWidth.

Figure 2: Non-uniform dropout regularization: (a) StochasticDepth Huang:2016 to control depth by prioritizing shallower layers, (b) StochasticWidth to control width by prioritizing head neurons with independent and increasing dropout, (c) StochasticWidth to nullify a consecutive burst of tail neurons via TailDrop, (d) examples of tail-drop distributions.

The stochastic bottleneck uses non-uniform dropout to adjust the importance of each neuron, as explained in Fig. 1(c). This regularization technique is related to StochasticDepth Huang:2016 used in deep residual networks. As illustrated in Fig. 2(a), StochasticDepth drops out entire layers, with deeper layers dropped at a higher rate, so that the effective network depth is constrained and shallower layers are dominantly trained. Analogously, non-uniform dropouts are carried out across the width direction for StochasticWidth, as shown in Fig. 2(b), where independent dropouts at increasing rates are applied to each neuron. The monotonically increasing dropout rates can also be realized by dropping consecutive nodes at the tail, as shown in Fig. 2(c), which we call TailDrop. For TailDrop, the desired dropout rates can be achieved by adjusting the probability distribution of the tail drop length, as depicted in Fig. 2(d). Considering scenarios where the user would later discard the least-principal latent variables to adjust the dimensionality, we focus on this TailDrop regularization for the rateless AE in this paper.
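A minimal sketch of a TailDrop mask is given below. The uniform sampling of the survivor length is a placeholder assumption; the shaping of this distribution is discussed in Section 2.5.

```python
# Sketch of TailDrop: instead of independent per-node dropout, sample a survivor
# length per sample and zero out all trailing latent nodes beyond it during
# training. The uniform survivor-length sampling is a placeholder; Section 2.5
# discusses how the distribution should be shaped.
import torch

def tail_drop(z: torch.Tensor, training: bool = True) -> torch.Tensor:
    """Zero a random number of trailing latent nodes (one draw per sample)."""
    if not training:
        return z
    batch, latent_dim = z.shape
    # Sample a survivor length m in {1, ..., latent_dim} for each sample.
    survivors = torch.randint(1, latent_dim + 1, (batch,), device=z.device)
    idx = torch.arange(latent_dim, device=z.device).expand(batch, latent_dim)
    mask = (idx < survivors.unsqueeze(1)).float()   # keep only the first m nodes
    return z * mask

z = torch.randn(4, 16)
z_tail = tail_drop(z)   # each row keeps a prefix of the latent nodes; the tail is zeroed
```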

2.5 Multi-objective learning

Finding an appropriate dropout probability distribution is a key consideration in the design of high-performance rateless AEs. We now offer insights on how to do so; however, a rigorous theoretical treatment remains an open problem for future study. The objective function in (1) should be re-formulated to realize the rateless property. Our ultimate goal is to find AE model parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ that simultaneously minimize the distortion at multiple rates. Specifically, this is an $M$-ary multi-objective optimization:

$$\min_{\boldsymbol{\theta}, \boldsymbol{\phi}} \; \big[ D_1(\boldsymbol{\theta}, \boldsymbol{\phi}), \; D_2(\boldsymbol{\theta}, \boldsymbol{\phi}), \; \ldots, \; D_M(\boldsymbol{\theta}, \boldsymbol{\phi}) \big], \qquad (3)$$

where $D_m(\boldsymbol{\theta}, \boldsymbol{\phi})$ denotes the expected distortion of the candidate AE model parameterized by $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$, given that the $M$-dimensional latent variables are further reduced to $m$-dimensional variables by dropping the last $M - m$ variables. In this multi-objective problem, optimizing an AE to minimize one component of the loss, i.e., $D_m$ for a particular dimensionality $m$, generally does not yield the optimal model for other dimensionalities $m' \neq m$. Hence, a rateless AE model must strike the best balance across multiple dimensionalities in order to approach the Pareto-front solutions.

One commonly used naïve method in multi-objective optimization is the weighted sum method, which reduces the problem to a single objective:

$$\min_{\boldsymbol{\theta}, \boldsymbol{\phi}} \; \sum_{m=1}^{M} w_m D_m(\boldsymbol{\theta}, \boldsymbol{\phi}), \qquad (4)$$

with some weights $w_m \ge 0$. One may choose the weights to scale each distortion term to a similar amplitude, e.g., $w_m \propto 1/D_m^{\star}$ for positive distortions, where $D_m^{\star}$ denotes the ground solution of the $m$-th objective. As the expected distortion may depend on the eigenvalues as shown in (2), understanding the nonlinear eigenspectrum can facilitate optimizing the weight distribution. The stochastic TailDrop regularization at the training phase can be interpreted as such a weighting, since the conventional single-objective optimization in (1) then effectively becomes the weighted sum optimization in (4). Accordingly, the weight $w_m$ is the probability that exactly $m$ latent nodes survive, i.e., the TailDrop distribution is $\Pr(\text{drop length} = M - m) = w_m$.

Besides the weighted sum approach, there are several improved methods for multi-objective optimization, such as the weighted metric method. We leave such an optimization framework for future work. In this paper, we instead adopt parametric eigenspectrum assumptions for simplicity. Under this model-based approach, we evaluated several parametric distributions for the TailDrop probability, e.g., Poisson, Laplacian, exponential, sigmoid, Lorentzian, polynomial, and Wigner distributions, some of which are depicted in Fig. 2(d). Through a preliminary experiment, it was found that the power cumulative distribution function $F(r) = r^{\beta}$ for an appropriately chosen order $\beta$ (where $r$ denotes the compression rate) performed well in most cases. Accordingly, we focus on the power distribution for TailDrop in the experiments below.
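As a concrete illustration, the survivor length can be drawn by inverse-transform sampling from such a power CDF. Interpreting $F(r) = r^{\beta}$ as the CDF of the survivor rate $r = m/M$, and the value $\beta = 2$, are assumptions made for this sketch only; the paper selects the order empirically per dataset.

```python
# Sketch of drawing the TailDrop survivor length from a power CDF F(r) = r**beta
# over the compression rate r = m/M, via inverse-transform sampling.
# beta = 2.0 is an illustrative assumption.
import numpy as np

def sample_survivor_length(latent_dim: int, beta: float = 2.0,
                           size: int = 1, rng=None) -> np.ndarray:
    """Sample survivor lengths m with P(m/M <= r) = r**beta."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(size)
    r = u ** (1.0 / beta)                                   # inverse CDF of F(r) = r**beta
    m = np.clip(np.ceil(r * latent_dim), 1, latent_dim)     # map rate to a node count
    return m.astype(int)

lengths = sample_survivor_length(latent_dim=64, beta=2.0, size=10_000)
print(lengths.mean())   # a larger beta keeps more head nodes alive on average
```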

3 Experiments

To demonstrate the proof-of-concept benefits of our rateless AEs, we use the standard image datasets MNIST and CIFAR-10 Krizhevsky:2009 . MNIST contains 10-class handwritten gray-scale images of size 28-by-28, and thus the raw data dimensionality is 784. The dataset has 60,000 and 10,000 images for training and testing, respectively. CIFAR-10 is a dataset of 32-by-32 color images representing 10 classes of natural scene objects. The raw data dimensionality is thus 3,072. The training set and test set contain 50,000 and 10,000 images, respectively.

The AE models were implemented using the Chainer framework Tokui:2015 . For simplicity, we use fully-connected three-layer perceptrons with ReLU activation functions for both the encoder and decoder networks. Note that the concept of StochasticWidth regularization to realize ordered-principal features can be applied to recurrent and convolutional networks in a straightforward manner. The number of nodes in the hidden layers is chosen separately for MNIST and CIFAR-10. For the conventional SAE, a fixed dropout sparsity is used as a baseline to evaluate the robustness to flexible latent dimensionality. Model training was performed using the adaptive momentum (Adam) stochastic gradient descent method Kingma:2014 with a fixed learning rate and mini-batch size, and early stopping was applied.

3.1 MSE measure

(a) MNIST
(b) CIFAR-10
Figure 3: MSE performance of SAE and RL-AE as a function of the survivor latent dimensionality $m$.

Figs. 3(a) and (b) show the MSE performance of the conventional SAE and the proposed RL-AE for the MNIST and CIFAR-10 datasets, respectively. For the conventional SAE, multiple AE models are trained, each at its intended latent dimensionality $m$. The rateless AE is optimized once at the dimensionality $M$ using TailDrop with a power distribution, whose order $\beta$ is chosen from a finite candidate set (separately for MNIST and CIFAR-10) to achieve a good rate-distortion tradeoff. The latent dimensionality used for image reconstruction is then varied during testing by deterministically dropping the tail variables.
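This test-time protocol of deterministically dropping tail variables can be sketched as follows. The randomly initialized encoder/decoder below merely stands in for a trained RL-AE, and the dimensions and sweep points are illustrative assumptions.

```python
# Sketch of the evaluation protocol: a single encoder/decoder pair is reused
# while the survivor dimensionality m is swept by zeroing the tail latent
# variables before decoding. The random-weight model is a stand-in for a
# trained RL-AE; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

data_dim, latent_dim = 784, 64
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

x = torch.rand(128, data_dim)              # stand-in test batch
with torch.no_grad():
    z = encoder(x)
    for m in (64, 32, 16, 8):
        z_m = z.clone()
        z_m[:, m:] = 0.0                   # drop the least-principal latent variables
        mse = F.mse_loss(decoder(z_m), x)
        print(f"m = {m:2d}: MSE = {mse.item():.4f}")
```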

As shown in Fig. 3(a), the conventional SAE does not adapt well to variable dimensionality: the MSE drastically degrades when the testing dimensionality is reduced below the intended dimensionality. When the latent variables of an SAE trained at dimensionality $M$ are dropped down to a smaller dimensionality $m$, the resulting MSE is significantly worse than that of an SAE model trained directly for $m$. This shows that existing SAEs cannot be universally reused for flexible dimensionality reduction, and hence adaptive switching between multiple trained SAE models would be required depending on the desired dimensionality. In contrast, our proposed RL-AE, which is trained once for dimensionality $M$, operates flexibly over a wide range of further reduced dimensionalities $m \le M$, while achieving MSE distortion close to the ideal MSEs obtained by SAE models trained for each specific dimensionality $m$.

Similar observations can be made for the CIFAR-10 dataset, as shown in Fig. 3(b). This confirms that high performance can be achieved by a single AE model at different compression rates by using the stochastic bottleneck regularization. The benefit comes from the non-uniform dropout rates across neurons, which concentrate the most-principal features in the upper nodes. Conventional uniform-rate dropout, as used in existing SAEs, still requires the target dimensionality to be known during training.

It should be noted that linear PCA dimensionality reduction performs surprisingly well, competitive with the proposed nonlinear AE for the CIFAR-10 dataset in Fig. 3(b). Because MNIST images are nearly binary bitmaps whose statistics are far from Gaussian, PCA did not work well, as shown in Fig. 3(a). However, natural images such as those in CIFAR-10 are often well-modeled by a Gauss–Markov process, which may be the primary reason why PCA works sufficiently well, in particular for the MSE metric. Although it was unexpected that the nonlinear AEs could not improve the MSE performance over linear PCA for CIFAR-10, the MSE curve of our AE agrees almost perfectly with that of PCA across the evaluated range of $m$, which implies that our stochastic bottleneck approach learned the ordered-principal components as intended.

3.2 SSIM measure

Here, we verify that the advantage of our rateless AEs extends beyond the MSE distortion criterion. Since the classical MSE metric is known to be inconsistent with perceptual image quality, the structural similarity (SSIM) index Wang:2004 has recently been used as an alternative measure of perceptual distortion. The SSIM index takes values up to 1, with higher values indicating greater perceptual similarity between the original and distorted images (1 corresponds to a perfect reconstruction). We use the negative SSIM index as a new loss function to fine-tune the AE models, which were pre-trained for the MSE metric, so as to improve the perceptual image quality.
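For reference, the SSIM index itself can be computed with scikit-image as sketched below. Note that fine-tuning with a negative-SSIM loss, as done here, additionally requires a differentiable SSIM implementation inside the training framework, which is not shown; the stand-in images below are synthetic.

```python
# Sketch of evaluating the SSIM index for a reconstruction with scikit-image.
# The negative SSIM is the quantity used as the fine-tuning loss in the paper;
# this snippet only evaluates the metric on synthetic stand-in images.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
original = rng.random((28, 28))                    # stand-in gray-scale image in [0, 1]
reconstructed = np.clip(original + 0.05 * rng.standard_normal((28, 28)), 0.0, 1.0)

ssim = structural_similarity(original, reconstructed, data_range=1.0)
negative_ssim_loss = -ssim                         # lower is better, like a distortion
print(f"SSIM = {ssim:.3f}, loss = {negative_ssim_loss:.3f}")
```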

(a) MNIST
(b) CIFAR-10
Figure 4: SSIM performance of SAE and RL-AE as a function of the survivor latent dimensionality $m$.

Figs. 4(a) and (b) plot the negative SSIM index of the images reconstructed by the conventional SAE and the proposed RL-AE for the MNIST and CIFAR-10 datasets, respectively. These figures confirm that the conventional SAE cannot be universally reused for flexibly varying dimensionality under the SSIM distortion metric either. Although the proposed RL-AE may perform worse than the conventional SAEs at some dimensionalities, for which those SAE models were dedicatedly optimized, our RL-AE flexibly achieves SSIM performance closely comparable to the best SSIMs obtained by the ensemble of SAEs over a wide range of dimensionalities $m$.

We can also see that traditional PCA suffers a higher loss in the perceptual SSIM metric than in the MSE metric. In particular for MNIST in Fig. 4(a), the SSIM degradation of PCA relative to our RL-AE is noticeable over the whole range of dimensionalities, whereas PCA remained competitive at lower dimensionalities under the MSE metric, as seen in Fig. 3(a). More importantly, our AE offers a perceptual performance benefit in the SSIM metric over PCA even for the CIFAR-10 dataset, for which the AEs could not outperform PCA in the MSE metric, as discussed for Fig. 3(b). This makes sense because PCA does not consider perceptual quality but only the signal energy relevant to the MSE measure.

3.3 Reconstructed images

Figs. 5(a) and (b) show visual samples randomly chosen from the MNIST test dataset for SAE and RL-AE reconstructions, respectively. The top row displays the original MNIST images, and the subsequent rows are reconstructed images for progressively reduced dimensionalities $m$. Both types of models are trained at a latent dimensionality of $M$ under the MSE measure. Our proposed RL-AE clearly exhibits improved visual quality for flexible dimensionality reduction versus the conventional SAE, without requiring retraining for each reduced dimensionality. Similar results can be seen for CIFAR-10 in Figs. 6(a) and (b).

Tables 1 and 2 show the corresponding averaged MSE and SSIM index performance at each survivor dimensionality $m$ for MNIST and CIFAR-10, respectively. Here, we also present the 10-class classification accuracy when a classical support vector machine (SVM) is applied to the reduced-dimension latent variables. Besides the higher image quality, we also observe higher classification accuracy achieved by the proposed rateless AE across the variable dimensionality.
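This downstream classification check can be sketched with scikit-learn as follows. The synthetic latent features and labels, the linear kernel, and the chosen survivor dimensionality are illustrative assumptions standing in for the encoded MNIST latents.

```python
# Sketch of the downstream classification check: an SVM is fit on the
# reduced-dimensional latent variables (only the first m dimensions are used).
# Synthetic class-dependent features stand in for the encoded MNIST latents.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_samples, latent_dim, num_classes, m = 2000, 64, 10, 16
y = rng.integers(0, num_classes, size=num_samples)
Z = rng.standard_normal((num_samples, latent_dim))
Z[:, :8] += y[:, None]          # class information concentrated in the head dimensions

Z_train, Z_test, y_train, y_test = train_test_split(
    Z[:, :m], y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(Z_train, y_train)
print(f"SVM accuracy at m = {m}: {clf.score(Z_test, y_test):.3f}")
```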

(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 5: MNIST reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.
(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 6: CIFAR-10 reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.
Table 1: MSE (dB), SSIM index, and SVM classification accuracy of the conventional SAE and the proposed RL-AE as a function of the survivor dimensionality $m$, with both models optimized under the MSE measure at a latent dimensionality of $M$ for the MNIST dataset.
Table 2: MSE (dB), SSIM index, and SVM classification accuracy of the conventional SAE and the proposed RL-AE as a function of the survivor dimensionality $m$, with both models optimized under the MSE measure at a latent dimensionality of $M$ for the CIFAR-10 dataset.

3.4 Latent representation

(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 7: The first two latent variables of the MNIST test images encoded by the SAE and RL-AE, which are trained at a latent dimensionality of $M$ under the MSE measure.

Finally, we show the latent space geometry in Figs. 7(a) and (b), where the first two latent variables of all MNIST test images are plotted for the traditional SAE and the proposed RL-AE, respectively. The label-dependent clustering is much more clearly observable in our RL-AE than in the conventional SAE, since the most-principal latent components are properly associated with the upper latent variables via the proposed stochastic bottleneck technique. This observation is consistent with the higher SVM accuracy reported in Table 1.

4 Conclusions

We proposed a new type of auto-encoder employing a form of stochastic bottlenecking with non-uniform dropout rates for flexible dimensionality reduction. The proposed auto-encoders are rateless, i.e., the compression rate of the dimensionality reduction is not pre-determined at the training phase, and the user can freely change the dimensionality at the testing phase without severely degrading quality. To realize rateless AEs, a simple regularization method called TailDrop was introduced to impose higher priority on the upper neurons so that they learn the most-principal nonlinear features. This paper showed proof-of-concept results based on the standard MNIST and CIFAR-10 image datasets. Universally good distortion performance was achieved with a single AE model irrespective of the dimensionality reduction rate, simply by dropping the least-principal latent dimensions. More rigorous analysis and theoretical optimization of the dropout rate distributions for real-world data are left for future work. Multi-objective learning to account for various downstream applications is also an important open question to pursue.

References

5 Supplementary Experiments

We show the MSE performance of the proposed RL-AE for different datasets as follows:

  • Fashion-MNIST (FMNIST) Xiao:2017 is a set of fashion articles represented by gray-scale 28-by-28 images, associated with a label from 10 classes, consisting of a training set of 60,000 examples and a test set of 10,000 examples. FMNIST was intended to serve as a direct replacement for the MNIST dataset for benchmarking.

  • Kuzushiji-MNIST (KMNIST) Clanuwat:2018 is another set of hand-written Japanese characters, represented by 10-class gray-scale 28-by-28 images with the same data sizes as MNIST and FMNIST.

  • The street view house numbers (SVHN) dataset Netzer:2011 is similar to MNIST but composed of cropped 32-by-32 color images of house numbers. It contains 73,257 digits for training and 26,032 digits for testing.

  • CIFAR-100 Krizhevsky:2009 is a set of small natural images, just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The classes in CIFAR-100 are grouped into 20 superclasses.

Figs. 8(a) through (d) show the MSE performance as a function of the survivor latent dimensionality $m$ for FMNIST, KMNIST, SVHN, and CIFAR-100, respectively. We can confirm that the proposed AE achieves graceful performance over a wide range of dimensionalities, competitive with the best performance that the conventional AEs can offer at a pre-determined dimensionality. Although linear PCA also achieves rateless performance, a significant MSE loss is seen for the gray-scale datasets FMNIST and KMNIST, similar to MNIST in Fig. 3(a). For the color datasets, however, PCA performed well, just as for CIFAR-10 in Fig. 3(b). Nonetheless, our AE achieves nearly the best performance, outperforming the conventional AE. In addition, our AE may achieve better perceptual quality and classification accuracy, as discussed for CIFAR-10. The experimental results verify that a simple mechanism with non-uniform dropout regularization can enable a reasonable rateless property.

Figs. 9, 10, 11, and 12 show visual snapshots of randomly-chosen images reconstructed by the conventional AE and the proposed AE for FMNIST, KMNIST, SVHN, and CIFAR-100, respectively. One can observe a clear advantage of the RL-AE over the SAE in maintaining higher quality across variable dimensionality.

(a) FMNIST
(b) KMNIST
(c) SVHN
(d) CIFAR-100
Figure 8: MSE performance of SAE and RL-AE as a function of the survivor latent dimensionality $m$.
(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 9: FMNIST reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.
(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 10: KMNIST reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.
(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 11: SVHN reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.
(a) Conventional Sparse AE
(b) Proposed Rateless AE
Figure 12: CIFAR-100 reconstruction snapshots varying the survivor latent dimensionality $m$, using AE models designed at a latent dimensionality of $M$. The top row shows the original images, and subsequent rows are reconstructions for progressively reduced dimensionalities $m$.