Function space analysis of deep learning representation layers

10/09/2017, by Oren Elisha, et al.

In this paper we propose a function space approach to Representation Learning and the analysis of the representation layers in deep learning architectures. We show how to compute a weak-type Besov smoothness index that quantifies the geometry of the clustering in the feature space. This approach was already applied successfully to improve the performance of machine learning algorithms such as the Random Forest and tree-based Gradient Boosting. Our experiments demonstrate that in well-known and well-performing trained networks, the Besov smoothness of the training set, measured in the corresponding hidden layer feature map representation, increases from layer to layer. We also contribute to the understanding of generalization by showing how the Besov smoothness of the representations decreases as we add more mis-labeling to the training data. We hope this approach will contribute to the de-mystification of some aspects of deep learning.


1 Introduction

An excellent starting point for this paper is the survey on Representation Learning [3]. One of the main issues raised in this survey is that simple smoothness assumptions on the data do not hold. That is, there exists a curse of dimensionality and ‘close’ feature representations do not map to ‘similar’ values. The authors write: “We advocate learning algorithms that are flexible and non-parametric but do not rely exclusively on the smoothness assumption”.

In this work we do in fact advocate smoothness analysis of representation layers, yet in line with [3], our notion of smoothness is indeed flexible, adaptive and non-parametric. We rely on geometric multivariate function space theory and use the machinery of Besov ‘weak-type’ smoothness, which is robust enough to support quantifying the smoothness of high-dimensional discontinuous functions.

Although machine learning is mostly associated with the field of statistics, we argue that popular machine learning algorithms such as Support Vector Machines, tree-based Gradient Boosting and Random Forest (see e.g. [28]) are in fact closely related to the field of multivariate adaptive approximation theory. In essence, these algorithms work best if there exists a geometric structure of clusters in the feature space. If such geometry exists, these algorithms will capture it by segmenting out the different clusters. We claim that in the absence of such geometry, these machine learning algorithms will fail.

However, this is exactly where Deep Learning (DL) comes into play. In the absence of a geometrical structure in the given initial representation space, the goal of the DL layers is to create a series of transformations from one representation space to the next, where the structure of the geometry of the clusters improves sequentially. We quote [10]: “The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data”.

Let us provide an instructive example. Assume we are presented with a set of gray-scale images of dimension $N \times N$ with $L$ class labels. Assume further that a DL network has been successfully trained to classify these images with relatively high precision. This allows us to extract the representation of each image in each of the hidden layers. To create a representation at layer $j = 0$, we concatenate the rows of pixel values of each image, to create a vector of dimension $N^2$. We also normalize the pixel values to the range $[0,1]$. Since we advocate a function-theoretical approach, we transform the class labels into vector-values in the space $\mathbb{R}^{L-1}$ by assigning each label to a vertex of a standard simplex (see Section 2 below). Thus, the images are considered as samples of a function $f_0 : [0,1]^{N^2} \to \mathbb{R}^{L-1}$. In the general case, there is no hope that there exists geometric clustering of the classes in this initial feature space and that $f_0$ has sufficient ‘weak-type’ smoothness (as is verified by our experiments below). Thus, a transform into a different feature space is needed. We thus associate with each $j$-th layer of a DL network, a function $f_j : [0,1]^{n_j} \to \mathbb{R}^{L-1}$, where the samples are vectors created by normalizing and concatenating the feature maps computed from each of the images. Interestingly enough, although the series of functions $\{f_j\}$ are embedded in different dimensions $n_j$, through the simple normalizing of the features, our method is able to assign smoothness indices to each layer that are comparable. We claim that for well-performing networks, the representations in general ‘improve’ from layer to layer, and that our method captures this phenomenon and shows the increase of smoothness.
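As a concrete illustration of this construction, the following Python sketch flattens and normalizes a batch of images (or hidden-layer feature maps) into vectors with entries in $[0,1]$. It is only a minimal sketch under our own assumptions (per-feature min-max scaling as the normalization, NumPy arrays as input); the paper does not specify an exact normalization routine, and the helper name is hypothetical.

```python
import numpy as np

def layer_representation(feature_maps):
    """Flatten per-sample feature maps into vectors and scale them to [0, 1].

    feature_maps: array of shape (num_samples, ...), e.g. raw N x N images for
    the input layer, or activations extracted from a hidden layer.
    The per-feature min-max scaling used here is an assumption; any
    normalization mapping the features into [0, 1] fits the construction.
    """
    x = feature_maps.reshape(len(feature_maps), -1).astype(np.float64)
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    # Normalizing into [0, 1] is what makes the smoothness indices of layers
    # with different dimensions n_j comparable (see the embedding lemma below).
    return (x - x_min) / np.maximum(x_max - x_min, 1e-12)
```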

Related work is [38], where an architecture of Convolutional Sparse Coding was analyzed. The connection to this work is the emphasis on ‘sparsity’ analysis of hidden layers. However, there are significant differences, since we advocate a function space theoretical analysis of any neural network architecture in current use. Also, there is the recent work [42], where the authors take an ‘information-theoretical’ approach to the analysis of the stochastic gradient descent optimization of DL networks and the representation layers. One can safely say that all of the approaches, including the one presented here, need to be further evaluated on larger datasets and deeper architectures.

The paper is organized as follows: In Section 2 we review our smoothness analysis machinery, which is the wavelet decomposition of the Random Forest (RF) [22]. In Section 3 we present the required geometric function space theoretical background. Since we are comparing different representations over different spaces of different dimensions, we add to the theory presented in [22] relevant ‘dimension-free’ results. In Section 4 we show how to apply the theory in practice; specifically, how the wavelet decomposition of a RF can be used to numerically compute a Besov ‘weak-type’ smoothness index of a given function in any representation space (e.g. a hidden layer). Section 5 provides experimental results that demonstrate how our theory is able to explain empirical findings in various scenarios. Finally, Section 6 presents our conclusions as well as future work.

2 Wavelet decomposition of Random Forests

To measure smoothness of a dataset at the various DL representation layers, we apply the construction of wavelet decompositions of Random Forests [22]. Wavelets [13], [34] and geometric wavelets [15], [1] are a powerful yet simple tool for constructing sparse representations of ‘complex’ functions. The Random Forest (RF) [4], [11], [28], introduced by Breiman [5], [6], is a very effective machine learning method that can be considered as a way to overcome the ‘greedy’ nature and high variance of a single decision tree. When combined, the wavelet decomposition of the RF unravels the sparsity of the underlying function and establishes an order of the RF nodes from ‘important’ components to ‘negligible’ noise. Therefore, the method provides a better understanding of any constructed RF. Furthermore, the method is a basis for a robust feature importance algorithm. We note that one can apply a similar approach to improve the performance of tree-based Gradient Boosting algorithms [14].

We begin with an overview of single trees. In statistics and machine learning [7], [2], [4], [16], [28] the construction is called a Decision Tree or the Classification and Regression Tree (CART). We are given a real-valued function $f$ or a discrete dataset $\{(x_i, f(x_i))\}_{i=1}^N$, $x_i \in \mathbb{R}^n$, in some convex bounded domain $\Omega_0 \subset \mathbb{R}^n$. The goal is to find an efficient representation of the underlying function, overcoming the complexity, geometry and possibly non-smooth nature of the function values. To this end, we subdivide the initial domain $\Omega_0$ into two subdomains, e.g. by intersecting it with a hyper-plane. The subdivision is performed to minimize a given cost function. This subdivision process then continues recursively on the subdomains until some stopping criterion is met, which in turn determines the leaves of the tree. We now describe one instance of the cost function which is related to minimizing variance. At each stage of the subdivision process, at a certain node of the tree, the algorithm finds, for the convex domain $\Omega$ associated with the node:

(i) A partition by a hyper-plane into two convex subdomains $\Omega', \Omega''$, with $\Omega' \cup \Omega'' = \Omega$,

(ii) Two multivariate polynomials $Q_{\Omega'}, Q_{\Omega''}$ of fixed (typically low) total degree $r-1$.

The partition and the polynomials are chosen to minimize the following quantity

$$\left\| f - Q_{\Omega'} \right\|_{L_p(\Omega')}^p + \left\| f - Q_{\Omega''} \right\|_{L_p(\Omega'')}^p. \qquad (2.1)$$

Here, for $1 \le p < \infty$, we used the definition

$$\| g \|_{L_p(\tilde{\Omega})}^p := \int_{\tilde{\Omega}} | g(x) |^p \, dx.$$

If the dataset is discrete, consisting of feature vectors $x_i \in \mathbb{R}^n$, $1 \le i \le N$, with response values $f(x_i)$, then a discrete functional is minimized over all partitions $\Omega = \Omega' \cup \Omega''$

$$\sum_{x_i \in \Omega'} \left| f(x_i) - Q_{\Omega'}(x_i) \right|^p + \sum_{x_i \in \Omega''} \left| f(x_i) - Q_{\Omega''}(x_i) \right|^p. \qquad (2.2)$$

Observe that for any given subdividing hyperplane, the approximating polynomials in (2.2) can be uniquely determined for $p = 2$ by least squares minimization. For the order $r = 1$, the approximating polynomials are nothing but the means of the function values over each of the subdomains

$$Q_{\Omega'} = \frac{1}{\# \{ x_i \in \Omega' \}} \sum_{x_i \in \Omega'} f(x_i), \qquad Q_{\Omega''} = \frac{1}{\# \{ x_i \in \Omega'' \}} \sum_{x_i \in \Omega''} f(x_i).$$

In many applications of decision trees, the high dimensionality of the data does not allow searching through all possible subdivisions. As in our experimental results, one may restrict the subdivisions to the class of hyperplanes aligned with the main axes. In contrast, there are cases where one would like to consider more advanced forms of subdivision, where the partitions take certain hyper-surface forms or even non-linear forms through kernel Support Vector Machines. Our paradigm of wavelet decompositions can, in principle, support all of these forms.
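To make the variance-minimizing criterion concrete, here is a brute-force Python sketch of an axis-aligned split search for the case of constant approximants ($r = 1$), i.e. minimizing the discrete cost (2.2) with $p = 2$. It is an illustrative sketch only: the function name is ours, the search is quadratic in the number of samples, and practical implementations use incremental statistics.

```python
import numpy as np

def best_axis_aligned_split(X, Y):
    """Exhaustive search for the axis-aligned split minimizing the cost (2.2)
    with p = 2 and constant approximants (r = 1): the total squared deviation
    of Y from the two child means. Y may be vector-valued (e.g. simplex-encoded
    labels). Returns (cost, feature index, threshold)."""
    n, d = X.shape
    best = (np.inf, None, None)
    for j in range(d):
        order = np.argsort(X[:, j])
        xs, ys = X[order, j], Y[order]
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue                      # no valid threshold between equal values
            left, right = ys[:i], ys[i:]
            cost = ((left - left.mean(axis=0)) ** 2).sum() \
                 + ((right - right.mean(axis=0)) ** 2).sum()
            if cost < best[0]:
                best = (cost, j, 0.5 * (xs[i] + xs[i - 1]))
    return best
```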

Random Forest (RF) is a popular machine learning tool that collects decision trees into an ensemble model [5], [4]. The trees are constructed independently in a diverse fashion and prediction is done by a voting mechanism among all trees. A key element [5] is that large diversity between the trees reduces the ensemble’s variance. There are many RF variations that differ in the way randomness is injected into the model, e.g. bagging, random feature subset selection and the partition criterion [11], [28]. Our wavelet decomposition paradigm is applicable to most of the RF versions known from the literature.

Bagging [6] is a method that produces partial replicates of the training data for each tree. A typical approach is to randomly select for each tree a certain percentage of the training set (e.g. 80%) or to randomly select samples with replacement [28].

Additional randomness can be injected at the node partitioning level. For each node, we may restrict the partition criterion to a small random subset of the parameter values (a hyper-parameter). A typical selection is to search for a partition from a random subset of $\sqrt{n}$ features [5]. This technique is also useful for reducing the amount of computation when searching for the appropriate partition of each node. Bagging and random feature selection are not mutually exclusive and could be used together.

For $j = 1, \dots, J$, one creates a decision tree $\mathcal{T}_j$, based on a subset of the data, $X_j \subset X$. One then provides a weight (score) $w_j$ to the tree $\mathcal{T}_j$, based on the estimated performance of the tree, where $\sum_{j=1}^J w_j = 1$. In supervised learning, one typically uses the remaining data points $x \in X \setminus X_j$ to evaluate the performance of $\mathcal{T}_j$. For simplicity, we will mostly consider in this paper the choice of uniform weights $w_j = 1/J$. For any point $x \in \Omega_0$, the approximation associated with the tree $\mathcal{T}_j$, denoted by $\tilde{f}_j(x)$, is computed by finding the leaf $\Omega \in \mathcal{T}_j$ in which $x$ is contained and then evaluating $\tilde{f}_j(x) := Q_{\Omega}(x)$, where $Q_{\Omega}$ is the corresponding polynomial associated with the decision node $\Omega$. One then assigns an approximate value to any point $x \in \Omega_0$ by

$$\tilde{f}(x) := \sum_{j=1}^J w_j \tilde{f}_j(x).$$

Typically, in classification problems, the response variable does not have a numeric value, but is labeled by one of $L$ classes. In this scenario, each input training point $x_i$ is assigned a class label from $\{1, \dots, L\}$. To convert the problem to the ‘functional’ setting described above, one assigns to each class the value of a vertex of the regular simplex consisting of $L$ vertices in $\mathbb{R}^{L-1}$ (all with equal pairwise distances). Thus, we may assume that the input data is in the form $\{(x_i, f(x_i))\}$, with $f(x_i) \in \mathbb{R}^{L-1}$. In this case, if we choose approximation using constants ($r = 1$), then the calculated mean over any subdomain $\Omega$ is in fact a point in $\mathbb{R}^{L-1}$, inside the simplex. Obviously, any value inside the multidimensional simplex can be mapped back to a class, along with an estimated confidence level, by calculating the closest vertex of the simplex to it.
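The following Python sketch illustrates this label encoding and the nearest-vertex decoding. For simplicity it uses the one-hot vertices in $\mathbb{R}^L$, which also form a regular simplex with equal pairwise distances; this embeds the labels in $\mathbb{R}^L$ rather than $\mathbb{R}^{L-1}$, which by the embedding-invariance result of Section 3 (Lemma 3.5) does not affect the smoothness analysis. The helper names and the confidence proxy are ours, not the paper's.

```python
import numpy as np

def simplex_encode(labels, num_classes):
    """Map integer class labels to vertices of a regular simplex.

    We use the standard basis vectors of R^L (one-hot vectors): L points with
    equal pairwise distances, i.e. a regular simplex embedded in R^L.
    """
    return np.eye(num_classes)[np.asarray(labels, dtype=int)]

def simplex_decode(point, num_classes):
    """Map a point inside the simplex (e.g. a local mean) back to the class
    whose vertex is nearest, with a crude confidence proxy (hypothetical)."""
    vertices = np.eye(num_classes)
    dists = np.linalg.norm(vertices - point, axis=1)
    cls = int(np.argmin(dists))
    confidence = 1.0 - dists[cls] / dists.sum()
    return cls, confidence
```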

Following the classic paradigm of nonlinear approximation using wavelets [13], [17], [34] and the geometric function space theory presented in [15], [30], we introduced in [22] a construction of a wavelet decomposition of a forest. Let $\Omega'$ be a child of $\Omega$ in a tree $\mathcal{T}$, i.e. $\Omega' \subset \Omega$ and $\Omega'$ was created by a partition of $\Omega$. Denote by $1_{\Omega'}$ the indicator function over the child domain $\Omega'$, i.e. $1_{\Omega'}(x) = 1$, if $x \in \Omega'$, and $1_{\Omega'}(x) = 0$, if $x \notin \Omega'$. We use the polynomial approximations $Q_{\Omega'}, Q_{\Omega}$ computed by the local minimization (2.1) and define

$$\psi_{\Omega'} := \psi_{\Omega'}(f) := 1_{\Omega'} \left( Q_{\Omega'} - Q_{\Omega} \right) \qquad (2.3)$$

as the geometric wavelet associated with the subdomain $\Omega'$ and the function $f$, or the given discrete dataset. Each wavelet $\psi_{\Omega'}$ is a ‘local difference’ component that belongs to the detail space between two levels in the tree, a ‘low resolution’ level associated with $\Omega$ and a ‘high resolution’ level associated with $\Omega'$. Also, the wavelets (2.3) have the ‘zero moments’ property, i.e., if the response variable is sampled from a polynomial of degree $r-1$ over $\Omega$, then our local scheme will compute $Q_{\Omega'} = Q_{\Omega} = f$, and therefore $\psi_{\Omega'} = 0$.

Under certain mild conditions on the tree $\mathcal{T}$ and the function $f$, we have, by the nature of the wavelets, the ‘telescopic’ sum of differences

$$f = \sum_{\Omega \in \mathcal{T}} \psi_{\Omega}, \qquad (2.4)$$

where for the root domain we use the convention $\psi_{\Omega_0} := 1_{\Omega_0} Q_{\Omega_0}$.

For example, (2.4) holds in the $L_p$-sense, $1 \le p < \infty$, if $f \in L_p(\Omega_0)$ and if for any $x \in \Omega_0$ and any sequence of domains $\Omega_k \in \mathcal{T}$, each on a level $k$, with $x \in \Omega_k$, we have that $\lim_{k \to \infty} \mathrm{diam}(\Omega_k) = 0$.

The norm of a wavelet is computed by

$$\left\| \psi_{\Omega'} \right\|_p^p = \int_{\Omega'} \left| Q_{\Omega'}(x) - Q_{\Omega}(x) \right|^p \, dx.$$

For the case $p = 2$, $r = 1$, where $Q_{\Omega} = C_{\Omega}$ and $Q_{\Omega'} = C_{\Omega'}$ are constants, this simplifies to

$$\left\| \psi_{\Omega'} \right\|_2^2 = \left( C_{\Omega'} - C_{\Omega} \right)^2 \left| \Omega' \right|, \qquad (2.5)$$

where $|\Omega'|$ denotes the volume of $\Omega'$. Observe that for $p = 2$, $r = 1$, the subdivision process for partitioning a node by minimizing (2.1) is equivalent to maximizing the sum of the squared norms of the wavelets that are formed in that partition (see [22]).

Recall that our approach is to convert classification problems into a ‘functional’ setting by assigning the class labels to vertices of a simplex in $\mathbb{R}^{L-1}$. In such cases of multi-valued functions, choosing $r = 1$, the wavelet is

$$\psi_{\Omega'} = 1_{\Omega'} \left( \vec{C}_{\Omega'} - \vec{C}_{\Omega} \right), \qquad \vec{C}_{\Omega}, \vec{C}_{\Omega'} \in \mathbb{R}^{L-1},$$

and its norm is given by

$$\left\| \psi_{\Omega'} \right\|_2^2 = \left\| \vec{C}_{\Omega'} - \vec{C}_{\Omega} \right\|_2^2 \left| \Omega' \right|, \qquad (2.6)$$

where for $\vec{v} \in \mathbb{R}^{L-1}$, $\|\vec{v}\|_2 := \left( \sum_{k=1}^{L-1} v_k^2 \right)^{1/2}$.
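Below is a small Python sketch of how the wavelet norms (2.6) can be computed from a fitted axis-aligned tree, here an sklearn DecisionTreeRegressor trained on simplex-encoded labels. This is our own illustrative code, not the implementation of [22] or [48]: in the discrete setting we use the number of training samples reaching a node as a proxy for its volume $|\Omega'|$, and the root node (which carries the global mean rather than a detail term) is left with norm zero.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def wavelet_norms(tree, X, Y):
    """Norms of the constant (r = 1) geometric wavelets of a single tree, as
    in (2.6): ||psi||^2 = |Omega'| * ||C_Omega' - C_Omega||^2, with the node
    'volume' approximated by its number of training samples."""
    t = tree.tree_
    paths = tree.decision_path(X)                     # sparse (n_samples, n_nodes)
    counts = np.asarray(paths.sum(axis=0)).ravel()    # samples reaching each node
    means = np.zeros((t.node_count, Y.shape[1]))
    for node in range(t.node_count):
        idx = paths[:, node].toarray().ravel().astype(bool)
        means[node] = Y[idx].mean(axis=0)
    norms = np.zeros(t.node_count)                    # root stays 0 by convention
    for node in range(t.node_count):
        for child in (t.children_left[node], t.children_right[node]):
            if child != -1:                           # -1 marks "no child" (leaf)
                norms[child] = np.sqrt(counts[child]) * \
                               np.linalg.norm(means[child] - means[node])
    return norms, means, counts

# Usage sketch: tree = DecisionTreeRegressor(max_depth=12).fit(X, Y_simplex)
#               norms, means, counts = wavelet_norms(tree, X, Y_simplex)
```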

Using any given weights $\{w_j\}_{j=1}^J$ assigned to the trees, we obtain a wavelet representation of the entire RF

$$\tilde{f}(x) = \sum_{j=1}^J w_j \sum_{\Omega \in \mathcal{T}_j} \psi_{\Omega}(x). \qquad (2.7)$$

The theory (see [22]) tells us that sparse approximation is achieved by ordering the wavelet components based on their norm

$$w_{j(k_1)} \left\| \psi_{\Omega_{k_1}} \right\|_2 \ge w_{j(k_2)} \left\| \psi_{\Omega_{k_2}} \right\|_2 \ge w_{j(k_3)} \left\| \psi_{\Omega_{k_3}} \right\|_2 \ge \cdots \qquad (2.8)$$

with the notation that the wavelet $\psi_{\Omega_{k_m}}$ is taken from the tree $\mathcal{T}_{j(k_m)}$. Thus, the adaptive M-term approximation of a RF is

$$f_M(x) := \sum_{m=1}^M w_{j(k_m)} \psi_{\Omega_{k_m}}(x). \qquad (2.9)$$

Observe that, contrary to most existing tree pruning techniques, where each tree is pruned separately, the above approximation process applies a ‘global’ pruning strategy where the significant components can come from any node of any of the trees at any level. For simplicity, one could choose $w_j = 1/J$, $1 \le j \le J$, and obtain

$$f_M(x) = \frac{1}{J} \sum_{m=1}^M \psi_{\Omega_{k_m}}(x). \qquad (2.10)$$

Fig. 1 depicts an M-term approximation (2.10) selected from an RF ensemble. The red colored nodes illustrate the selection of the M wavelets with the highest norm values from the entire forest. Observe that they can be selected from any tree at any level, with no connectivity restrictions.

Fig. 1: Selection of an M-term approximation from the entire forest.
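The global selection illustrated in Fig. 1 amounts to pooling the wavelet norms of all trees and keeping the M largest, wherever they live in the forest. A minimal Python sketch, assuming per-tree norm arrays such as those returned by the hypothetical wavelet_norms helper above:

```python
import numpy as np

def global_m_term_selection(per_tree_norms, M):
    """Forest-wide M-term selection: keep the M wavelets with the largest
    norms across all trees and levels, in contrast to per-tree pruning.
    per_tree_norms: list of 1-D arrays of wavelet norms, one array per tree.
    Returns (tree index, node index) pairs of the selected wavelets."""
    pooled = np.concatenate(per_tree_norms)
    tree_id = np.concatenate([np.full(len(n), j) for j, n in enumerate(per_tree_norms)])
    node_id = np.concatenate([np.arange(len(n)) for n in per_tree_norms])
    top = np.argsort(pooled)[::-1][:M]
    return list(zip(tree_id[top], node_id[top]))
```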

Figure 2 depicts how the parameter M is selected for the challenging “Red Wine Quality” dataset from the UCI repository [45]. The generation of 10 decision trees on the training set creates approximately 3500 wavelets. The parameter M is then selected by minimization of the approximation error on an OOB validation set. In contrast with other pruning methods [32], using (2.8), the wavelet approximation method may select significant components from any tree and any level in the forest. By this method, one does not need to predetermine the maximal depth of the trees, and over-fitting is controlled by the selection of significant wavelet components.

Fig. 2: “Red Wine Quality” dataset - Numeric computation of M for optimal regression.
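To illustrate the selection of M in Figure 2, the sketch below computes the validation-error curve of the M-term approximations (2.10) for a single tree, built on top of the hypothetical wavelet_norms helper sketched earlier; the M minimizing this curve on a held-out or OOB set plays the role of the chosen M. This is our own simplified single-tree variant, not the code of [22].

```python
import numpy as np

def m_term_error_curve(tree, X_val, Y_val, norms, means, max_M):
    """RMSE of the M-term approximations, M = 1..max_M, for one tree with
    constant (r = 1) wavelets. norms/means come from wavelet_norms(tree, X, Y)."""
    t = tree.tree_
    on_path = tree.decision_path(X_val).toarray().astype(bool)   # (n_val, n_nodes)
    parent = np.full(t.node_count, -1)
    for node in range(t.node_count):
        for child in (t.children_left[node], t.children_right[node]):
            if child != -1:
                parent[child] = node
    order = np.argsort(norms)[::-1]              # wavelets by decreasing norm
    pred = np.tile(means[0], (len(X_val), 1))    # start from the root mean
    errors = []
    for node in order[:max_M]:
        if parent[node] != -1:                   # the root carries no detail term
            pred[on_path[:, node]] += means[node] - means[parent[node]]
        errors.append(np.sqrt(((Y_val - pred) ** 2).sum(axis=1).mean()))
    return np.array(errors)                      # argmin + 1 gives the chosen M
```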

3 Geometric multivariate function space theory

An important research area of approximation theory, pioneered by Pencho Petrushev, is the characterization of adaptive geometric approximation algorithms by generalizations of the classic ‘isotropic’ Besov space to more ‘geometric’ Besov-type spaces [12], [15], [30]. We first review the definition and results of [22]. In essence, this is a generalization of a theoretical framework that has been successfully applied in the context of low dimensional and structured signal processing [17], [20]. However, in the context of machine learning, we need to analyze unstructured and possibly high dimensional datasets.

Approximation Theory relates the sparsity of a function to its Besov smoothness index and supports cases where the function is not even continuous. For a function $f \in L_p(\Omega_0)$, $h \in \mathbb{R}^n$ and $r \in \mathbb{N}$, we recall the $r$-th order difference operator

$$\Delta_h^r(f, x) := \sum_{k=0}^{r} (-1)^{r+k} \binom{r}{k} f(x + kh),$$

where we assume the segment $[x, x + rh]$ is contained in $\Omega_0$. Otherwise, we set $\Delta_h^r(f, x) := 0$. The modulus of smoothness of order $r$ is defined by

$$\omega_r(f, t)_p := \sup_{|h| \le t} \left\| \Delta_h^r(f, \cdot) \right\|_{L_p(\Omega_0)}, \qquad t > 0,$$

where for $h \in \mathbb{R}^n$, $|h|$ denotes the Euclidean norm of $h$. We also denote

$$\omega_r(f, \Omega)_p := \omega_r\left(f, \mathrm{diam}(\Omega)\right)_{L_p(\Omega)}. \qquad (3.1)$$

Next, we define the ‘weak-type’ Besov smoothness of a function, subject to the geometry of a single (possibly adaptive) tree.

Definition 3.1

For $0 < p < \infty$ and $\alpha > 0$, we set $\tau$ to be $1/\tau := \alpha + 1/p$. For a given function $f \in L_p(\Omega_0)$, $0 < p < \infty$, and tree $\mathcal{T}$, we define the associated B-space smoothness in $B^{\alpha}_{\tau}(\mathcal{T})$, $\alpha > 0$, by

$$|f|_{B^{\alpha}_{\tau}(\mathcal{T})} := \left( \sum_{\Omega \in \mathcal{T}} \left( |\Omega|^{-\alpha} \, \omega_r(f, \Omega)_{\tau} \right)^{\tau} \right)^{1/\tau}, \qquad (3.2)$$

where $|\Omega|$ denotes the volume of $\Omega$.

This notion of smoothness allows us to handle functions that are not even continuous. The higher the index $\alpha$ for which (3.2) is finite, the smoother the function is. This generalizes the Sobolev smoothness of differentiable functions that have their partial derivatives integrable in some $L_p$ space. Also, the above definition generalizes the classical function space theory of Besov spaces, where the tree partitions are non-adaptive. In fact, classical Besov spaces are a special case, where the tree is constructed by partitioning into dyadic cubes, each time using $n$ levels of the tree. We recall that a ‘well clustered’ function is in fact infinitely smooth in the right adaptively chosen Besov space.

Lemma 3.2

Let $f = \sum_{k=1}^K c_k 1_{\Omega_k}$, where each $\Omega_k \subset \Omega_0$ is a box with sides parallel to the main axes and $c_k \in \mathbb{R}$. We further assume that $\Omega_k \cap \Omega_{k'} = \emptyset$, whenever $k \neq k'$. Then, there exists an adaptive tree partition $\mathcal{T}_f$, such that $|f|_{B^{\alpha}_{\tau}(\mathcal{T}_f)} < \infty$, for any $\alpha > 0$.

Proof:

See [22].

For a given forest $\mathcal{F} = \{\mathcal{T}_j\}_{j=1}^J$ and weights $\{w_j\}$, the $B^{\alpha}_{\tau}$-Besov semi-norm associated with the forest is

$$|f|_{B^{\alpha}_{\tau}(\mathcal{F})} := \sum_{j=1}^J w_j |f|_{B^{\alpha}_{\tau}(\mathcal{T}_j)}. \qquad (3.3)$$

Definition 3.3

Given a (possibly adaptive) forest representation, we define the Besov smoothness index of $f$ as the maximal index $\alpha$ for which (3.3) is finite.

Remark It is known that different geometric approximation schemes are characterized by different flavors of Besov-type smoothness. In this work, for example, all of our experimental results compute smoothness of representations using partitions along the main axes. This restriction may lead, in general, to potentially lower Besov smoothness of the underlying function and lower sparsity of the wavelet representation. Yet, the theoretical definitions and results of this paper can also apply to more generalized schemes where, for example, tree partitions are performed using arbitrary hyper-planes. In such a case, the smoothness index of a given function may increase.

Next, for a given tree $\mathcal{T}$ and parameter $\tau > 0$, we denote the $\tau$-sparsity of the tree by

$$N_{\tau}(f, \mathcal{T}) := \left( \sum_{\Omega \in \mathcal{T}} \left\| \psi_{\Omega} \right\|_p^{\tau} \right)^{1/\tau}. \qquad (3.4)$$

Let us further denote the $\tau$-sparsity of a forest $\mathcal{F} = \{\mathcal{T}_j\}_{j=1}^J$ by

$$N_{\tau}(f, \mathcal{F}) := \sum_{j=1}^J w_j N_{\tau}(f, \mathcal{T}_j).$$

In the setting of a single tree constructed to represent a real-valued function, and under mild conditions on the partitions (see the remark after (2.4) and condition (3.7)), the theory of [15] proves the equivalence

$$N_{\tau}(f, \mathcal{T}) \sim |f|_{B^{\alpha}_{\tau}(\mathcal{T})}. \qquad (3.5)$$

This implies that there are constants $C_1, C_2 > 0$, that depend on parameters such as $\alpha$, $\tau$ and the constant $\rho$ in condition (3.7) below, such that

$$C_1 |f|_{B^{\alpha}_{\tau}(\mathcal{T})} \le N_{\tau}(f, \mathcal{T}) \le C_2 |f|_{B^{\alpha}_{\tau}(\mathcal{T})}.$$

Therefore, we also have for the forest model

$$N_{\tau}(f, \mathcal{F}) \sim |f|_{B^{\alpha}_{\tau}(\mathcal{F})}. \qquad (3.6)$$

In the setting in which we wish to apply our function theoretical approach, we are comparing smoothness of representations over different layers of DL networks. This implies that we are analyzing and comparing the smoothness of a set of functions $\{f_j\}$, each over a different representation space of a different dimension $n_j$. This is, in some sense, non-standard in function space theory, where the space, or at least the dimension, over which the functions have their domain is typically fixed. Specifically, observe that the equivalence (3.6) depends on the dimension of the feature space. To this end, we add to the theory ‘dimension-free’ analysis for the case $p = 2$.

We begin with a Jackson-type estimate for the degree of the adaptive wavelet forest approximation, which we keep ‘dimension free’ for the case $p = 2$.

Theorem 3.4

Let $\mathcal{F} = \{\mathcal{T}_j\}_{j=1}^J$ be a forest. Assume there exists a constant $0 < \rho < 1$, such that for any domain $\Omega$ on a level $l$ and any domain $\Omega'$, on the level $l+1$, with $\Omega' \subset \Omega$, we have

$$|\Omega'| \le \rho \, |\Omega|, \qquad (3.7)$$

where $|\Omega|$ denotes the volume of $\Omega$. For any $\alpha > 0$, denote formally $1/\tau := \alpha + 1/p$, and assume that $N_{\tau}(f, \mathcal{F}) < \infty$, where $N_{\tau}(f, \mathcal{F})$ is the forest $\tau$-sparsity defined above. Then, for the M-term approximation (2.9) we have for $p = 2$

$$\left\| f - f_M \right\|_2 \le C(J) \, M^{-\alpha} \, N_{\tau}(f, \mathcal{F}), \qquad (3.8)$$

with a constant $C(J)$ that does not depend on the dimension $n$, and for general $1 \le p < \infty$

$$\left\| f - f_M \right\|_p \le C(J, n, p, \alpha) \, M^{-\alpha} \, N_{\tau}(f, \mathcal{F}). \qquad (3.9)$$
Proof:

The proof in [22] shows (3.9). To see (3.8), we observe that the dimension comes into play in the Nikolskii-type estimate for bounded convex domains $\Omega \subset \mathbb{R}^n$, polynomials $Q$ of degree $r-1$ and $0 < \tau \le p \le \infty$,

$$\| Q \|_{L_p(\Omega)} \le C(n, r, \tau, p) \, |\Omega|^{1/p - 1/\tau} \, \| Q \|_{L_\tau(\Omega)}.$$

However, for the special case of constants ($r = 1$) this actually simplifies to

$$\| Q \|_{L_p(\Omega)} = |\Omega|^{1/p - 1/\tau} \, \| Q \|_{L_\tau(\Omega)},$$

with no dependence on the dimension. Using the equivalence (3.6), we get for any $f \in B^{\alpha}_{\tau}(\mathcal{F})$

$$\left\| f - f_M \right\|_2 \le C(n, \alpha, \rho, J) \, M^{-\alpha} \, |f|_{B^{\alpha}_{\tau}(\mathcal{F})},$$

which is not a ‘dimension-free’ Jackson estimate, as the one we will show below in (3.11). Next, we present a simple invariance property of the smoothness analysis under higher dimension embedding.

Lemma 3.5

Let $x_i \in [0,1]^n$, $1 \le i \le N$, with values $f(x_i) \in \mathbb{R}^{L-1}$. Let $\mathcal{F}$ be a forest approximation of the data. For any $n' > n$, let $\tilde{x}_i \in [0,1]^{n'}$ be defined by $\tilde{x}_i := (x_i, 0, \dots, 0)$, $1 \le i \le N$. Let us further define $\tilde{f}(\tilde{x}_i) := f(x_i)$. Next, denote by $\tilde{\mathcal{F}}$ a forest defined over $[0,1]^{n'}$ which is the natural extension of $\mathcal{F}$, using the same trees with the same partitions over the first $n$ dimensions. Then, for $p = 2$ and any $\tau > 0$, $N_{\tau}(\tilde{f}, \tilde{\mathcal{F}}) = N_{\tau}(f, \mathcal{F})$.

Proof:

Let $\{\Omega'\}$ be the domains of the trees of $\mathcal{F}$, with wavelets of the type

$$\psi_{\Omega'} = 1_{\Omega'} \left( \vec{C}_{\Omega'} - \vec{C}_{\Omega} \right).$$

Recall that $N_{\tau}(f, \mathcal{F})$ is the (weighted) $\tau$-norm of the sequence of the wavelet norms given by (2.6). Now, for each domain $\Omega'$ and the corresponding domain $\tilde{\Omega}' = \Omega' \times [0,1]^{n'-n}$, the normalization of the feature space into $[0,1]^{n'}$ and the higher dimensional embedding ensure that

$$|\tilde{\Omega}'| = |\Omega'| \cdot 1^{n'-n} = |\Omega'|.$$

Since the vector means remain unchanged under the higher dimensional embedding, we have

$$\left\| \psi_{\tilde{\Omega}'} \right\|_2^2 = \left\| \vec{C}_{\tilde{\Omega}'} - \vec{C}_{\tilde{\Omega}} \right\|_2^2 |\tilde{\Omega}'| = \left\| \vec{C}_{\Omega'} - \vec{C}_{\Omega} \right\|_2^2 |\Omega'| = \left\| \psi_{\Omega'} \right\|_2^2.$$

This gives $N_{\tau}(\tilde{f}, \tilde{\mathcal{F}}) = N_{\tau}(f, \mathcal{F})$.

Next, to allow our smoothness analysis to be ‘dimension free’, we modify the modulus of smoothness (3.1) for the case $p = 2$, $r = 1$, and use the following form of ‘averaged modulus’.

Definition 3.6

For a function $f \in L_2(\Omega)$ we define

$$\tilde{\omega}_1(f, \Omega)_2 := \left( \int_{\Omega} \left| f(x) - c_{\Omega} \right|^2 dx \right)^{1/2}, \qquad (3.10)$$

where $c_{\Omega} := \frac{1}{|\Omega|} \int_{\Omega} f(x) \, dx$ is the average of $f$ over $\Omega$.

It is well known that averaged forms of the modulus are equivalent to the form (3.1), but with constants that depend on the dimension. However, replacing (3.1) with (3.10) allows us to produce ‘dimension-free’ analysis. We use (3.10) to define the corresponding ‘averaged’ B-space semi-norm $|f|_{\tilde{B}^{\alpha}_{\tau}(\mathcal{T})}$, by replacing the modulus in (3.2) with its averaged form.

We can now show

Theorem 3.7

Let $f \in L_2(\Omega_0)$ and let $\mathcal{T}$ be a tree satisfying condition (3.7). Then the following equivalence holds for the case $p = 2$, $r = 1$,

$$|f|_{\tilde{B}^{\alpha}_{\tau}(\mathcal{T})} \sim N_{\tau}(f, \mathcal{T}),$$

where $1/\tau = \alpha + 1/2$, and the constants of equivalence depend on $\alpha$ and $\rho$, but not on the dimension $n$.

Proof:

See the Appendix

This equivalence together with (3.8) imply that for $p = 2$ we do have a ‘dimension-free’ Jackson estimate

$$\left\| f - f_M \right\|_2 \le C \, M^{-\alpha} \, |f|_{\tilde{B}^{\alpha}_{\tau}(\mathcal{F})}, \qquad (3.11)$$

with $C$ independent of the dimension $n$.

4 Smoothness analysis of the representation layers in deep learning networks

We now explain how the theory presented in Section 3 is used to estimate the ‘weak-type’ smoothness of a given function in a given representation layer. Recall from the introduction that we create a representation of the images at layer $j = 0$ by concatenating the rows of pixel values of each grayscale image, to create a vector of dimension $N^2$ (or $3N^2$ for a color image). We also normalize the pixel values to the range $[0,1]$. We then transform the class labels into vector-values in the space $\mathbb{R}^{L-1}$ by assigning each label to a vertex of a standard simplex (see Section 2). Thus, the images are considered as samples of a function $f_0 : [0,1]^{N^2} \to \mathbb{R}^{L-1}$. In the same manner, we associate with each $j$-th layer of a DL network, a function $f_j : [0,1]^{n_j} \to \mathbb{R}^{L-1}$, where $n_j$ is the number of features/neurons at the $j$-th layer. The samples of $f_j$ are obtained by applying the network on the original images up to the given $j$-th layer. For example, in a convolution layer, we capture the representations after the cycle of convolution, non-linearity and pooling. We then extract vectors created by normalizing and concatenating the feature map values corresponding to the images. Recall that although the functions $\{f_j\}$ are embedded in different dimensions $n_j$, through the simple normalizing of the features, our method is able to assign smoothness indices to each layer that are comparable.

Next we describe how we estimate the smoothness of each function $f_j$. To this end, we have made several improvements and simplifications to the method of [22]. We compute a RF over the samples of $f_j$ with the choice $r = 1$ and then apply the wavelet decomposition of the RF (see Section 2). For each $f_j$ and $M \ge 1$ one computes the discrete error of the wavelet M-term approximation for the case $p = 2$

$$\varepsilon_M := \left( \sum_{i} \left\| f_j(x_i) - f_M(x_i) \right\|_2^2 \right)^{1/2}. \qquad (4.1)$$

We then use the theoretical estimate (3.11) and the numeric estimates of $\varepsilon_M$ in (4.1) to model the error function by $\varepsilon(M) \approx C \cdot M^{-\alpha}$ for unknown $C, \alpha > 0$. Notice that the constant $C$ absorbs the terms relating to the absolute constant, the number of trees in the RF model, as well as the Besov-norm. Numerically, one simply models $\log \varepsilon_M \approx \log C - \alpha \log M$, $M = 1, \dots, M_0$, and then solves through least squares for $\alpha$ and $C$. Finally, we set $\alpha$ as our estimate for the ‘critical’ Besov smoothness index of $f_j$.
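A minimal Python sketch of this numeric fit, assuming the M-term errors (4.1) have already been computed (for example with the single-tree error-curve sketch of Section 2); the helper name is ours.

```python
import numpy as np

def estimate_besov_alpha(errors, m_values=None):
    """Fit the model eps_M ~ C * M^(-alpha) to a sequence of M-term errors by
    least squares in log-log scale: log(eps_M) = log(C) - alpha * log(M).
    Returns the estimated critical smoothness index alpha and the constant C."""
    errors = np.asarray(errors, dtype=np.float64)
    if m_values is None:
        m_values = np.arange(1, len(errors) + 1)
    log_m = np.log(m_values)
    log_e = np.log(np.maximum(errors, np.finfo(float).tiny))   # guard against zeros
    A = np.column_stack([np.ones_like(log_m), -log_m])
    (log_c, alpha), *_ = np.linalg.lstsq(A, log_e, rcond=None)
    return alpha, np.exp(log_c)
```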

Remarks:

(i) Observe that for the fit of $\alpha$ and $C$, we only use the first $M_0$ significant terms, so as to avoid fitting the ‘noisy’ tail of the exponential expression. In some cases, we allow ourselves to select $M_0$ adaptively, by discarding a tail of wavelet components that is over-fitting the training data, but increasing the error on validation set samples (see Figure 2). However, in cases where the goal is to demonstrate understanding of generalization, we restrict the analysis to only using the training set and then pre-select $M_0$ (as we do in the experiments we review below).

(ii) Notice that since each representation space can be of very different dimension, it is crucial that the method is invariant under different dimension embedding.

(iii) We note that this approach to computing the geometric Besov smoothness of a labeled dataset is a significant generalization of the method used in [20] to compute the (classical) Besov smoothness of a single image. Nevertheless, the underlying function space approach is distinctly similar.

5 Applications and Experimental Results

In all of the experiments we used TensorFlow network models. We extracted the representation of any data sample (e.g. image) in any layer of a network by simply running the TensorFlow ‘Session’ object with the given layer and the data sample as the parameters.

The computation of the Besov smoothness index in a given feature space is implemented as explained in Section 4. We used an updated version of the code of [22], which is available via the link in [48]. For the hyper-parameter $M_0$ that determines the number of M-term errors (4.1) used to model the $\alpha$-smoothness, we used a fixed pre-selected value. The code was executed on the Amazon Web Services cloud, on r3.8xlarge configurations that have 32 virtual CPUs and 244 GB of memory. We note that computing the smoothness of all the representations of a certain dataset of images over all layers requires significant computation. One needs to create a RF approximation of the representation at each layer, sort the wavelet components based on their norms and compute the M-term errors (4.1), $M = 1, \dots, M_0$, before the numeric fit of the smoothness index can be computed. Thus, in our experiments, we computed and used for the smoothness fit only the errors $\varepsilon_M$, $M \le M_0$, to speed up the computation.
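For completeness, this is roughly how a layer representation can be pulled out of a trained TensorFlow 1.x graph and flattened before the RF/wavelet analysis; the tensor and placeholder names are placeholders of ours, not identifiers from the models used in the paper.

```python
import numpy as np

def extract_layer_representation(sess, layer_tensor, input_placeholder, images,
                                 batch_size=256):
    """Run the TensorFlow Session on a given layer tensor, batch by batch, and
    return the flattened per-sample feature vectors of that layer.
    `sess` is a tf.Session of the trained graph; `layer_tensor` and
    `input_placeholder` are tensors taken from that graph (assumed names)."""
    feats = []
    for i in range(0, len(images), batch_size):
        batch = images[i:i + batch_size]
        feats.append(sess.run(layer_tensor, feed_dict={input_placeholder: batch}))
    feats = np.concatenate(feats, axis=0)
    return feats.reshape(len(feats), -1)   # flatten feature maps per sample
```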

5.1 Smoothness analysis across deep learning layers

We now present results for estimates of smoothness analysis in layer representations for some datasets and trained networks. We begin with the audio dataset “Urban Sound Classification” from [46]. We applied our smoothness analysis on representations of the “Urban8K” audio data at the layers of the DeepListen model [19], which achieves the accuracy reported in [19]. The network is a simple feed-forward network of 4 fully connected layers with ReLU non-linearities. As described in Section 4, we created a functional representation of the data at each layer and estimated the Besov smoothness index at each layer. In Figure 3 we see how the clustering is ‘unfolded’ by the network, as the Besov $\alpha$-index increases from layer to layer.

Fig. 3: Smoothness analysis of the layer representations of “Urban8K” using the DeepListen [19] fully-connected architecture

Next we present results on image datasets. We trained the network [44] on the CIFAR10 image dataset [8]. As described in [44], the images were cropped to size $24 \times 24$. The network has 2 convolution layers (with 9216 and 2304 features, respectively) and 2 fully connected layers (with 384 and 192 features, respectively), with an additional soft-max layer (with final layer ‘logits’ of 10 classes). The training set size is 50,000 and the testing set size is 10,000. As expected [44], the trained network achieves the documented accuracy on the testing data. In Figure 4 we see a clear indication of how the smoothness begins to evolve during the training after 20 epochs, and the ‘unfolding’ of the clustering improves from layer to layer. We also see that the smoothness improves after 50 epochs, correlating with the improvement of the accuracy.

Fig. 4: Smoothness analysis of DL layers representations of CIFAR10

We now describe our experiments with the well-known MNIST dataset of 60,000 training and 10,000 testing images [36]. The DL network configuration we used is the ‘textbook’ version of [37], which is composed of two convolution layers and two fully connected layers. Training the model for 100 epochs produces a model with high accuracy on the training data and a clear monotone increase of Besov smoothness across layers, as shown in Figure 5.

Fig. 5: Smoothness analysis of DL layers representations of MNIST

5.2 Smoothness analysis of mis-labeled datasets

Following [49], we applied random mis-labeling to the MNIST and CIFAR10 image sets at various levels. We randomly picked subsets of relative size $q$ of the dataset, with $q = 0.1, 0.2, 0.3, 0.4$ (see Table I), and then for each image in this subset we picked a random label. We then trained the network of [37] on the mis-labeled MNIST datasets and the network of [44] on the mis-labeled CIFAR10 sets. We emphasize that the goal of this experiment is to understand generalization [49] and to automatically detect the level of corruption solely from the smoothness analysis of the training data. Recall from [49] that a network can converge relatively quickly to an over-fit even on highly mis-labeled training sets. Thus, convergence is not a good indication of the generalization capabilities, and specifically of the level of mis-labeling in the training data.
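A short Python sketch of the corruption procedure, under our reading of the setup: a random subset of the requested relative size receives uniformly random labels (which may occasionally coincide with the true label). The helper name is hypothetical.

```python
import numpy as np

def mislabel(labels, fraction, num_classes, seed=0):
    """Return a copy of `labels` where a random subset of relative size
    `fraction` is replaced by uniformly random class labels."""
    rng = np.random.RandomState(seed)
    corrupted = np.array(labels, copy=True)
    n = len(corrupted)
    subset = rng.choice(n, size=int(round(fraction * n)), replace=False)
    # A uniformly random label may coincide with the original one.
    corrupted[subset] = rng.randint(0, num_classes, size=len(subset))
    return corrupted
```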

Next, we created a wavelet decomposition of a RF on the representation of the training set at the last inner layer of the network. Typically, this is the fully connected layer right before the softmax. In Figure 6 we see the decay of the precision error of the adaptive wavelet approximations (2.10) as we add more wavelet terms. It is clear that datasets with less mis-labeling have more ‘sparsity’, i.e., are better approximated with fewer wavelet terms. We also measured for each mis-labeled training dataset the Besov $\alpha$-smoothness at the last inner layer. The results are presented in Table I. We see a strong correlation between the amount of mis-labeling and the smoothness.

Fig. 6: Precision error decay with adaptive wavelet approximation on mis-labeled MNIST
Mis-labeling 0% 10% 20% 30% 40%
MNIST smoothness 0.28 0.106 0.084 0.052 0.03
CIFAR10 smoothness 0.204 0.072 0.053 0.051 0.003
TABLE I: Smoothness analysis of mis-labeled image datasets

6 Conclusion

In this paper we presented a theoretical approach to the analysis of the performance of the hidden layers in DL architectures. We plan to continue the experimental analysis with deeper architectures and larger datasets (see e.g. [43]) and hope to demonstrate that our approach is applicable to a wide variety of machine learning and deep learning architectures. As some advanced DL architectures have millions of features in their hidden layers, we will need to overcome the problem of estimating representation smoothness in such high dimensions. Furthermore, in some of our experiments we noticed interesting phenomena within sub-components of the layers (e.g. the different operations of convolution, non-linearity and pooling). We hope to reach some understanding and share some insights regarding these aspects too.

Appendix: Proof of Theorem 3.7

Obviously, it is sufficient to prove the equivalence for a single tree $\mathcal{T}$. Observe that condition (3.7) also implies that for any $\Omega'$, with parent $\Omega$, we also have $|\Omega'| \ge (1 - \rho) |\Omega|$. We use this, as well as (2.5), to prove the first direction of the equivalence.

We now prove the other direction. We assume the first case (the other case is similar). For any domain in the tree we have

(A.1)

by the following estimates. Also, observe that by condition (3.7), for any domain in the tree,

(A.2)

We apply (A.1) and (A.2) to conclude.

Acknowledgments

The authors would like to thank Vadym Boikov, WIX AI and Kobi Gurkan, Tel-Aviv University, for their help with running the experiments. This research was carried out with the generous support of the Amazon AWS Research Program.

References

  • [1] Alani D., Averbuch A. and Dekel S., Image coding using geometric wavelets, IEEE transactions on image processing 16:69-77, 2007.
  • [2] Alpaydin E., Introduction to machine learning, MIT Press, 2004.
  • [3] Bengio Y., Courville A. and Vincent P., Representation Learning: A Review and New Perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8):1798-1828, 2013.
  • [4] Biau G. and Scornet E., A random forest guided tour, TEST 25(2):197-227, 2016.
  • [5] Breiman L., Random forests, Machine Learning 45:5-32, 2001.
  • [6] Breiman L., Bagging predictors, Machine Learning 24(2):123-140, 1996.
  • [7] Breiman L, Friedman J., Stone C. and Olshen R., Classification and Regression Trees, Chapman and Hall/CRC, 1984.
  • [8] CIFAR10 image dataset
    https://www.cs.toronto.edu/~kriz/cifar.html.
  • [9] Chen H., Tino P. and Yao X., Predictive Ensemble Pruning by Expectation Propagation, IEEE journal of knowledge and data engineering 21:999-1013, 2009.
  • [10] Chollet F., The limitations of deep learning, https://blog.keras.io/the-limitations-of-deep-learning.html, 2017.
  • [11] Criminisi A., Shotton J. and Konukoglu E., Forests for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning, Microsoft Research technical report TR-2011-114, 2011.
  • [12] Dahmen W., Dekel S. and Petrushev P., Two-level-split decomposition of anisotropic Besov spaces, Constructive approximation 31:149-194, 2001.
  • [13] Daubechies I., Ten lectures on wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics, 1992.
  • [14] Dekel S., Elisha O. and Morgan O., Wavelet decomposition of Gradient Boosting, submitted.
  • [15] Dekel S. and Leviatan D., Adaptive multivariate approximation using binary space partitions and geometric wavelets, SIAM Journal on Numerical Analysis 43:707-732, 2005.
  • [16] Denil M., Matheson D. and De Freitas N., Narrowing the gap: Random forests in theory and in practice, In Proceedings of the 31st International Conference on Machine Learning 32, 2014.
  • [17] DeVore R., Nonlinear approximation, Acta Numerica 7:51-150, 1998.
  • [18] DeVore R. and Lorentz G., Constructive approximation, Springer Science and Business, 1993.
  • [19] DeepListen
    https://github.com/jaron/deep-listening
  • [20] DeVore R., Jawerth B. and Lucier B., Image compression through wavelet transform coding, IEEE transactions on information theory 38(2):719-746, 1992.
  • [21] Du W. and Zhan Z., Building decision tree classifier on private data, In Proceedings of the IEEE international conference on Privacy, security and data mining 14:1-8, 2002.
  • [22] Elisha O. and Dekel S., Wavelet decompositions of Random Forests - smoothness analysis,sparse approximation and applications, Journal of machine learning research 17: 1-38, 2016.
  • [23] Feng N., Wang J. and Saligrama V., Feature-Budgeted Random Forest, In Proceedings of The 32nd International Conference on Machine Learning, 1983-1991, 2015.
  • [24] Kelley P. and Barry R., Sparse spatial autoregressions, Statistics and Probability Letters 33(3):291-297, 1997.
  • [25] Genuer R., Poggi J. and Christine T., Variable selection using Random Forests, Pattern Recognition Letters 31(14): 2225-2236, 2010.
  • [26] Geurts P. and Gilles L., Learning to rank with extremely randomized trees, In JMLR: Workshop and Conference Proceedings 14:49-61, 2011.
  • [27] Guyon I. and Elisseff A., An introduction to variable and feature selection, Journal of Machine Learning Research 3:1157-1182, 2003.
  • [28] Hastie T., Tibshirani R. and Friedman J., The elements of statistical learning, Springer, 2009.
  • [29] Joly A., Schnitzler F.,Geurts P. and Wehenkel L., L1-based compression of random forest models, In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 375-380, 2012.
  • [30] Karaivanov B. and Petrushev P., Nonlinear piecewise polynomial approximation beyond Besov spaces, Applied and computational harmonic analysis 15:177-223, 2003.
  • [31] Kulkarni V. and Sinha P., Pruning of Random Forest classifiers: A survey and future directions, In International Conference on Data Science and Engineering, 64-68, 2012.
  • [32] Loh W., Classification and regression trees, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 1(1):14-23, 2011.
  • [33] Louppe G., Wehenkel L., Sutera A. and Geurts P., Understanding variable importances in forests of randomized trees, Advances in Neural Information Processing Systems 26:431-439, 2013.
  • [34] Mallat S., A Wavelet tour of signal processing, 3rd edition (the sparse way), Academic Press, 2009.
  • [35] Martinez-Muñoz G., Hernández-Lobato D. and Suarez A., An analysis of ensemble pruning techniques based on ordered aggregation, IEEE Transactions on pattern analysis and machine intelligence 31:245-259, 2009.
  • [36] MNIST data set,
    http://yann.lecun.com/exdb/mnist/
  • [37] TensorFlow Tutorial: Deep MNIST for experts,
    https://www.tensorflow.org/get_started/mnist/pros
  • [38] Papyan V., Romano Y. and Elad M., Convolutional Neural Networks Analyzed via Convolutional Sparse Coding, submitted, 2016.
  • [39] Raileanu L. and Stoffel K., Theoretical comparison between the Gini index and information gain criteria, Annals of Mathematics and Artificial Intelligence 41(1):77-93, 2004.
  • [40] Salamon J. and Bello J. P., Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification, IEEE Signal Processing Letters 24(3):279-283, 2017.
  • [41] Salembier P. and Garrido L., Binary partition tree as an efficient representation for image processing, segmentation, and information retrieval, IEEE transactions on image processing 9:561-576, 2000.
  • [42] Shwartz-Ziv R. and Tishby N., Opening the black box of Deep Neural Networks via Information, preprint.
  • [43] Sun C., Shrivastava A., Singh S. and Gupta A., Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, preprint.
  • [44] AlexNet-style CNN implementation using TensorFlow,
    https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10.py
  • [45] UCI machine learning repository, http://archive.ics.uci.edu/ml/.
  • [46] Urban Sound Classification dataset,
    https://serv.cusp.nyu.edu/projects/urbansounddataset
  • [47] Urban Sound Classification CNN implementation
    https://github.com/jaron/deep-listening/blob/master/4-us8k-cnn-salamon.ipynb
  • [48] Wavelet-based Random Forest source code, https://github.com/orenelis/WaveletsForest.git.
  • [49] Zhang C., Bengio S., Hardt M., Recht B. and Vinyals O., Understanding deep learning requires rethinking generalization, In ICLR 2017 conference proceedings.