One of the most significant recent developments in machine learning has been the resurgence of “deep learning”, usually in the form of artificial neural networks. These systems are based on a multi-layered architecture, where the input goes through several transformations, with higher-level concepts derived from lower-level ones. Thus, these systems are considered to be particularly suitable for hard AI tasks, such as computer vision and language processing.
The history of such multi-layered systems is long and uneven. They were extensively studied in the 1980s and early 1990s, but with mixed success, and were eventually displaced to a large extent by shallow architectures such as the Support Vector Machine (SVM) and boosting algorithms. These shallow architectures not only worked well in practice, but also came with provably correct and computationally efficient training algorithms, requiring tuning of only a small number of parameters - thus allowing them to be incorporated into standard software packages.
However, in recent years, a combination of algorithmic advancements, as well as increasing computational power and data size, has led to a breakthrough in the effectiveness of neural networks, and deep learning systems have shown very impressive practical performance on a variety of domains (a few examples include [15, 12, 19, 5, 7, 16, 14] and references therein). This has led to a resurgence of interest in such learning systems.
Nevertheless, a major caveat of deep learning is - and always has been - its strong reliance on heuristic methods. Despite decades of research, there is no clear-cut guidance on how one should choose the architecture and size of the network, or the type of computations it performs. Even when these are chosen, training these networks involves non-convex optimization problems, which are often quite difficult. No worst-case guarantees are possible, and training such networks successfully remains something of a black art, requiring specialized expertise and much manual work.
In this note, we propose an efficient algorithm to build and train a deep network for supervised learning, with some formal guarantees. The algorithm has the following properties:
It constructs a deep architecture, one which relies on its multi-layered structure in order to compactly represent complex predictors.
It provably runs in polynomial time, and is amenable to theoretical analysis and study. Moreover, the algorithm does not rely on complicated heuristics, and is easy to implement.
The algorithm is a universal learner, in the sense that the training error is guaranteed to decrease as the network increases in size, ultimately reaching zero under mild conditions.
In its basic idealized form, the algorithm is parameter-free. The network is grown incrementally, where each added layer decreases the bias while increasing the variance. The process can be stopped once satisfactory performance is obtained. The architectural details of the network are automatically determined by theory. We describe a more efficient variant of the algorithm, which requires specifying the maximal width of the network in advance. Optionally, one can do additional fine-tuning (as we describe later on), but our experimental results indicate that even this rough tuning is already sufficient to get promising results.
The algorithm we present trains a particular type of deep learning system, where each computational node computes a linear or quadratic function of its inputs. Thus, the predictors we learn are polynomial functions over the input space (which we take here to be $\mathbb{R}^d$). The networks we learn are also related to sum-product networks, which have been introduced in the context of efficient representations of partition functions [18, 8].
The derivation of our algorithm is inspired by ideas from the VCA algorithm (discussed below), used there for a different purpose. At its core, our method attempts to build a network which provides a good approximate basis for the values attained by all polynomials of bounded degree over the training instances. Similar to a well-known principle in modern deep learning, the layers of our network are built one-by-one, creating higher-and-higher level representations of the data. Once such a representation is built, a final output layer is constructed by solving a simple convex optimization problem.
The rest of the paper is structured as follows. In Sec. 2, we introduce notation. The heart of our paper is Sec. 3, where we present our algorithm and analyze its properties. In Sec. 4, we discuss sample complexity (generalization) issues. In Sec. 5 we compare our deep architecture for learning polynomials to the shallow architecture obtained by kernel learning. In Sec. 6, we present preliminary experimental results.
We use bold-face letters to denote vectors. In particular, $\mathbf{1}$ denotes the all-ones vector. For any two vectors $\mathbf{v}, \mathbf{w}$, we let $\mathbf{v} \odot \mathbf{w}$ denote their Hadamard (entry-wise) product, namely the vector $(v_1 w_1, \ldots, v_d w_d)$. $\|\cdot\|$ refers to the Euclidean norm, and $\mathbb{1}[\cdot]$ refers to the indicator function.
For two matrices $A, B$ with the same number of rows, we let $[A, B]$ denote the new matrix formed by concatenating the columns of $A$ and $B$. For a matrix $M$, $M_{i,j}$ refers to the entry in row $i$ and column $j$; $M_j$ refers to its $j$-th column; and we use a corresponding notation for the number of columns of $M$.
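As a tiny illustration of the notation above (the Hadamard product and column-wise concatenation), here is a sketch using NumPy conventions; the specific values are of course arbitrary:

```python
import numpy as np

# Hadamard (entry-wise) product of two vectors.
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
assert np.array_equal(v * w, np.array([4.0, 10.0, 18.0]))

# Column-wise concatenation of two matrices with the same number of rows.
A = np.ones((3, 2))
B = np.zeros((3, 1))
C = np.hstack([A, B])
assert C.shape == (3, 3)
```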
We assume we are given labeled training data $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_m, y_m)$, where each $\mathbf{x}_i$ is in $\mathbb{R}^d$, and $y_i$ is a scalar label/target value. We let $X$ denote the $m \times d$ matrix such that $X_{i,j} = x_{i,j}$, and $\mathbf{y}$ the vector $(y_1, \ldots, y_m)$. For simplicity of presentation, we will assume the instances are bounded in norm, but note that for most results this can be easily relaxed.
Given a vector $\mathbf{p}$ of predicted values on the training set (or a matrix in a multi-class prediction setting), we use $\ell(\mathbf{p})$ to denote the training error, which is assumed to be a convex function of $\mathbf{p}$. Some examples include:
Multiclass hinge loss: $\frac{1}{m}\sum_{i=1}^{m} \max_{j \neq y_i}\,[1 + p_{i,j} - p_{i,y_i}]_+$ (here, $p_{i,j}$ is the confidence score for instance $i$ being in class $j$)
Moreover, in the context of linear predictors, we can consider regularized loss functions, where we augment the loss by a regularization term such as $\lambda \|\mathbf{w}\|^2$ (where $\mathbf{w}$ is the linear predictor) for some parameter $\lambda > 0$.
Multivariate polynomials are functions over $\mathbb{R}^d$, of the form
$f(\mathbf{x}) = \sum_{\alpha} c_\alpha \prod_{j=1}^{d} x_j^{\alpha_j},$
where $\alpha$ ranges over all $d$-dimensional vectors of non-negative integers, such that $\sum_j \alpha_j \le r$, and $r$ is the degree of the polynomial. Each term $\prod_{j} x_j^{\alpha_j}$ is a monomial of degree $\sum_j \alpha_j$.
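To make the size of this representation concrete, the number of monomials of degree at most $r$ in $d$ variables is $\binom{d+r}{r}$, which grows very quickly. A quick sanity check (the parameter values here are a hypothetical choice, not from the paper):

```python
from math import comb

# Number of monomials of degree at most r in d variables: C(d + r, r).
# Even for moderate d and r the count is huge, which is why explicit
# monomial expansions quickly become infeasible.
d, r = 100, 5
assert comb(d + r, r) == 96560646   # ~10^8 monomials already at degree 5
```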
To represent our network, we let $n_{i,j}$ refer to the $j$-th node in the $i$-th layer, viewed as a function of its inputs. In our algorithm, the function each node computes is always either a linear function $\mathbf{x} \mapsto \mathbf{w}^\top \mathbf{x} + b$, or a weighted product of two inputs, $(u, v) \mapsto w\,uv$, where $w \in \mathbb{R}$. The depth of the network corresponds to the number of layers, and the width corresponds to the largest number of nodes in any single layer.
3 The Basis Learner: Algorithm and Analysis
We now turn to develop our Basis Learner algorithm, as well as the accompanying analysis. We do this in three stages: First, we derive a generic and idealized version of our algorithm, which runs in polynomial time but is not very practical; Second, we analyze its properties in terms of time complexity, training error, etc.; Third, we discuss and analyze a more realistic variant of our algorithm, which also enjoys some theoretical guarantees, generalizes better, and is more flexible in practice.
3.1 Generic Algorithm
Recall that our goal is to learn polynomial predictors, using a deep architecture, based on a training set with instances $\mathbf{x}_1, \ldots, \mathbf{x}_m$. However, let us ignore for now the learning aspect and focus on a representation problem: how can we build a network capable of representing the values of any polynomial over the instances?
At first glance, this may seem like a tall order, since the space of all polynomials is not specified by any bounded number of parameters. However, our first crucial observation is that we care (for now) only about the values on the training instances. We can represent these values as $m$-dimensional vectors in $\mathbb{R}^m$. Moreover, we can identify each polynomial $p$ with its values on the training instances, via the linear projection $p \mapsto (p(\mathbf{x}_1), \ldots, p(\mathbf{x}_m))$.
Since the space of all polynomials can attain any set of values on a finite set of distinct points $\mathbf{x}_1, \ldots, \mathbf{x}_m$, we get that polynomials span $\mathbb{R}^m$ via this linear projection. By a standard result from linear algebra, this immediately implies that there are $m$ polynomials $p_1, \ldots, p_m$, such that the vectors $(p_j(\mathbf{x}_1), \ldots, p_j(\mathbf{x}_m))$ form a basis of $\mathbb{R}^m$ - we can write any set of values as a linear combination of these. Formally, we get the following:
Suppose $\mathbf{x}_1, \ldots, \mathbf{x}_m$ are distinct. Then there exist $m$ polynomials $p_1, \ldots, p_m$, such that the vectors $(p_j(\mathbf{x}_1), \ldots, p_j(\mathbf{x}_m))$, for $j = 1, \ldots, m$, form a basis of $\mathbb{R}^m$.
Hence, for any set of values $y_1, \ldots, y_m$, there is a coefficient vector $\mathbf{w}$, so that $\sum_{j=1}^{m} w_j\, p_j(\mathbf{x}_i) = y_i$ for all $i$.
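A small illustration of this lemma (an assumed one-dimensional toy example, not from the paper): on $m$ distinct points, the evaluation vectors of the monomials $1, x, \ldots, x^{m-1}$ form a Vandermonde matrix, which is invertible, so any target values can be matched exactly:

```python
import numpy as np

# m = 4 distinct points; the Vandermonde matrix of their monomial values
# is full rank, so the monomials' evaluation vectors form a basis of R^m.
x = np.array([0.5, -1.0, 2.0, 3.5])
m = len(x)
V = np.vander(x, N=m, increasing=True)       # columns: x^0, x^1, x^2, x^3
assert np.linalg.matrix_rank(V) == m

# Hence any target values can be matched exactly by some polynomial:
y = np.array([1.0, -2.0, 0.0, 5.0])
coeffs = np.linalg.solve(V, y)
assert np.allclose(V @ coeffs, y)
```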
This lemma implies that if we build a network which computes such polynomials $p_1, \ldots, p_m$, then we can train a simple linear classifier on top of these outputs, which can attain any target values over the training data.
While it is nice to be able to express any target values as a function of the input instances, such an expressive machine will likely lead to overfitting. Our generic algorithm builds a deep network such that the nodes of the first $i$ layers form a basis of all values attained by degree-$i$ polynomials. Therefore, we start with a simple network, which might have a large bias but will tend not to overfit (i.e. low variance), and as we make the network deeper and deeper we gradually decrease the bias while increasing the variance. Thus, in principle, this algorithm traces a natural curve of solutions that can be used to control the bias-variance tradeoff.
It remains to describe how we build such a network. First, we show how to construct a basis which spans all values attained by degree-1 polynomials (i.e. linear functions). We then show how to enlarge this to a basis of all values attained by degree-2 polynomials, and so on. Each such enlargement of the degree corresponds to another layer in our network. Later, we will prove that each step can be calculated in polynomial time and that the whole process terminates after a polynomial number of iterations.
3.1.1 Constructing the First Layer
The set of values attained by degree-1 (linear) polynomial functions over the data is
$\{(\mathbf{w}^\top \mathbf{x}_1 + b, \ldots, \mathbf{w}^\top \mathbf{x}_m + b) : \mathbf{w} \in \mathbb{R}^d,\ b \in \mathbb{R}\},$   (2)
which is a $(d+1)$-dimensional linear subspace of $\mathbb{R}^m$ (when the instances are in general position). Thus, to construct a basis for it, we only need to find $d+1$ vectors in this subspace which are linearly independent. This can be done in many ways. For example, one can construct an orthogonal basis for Eq. (2), using Gram-Schmidt or SVD (equivalently, finding a matrix $W$, so that $[\mathbf{1}, X]W$ has orthogonal columns). (Footnote 1: This is essentially the same as the first step of the VCA algorithm. Moreover, it is very similar to performing Principal Component Analysis (PCA) on the data, which is often a standard first step in learning. It differs from PCA in that the SVD is done on the augmented matrix $[\mathbf{1}, X]$, rather than on a centered version of $X$. This is significant here, since the columns of a centered data matrix cannot express the all-ones vector, hence we cannot express the constant polynomial on the data.) At this stage, our focus is to present our approach in full generality, so we avoid fixing a specific basis-construction method.
Whatever basis-construction method we use, we end up with some linear transformation (specified by a matrix $W$), which maps the augmented data into the constructed basis. The columns of $W$ specify the linear functions forming the first layer of our network: for all $j$, the $j$'th node of the first layer is the function $\mathbf{x} \mapsto W_{1,j} + \sum_{i=1}^{d} W_{i+1,j}\, x_i$, and we have the property that the values these nodes attain over the training instances form a basis for all values attained by degree-1 polynomials over the training data. We let $F$ denote the matrix (Footnote 2: If the data lies in a proper subspace of $\mathbb{R}^d$, then the number of columns of $F$ will be the dimension of this subspace plus one.) whose columns are the vectors of this basis, namely $F = [\mathbf{1}, X]W$.
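One possible realization of this first-layer construction, sketched here under the assumption that SVD is used (the paper deliberately leaves the basis-construction method open; the variable names are illustrative):

```python
import numpy as np

# Orthogonalize the augmented data matrix [1 | X] via a thin SVD, so the
# resulting columns span all values attained by degree-1 polynomials on
# the training instances.
rng = np.random.default_rng(0)
m, d = 20, 3
X = rng.standard_normal((m, d))

A = np.hstack([np.ones((m, 1)), X])          # augmented matrix [1 | X]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = int(np.sum(s > 1e-10))                   # numerical rank (d+1 in general position)
F1 = U[:, :r]                                # first-layer outputs on the data

assert r == d + 1
assert np.allclose(F1.T @ F1, np.eye(r))     # orthonormal basis columns
```

The transformation mapping a new input into the first layer is then recovered from the right singular vectors, so the layer can be evaluated at test time as well.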
3.1.2 Constructing The Second Layer
So far, we have a one-layer network whose outputs span all values attained by linear functions on the training instances. In principle, we can use the same trick to find a basis for higher-degree polynomials: for any degree $r$, consider the space of all values attained by degree-$r$ polynomials over the training data, and find a spanning basis. However, we quickly run into a computational problem, since the number of degree-$r$ monomials in $d$ variables increases exponentially with $r$, requiring us to consider exponentially many vectors. Instead, we utilize our deep architecture to find a compact representation of the required basis, using the following simple but important observation:
Any degree-$r$ polynomial $p$ can be written as
$p(\mathbf{x}) = \sum_i g_i(\mathbf{x})\, h_i(\mathbf{x}) + q(\mathbf{x}),$
where the $g_i$ are degree-1 polynomials, the $h_i$ are degree-$(r-1)$ polynomials, and $q$ is a polynomial of degree at most $r-1$.
Any polynomial of degree $r$ can be written as a weighted sum of monomials of degree $r$, plus a polynomial of degree at most $r-1$. Moreover, any monomial of degree $r$ can be written as a product of a monomial of degree 1 and a monomial of degree $r-1$. Since degree-$(r-1)$ monomials are in particular degree-$(r-1)$ polynomials, the result follows. ∎
The lemma implies that any degree-2 polynomial can be written as a sum of products of degree-1 polynomials, plus a degree-1 polynomial. Since the nodes at the first layer of our network span all degree-1 polynomials, they in particular span the polynomials $g_i$, $h_i$ and $q$ above, so it follows that any degree-2 polynomial can be written as
$\sum_{i,j} a_{i,j}\, f_i(\mathbf{x}) f_j(\mathbf{x}) + \sum_i b_i\, f_i(\mathbf{x}),$
where the $f_i$ are the first-layer node functions and all the $a$'s and $b$'s are scalars. In other words, the vector of values attainable by any degree-2 polynomial is in the span of the vectors of values attained by nodes in the first layer, and by products of the outputs of every two nodes in the first layer.
Let us now switch back to an algebraic representation. Recall that in constructing the first layer, we formed a matrix $F$, whose columns span all values attainable by degree-1 polynomials. Then the above implies that the matrix $[F, M]$, where $M$ is the matrix whose columns are all entry-wise products $F_i \odot F_j$ of pairs of columns of $F$,
spans all possible values attainable by degree-2 polynomials. Thus, to get a basis for the values attained by degree-2 polynomials, it is enough to find some column subset $M'$ of $M$, so that $[F, M']$'s columns are a linearly independent basis for $[F, M]$'s columns. Again, this basis construction can be done in several ways, using standard linear algebra (such as a Gram-Schmidt procedure or more stable alternative methods). The columns of $M'$ (which are a subset of the columns of $M$) specify the 2nd layer of our network: each such column, which corresponds to (say) $F_i \odot F_j$, corresponds in turn to a node in the 2nd layer, which computes the product of nodes $i$ and $j$ in the first layer. We now redefine $F$ to be the augmented matrix $[F, M']$.
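A minimal sketch of this second-layer construction - forming candidate product columns and keeping the linearly independent ones. The helper name `extend_basis` and the rank-based independence test are illustrative choices, not the paper's pseudo-code:

```python
import numpy as np

def extend_basis(F, tol=1e-10):
    """Greedily keep Hadamard products of pairs of columns of F that are
    linearly independent of the basis built so far."""
    m, n = F.shape
    candidates = [F[:, i] * F[:, j] for i in range(n) for j in range(i, n)]
    basis = F.copy()
    kept = []
    for k, c in enumerate(candidates):
        trial = np.hstack([basis, c[:, None]])
        if np.linalg.matrix_rank(trial, tol=tol) > basis.shape[1]:
            basis = trial
            kept.append(k)
    return basis, kept

rng = np.random.default_rng(1)
# First layer: orthonormal basis spanning degree-1 values (m=8, d=2).
F1 = np.linalg.qr(np.hstack([np.ones((8, 1)),
                             rng.standard_normal((8, 2))]))[0]
F2, kept = extend_basis(F1)
assert F2.shape[1] > F1.shape[1]             # new independent product columns
assert np.linalg.matrix_rank(F2) == F2.shape[1]
```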
3.1.3 Constructing Layer 3,4,…
It is only left to repeat this process. At each iteration $t$, we maintain a matrix $F$, whose columns form a basis for the values attained by all polynomials of degree at most $t-1$. We then consider the new matrix $[F, M]$, where $M$ consists of the entry-wise products of each first-layer column with each column added in the previous iteration,
and find a column subset $M'$ so that the columns of $[F, M']$ form a basis for the columns of $[F, M]$. We then redefine $F := [F, M']$, and are assured that the columns of $F$ span the values of all polynomials of degree at most $t$ over the data. By adding this newly constructed layer, we get a network whose outputs form a basis for the values attained by all polynomials of degree at most $t$ over the training instances.
To maintain numerical stability, it may be desirable to multiply each column of $M'$ by a normalization factor, e.g. by scaling each column so that its second moment across the column is 1 (otherwise, the iterated products may make the values in the matrix very large or small). Overall, we can specify the transformation computed by the new layer via a small matrix, recording which pairs of nodes are multiplied and with what scaling factor.
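A small sketch of the column normalization suggested above (a straightforward choice; `normalize_columns` is an illustrative helper, not the paper's pseudo-code): rescale each column so its second moment, i.e. the mean of its squared entries, equals 1.

```python
import numpy as np

def normalize_columns(F, eps=1e-12):
    # Scale each column so that mean(column**2) == 1, guarding against
    # all-zero columns; this keeps iterated products well-scaled.
    scale = np.sqrt(np.mean(F ** 2, axis=0))
    return F / np.maximum(scale, eps)

rng = np.random.default_rng(5)
F = 1e-3 * rng.standard_normal((50, 4))      # columns with tiny scale
G = normalize_columns(F)
assert np.allclose(np.mean(G ** 2, axis=0), 1.0)
```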
As we will prove later on, if at any stage the subspace spanned by $[F, M]$ is the same as the subspace spanned by $F$, then our network can span the values of all polynomials of any degree over the training data, and we can stop the process.
The process (up to the creation of the output layer) is described in Figure 1, and the resulting network architecture is shown in Figure 2. We note that the resulting network has a feedforward architecture. The connections, though, are not only between adjacent layers, unlike many common deep learning architectures. Moreover, although we refrain from fixing the basis creation methods at this stage, we provide one possible implementation in Figure 3. We emphasize, though, that other variants are possible, and the basis construction method can be different at different layers.
3.1.4 Constructing the Output Layer
After $t$ iterations (for some $t$), we end up with a matrix $F$, whose columns form a basis for all values attained by polynomials of degree at most $t$ over the training data. Moreover, each column records exactly the values attained by some node in our network over the training instances. On top of this network, we can now train a simple linear predictor $\mathbf{w}$, minimizing some convex loss function $\ell(F\mathbf{w})$. This can be done using any convex optimization procedure. We are assured that for any polynomial of degree at most $t$, there is some such linear predictor which attains the same values as this polynomial over the data. This linear predictor forms the output layer of our network.
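A sketch of the output-layer step, using the squared loss for illustration (the algorithm allows any convex loss; the data here is synthetic): given the basis matrix of node outputs on the training set and the targets, fit linear output weights by ordinary least squares.

```python
import numpy as np

# F: node outputs on the training set (one column per node); y: targets.
rng = np.random.default_rng(2)
m, k = 30, 5
F = rng.standard_normal((m, k))
y = rng.standard_normal(m)

# Convex problem; for squared loss this is ordinary least squares.
w, *_ = np.linalg.lstsq(F, y, rcond=None)
residual = F @ w - y

# Optimality condition: the residual is orthogonal to the span of F.
assert np.allclose(F.T @ residual, 0, atol=1e-8)
```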
As mentioned earlier, our approach is inspired by the VCA algorithm, which is an incremental method to efficiently build a basis for polynomial functions. In particular, we use the same basic ideas in order to ensure that after $t$ iterations, the resulting basis spans all polynomials of degree at most $t$. While we owe a lot to these ideas, we should also emphasize the differences: First, the emphasis there is on finding a set of generators for the ideal of polynomials vanishing on the training set. Second, the goal there has nothing to do with deep learning, and the result of the algorithm is a basis rather than a deep network. Third, the basis is built there in a different way than ours (forcing orthogonality of the basis components), which does not seem as effective in our context (see the end of Sec. 6). Fourth, the practical variant of our algorithm, which is described further on, is very different from those methods.
Before continuing with the analysis, we make several important remarks:
Remark 1 (Number of layers does not need to be fixed in advance).
Each iteration of the algorithm corresponds to another layer in the network. However, note that we do not need to specify the number of iterations. Instead, we can simply create the layers one-by-one, each time attempting to construct an output layer on top of the existing nodes. We then check the performance of the resulting network on a validation set, and stop once we reach satisfactory performance. See Figure 2 for details.
Remark 2 (Flexibility of loss function).
Compared to our algorithm, many standard deep learning algorithms are more constrained in terms of the loss function, especially those that directly attempt to minimize training error. Since these algorithms solve hard, non-convex problems, it is important that the loss be as “nice” and smooth as possible, and they often focus on the squared loss (for example, the famous backpropagation algorithm is tailored for this loss). In contrast, our algorithm can easily work with any convex loss.
Remark 3 (Choice of Architecture).
In the intermediate layers, we proposed constructing a basis for the columns of $[F, M]$ by using the columns of $F$ and a column subset of $M$. However, this is not the only way to construct a basis. For example, one can try to find a full linear transformation $W$ so that the columns of $[F, M]W$ form an orthogonal basis for the column space of $[F, M]$. However, our approach combines two important advantages. On one hand, it creates a network with few connections, where most nodes depend on the outputs of only two other nodes. This makes the network very fast at test-time, as well as better-generalizing in theory and in practice (see Sec. 4 and Sec. 6 for more details). On the other hand, it is still sufficiently expressive to compactly represent high-dimensional polynomials, in a product-of-sums form, whose expansion as an explicit sum of monomials would be prohibitively large. In particular, our network can compute functions of the form $\prod_i (\mathbf{w}_i^\top \mathbf{x} + b_i)$, which involve exponentially many monomials. The ability to compactly represent complex concepts is a major principle in deep learning. This is also why we chose to use a linear transformation in the first layer - if all non-output layers just computed the product of two outputs from the previous layers, then the resulting predictor would be limited to computing polynomials with a small number of monomials.
Remark 4 (Connection to Algebraic Geometry).
Our algorithm has some deep connections to algebraic geometry and interpolation theory. In particular, the problem of finding a basis for polynomial functions on a given set has been well studied in these areas for many years. However, most methods we are aware of - such as construction of Newton basis polynomials or multivariate extensions of standard polynomial interpolation methods - are not computationally efficient, i.e. polynomial in the dimension $d$ and the polynomial degree $r$. This is because they are based on explicit handling of monomials, of which there are $\binom{d+r}{r}$. Efficient algorithms have been proposed for related problems, such as the Buchberger-Möller algorithm for finding a set of generators for the ideal of polynomials vanishing on a given set (see [10, 1, 17] and references therein). In a sense, our deep architecture is “orthogonal” to this approach, since we focus on constructing a basis for polynomials that do not vanish on the set of points. This enables us to find an efficient, compact representation, using a deep architecture, for attaining arbitrary values over a training set.
3.2 Analysis
After describing our generic algorithm and its derivation, we now turn to prove its formal properties. In particular, we show that its runtime is polynomial in the training set size $m$ and the dimension $d$, and that it can drive the training error all the way to zero. In the next section, we discuss how to make the algorithm more practical from a computational and statistical point of view.
Given a training set $\{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where the $\mathbf{x}_i$ are distinct points in $\mathbb{R}^d$, suppose we run the algorithm in Figure 1, constructing a network of total depth $t$. Then:
The matrices maintained by the algorithm have at most $m$ linearly independent columns each.
The algorithm terminates after at most $m$ iterations of the For loop.
Assuming (for simplicity) that $d \le m$, the algorithm can be implemented using memory and time polynomial in $m$, plus the polynomial time required to solve the convex optimization problem when computing the output layer.
The network constructed by the algorithm has at most $m$ layers, width at most $m$, and total number of nodes at most $m$ (excluding the output layer). The total number of arithmetic operations (sums and products) performed to compute an output is polynomial in $m$ and $d$.
At the end of iteration $t$, $F$'s columns span all values attainable by polynomials of degree at most $t$ on the training instances.
The training error of the network created by the algorithm is monotonically decreasing in the depth $t$. Moreover, if there exists some vector of prediction values $\mathbf{p}$ such that $\ell(\mathbf{p}) = 0$, then after at most $m$ iterations, the training error will be 0.
In item 6, we note that the assumption on $\ell$ is merely to simplify the presentation. A more precise statement would be that we can get the training error arbitrarily close to its minimal possible value - see the proof for details.
The theorem is mostly an easy corollary of the derivation.
As to item 1, since we maintain the $m$-dimensional columns of $F$ (and each $M'$) to be linearly independent, there cannot be more than $m$ of them. The remaining bounds follow by construction (as we orthogonalize a matrix with at most $m$ columns) and by definition.
As to item 2, the algorithm always augments $F$ by at least one new column, and breaks whenever no new linearly independent column is found. Since $F$ can have at most $m$ columns, it follows that the algorithm cannot run for more than $m$ iterations. The algorithm also terminates after at most $t$ iterations, by definition.
As to item 3, the memory bound follows from the bounds on the sizes of $F$ and $M$, and the associated sizes of the constructed network. Note that $M$ can require substantial memory, but we don't need to store it explicitly - any entry in $M$ is specified as a product of two entries in previously computed matrices, which can be found and computed on-the-fly in constant time. As to the time bound, each iteration of our algorithm involves computations polynomial in $m$, with the dominant factors being the basis-construction subroutines. The time bounds follow from the implementations proposed in Figure 3, using the upper bounds on the sizes of the relevant matrices, and the assumption that $d \le m$.
As to item 4, it follows from the fact that in each iteration, we create a layer with at most $m$ new nodes, and there are at most $m$ iterations/layers excluding the input and output layers. Moreover, each node in our network (except the output node) corresponds to a column in $F$, so there are at most $m$ such nodes plus the output nodes. Finally, the network computes a linear transformation in the first layer, then each product node performs one product, and a final output node computes a weighted linear combination of the outputs of all other nodes - so the total number of operations is polynomial in $m$ and $d$.
As to item 5, it follows immediately from the derivation presented earlier.
Finally, we need to show item 6. Recall from the derivation that in the output layer, we use the linear weights $\mathbf{w}$ which minimize $\ell(F\mathbf{w})$. If we increase the depth of our constructed network, we augment $F$ by more and more linearly independent columns, the initial columns remaining exactly the same. Thus, the set of attainable prediction vectors $\{F\mathbf{w}\}$ only increases, and the training error can only go down.
If we run the algorithm until $F$ has $m$ columns, then the columns of $F$ span $\mathbb{R}^m$, since they are linearly independent. Hence, we can always find $\mathbf{w}$ such that $F\mathbf{w} = \mathbf{p}$, where $\ell(\mathbf{p}) = 0$, so the training error is zero. The only case left to treat is if the algorithm stops while $F$ has fewer than $m$ columns. However, we claim this can't happen. It can only happen if $M'$ is empty after the basis construction process, namely if $F$'s columns already span the columns of $[F, M]$. However, this would imply that we can span the values of all degree-$t$ polynomials on the training instances using polynomials of degree $t-1$. But using Lemma 2, it would then imply that we could write the values of every degree-$(t+1)$ polynomial as a linear combination of values of polynomials of degree $t-1$. Repeating this, we get that the values of polynomials of any degree are all spanned by the values of polynomials of degree $t-1$, i.e. by the columns of $F$. However, the values of all polynomials of any degree over $m$ distinct points must span $\mathbb{R}^m$, so $F$ must have $m$ columns - a contradiction. ∎
An immediate corollary of this result is the following:
Remark 5 (The Basis Learner is a Universal Learner).
Our algorithm is a universal algorithm, in the sense that as we run it for more and more iterations, the training error provably decreases, eventually hitting zero. Thus, we can get a curve of solutions, trading-off between the training error on one hand and the size of the resulting network (as well as the potential of overfitting) on the other hand.
3.3 Making the Algorithm Practical
While the algorithm we presented runs in provable polynomial time, it has some important limitations. In particular, while we can always control the depth of the network by early stopping, we do not control its width (i.e. the number of nodes created in each layer). In the worst case, it can be as large as the number of training instances $m$. This has two drawbacks:
The algorithm can only be used for small datasets - when $m$ is large, we might get huge networks, and running the algorithm will be computationally prohibitive, involving manipulations of matrices with up to $m$ columns.
Even ignoring computational constraints, the huge network which might be created is likely to overfit.
To tackle this, we propose a simple modification of our scheme, where the network width is explicitly constrained at each iteration. Recall that the width of a layer constructed at iteration $t$ is equal to the number of columns in $M'$. Till now, $M'$ was chosen such that the columns of $[F, M']$ span the column space of $[F, M]$. So if $m$ is large, $M'$ might be large as well, resulting in a wide layer with many new nodes. However, we can give up on exactly spanning the column space of $[F, M]$, and instead seek to “approximately span” it, using a smaller partial basis of bounded size $k$, resulting in a layer of width at most $k$.
The next natural question is how to choose this partial basis. There are several possible criteria, both supervised and unsupervised. We will focus on the following choice, which we found to be quite effective in practice:
The first layer computes a linear transformation which maps the augmented data matrix $[\mathbf{1}, X]$ onto its $k$ leading singular vectors (this is closely akin - although not identical - to Principal Component Analysis (PCA) - see Footnote 1).
The next layers use a standard Orthogonal Least Squares procedure to greedily pick the columns of $M$ which seem most relevant for prediction. The intuition is that we wish to quickly decrease the training error, using a small number of new nodes and in a computationally cheap way. Specifically, for binary classification and regression, we consider the vector $\mathbf{y}$ of training labels/target values, and iteratively pick the column of $M$ whose residual (after projecting on the existing basis) is most correlated with the residual of $\mathbf{y}$ (again, after projecting on the existing basis). The column is then added to the existing basis, and the process repeats itself. A simple extension of this idea can be applied to the multiclass case. Finally, to speed up the computation, we can process the columns of $M$ in mini-batches, where each time we find and add the $b$ most correlated vectors (for some batch size $b$) before iterating.
These procedures are implemented via the basis-construction subroutines, whereas the main algorithm (Figure 1) remains unaffected. A precise pseudo-code appears in Figure 4. We note that in a practical implementation of the pseudo-code, we do not need to explicitly compute the potentially large matrix $M$ - we can simply compute each column and its associated correlation score one-by-one, and use the list of scores to pick and re-generate the most correlated columns.
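A hedged sketch of the greedy Orthogonal-Least-Squares-style selection described above (the helper name `ols_select` and the synthetic data are illustrative, not the paper's Figure 4): repeatedly pick the candidate column whose residual, after projecting out the current basis, is most correlated with the residual of the targets.

```python
import numpy as np

def ols_select(C, y, F, k):
    """Greedily pick k columns of candidate matrix C to extend basis F."""
    F = F.copy()
    chosen = []
    for _ in range(k):
        P = F @ np.linalg.pinv(F)            # projector onto span(F)
        Ry = y - P @ y                        # residual of the targets
        RC = C - P @ C                        # residuals of the candidates
        norms = np.linalg.norm(RC, axis=0)
        norms[norms < 1e-12] = np.inf         # skip columns already spanned
        scores = np.abs(RC.T @ Ry) / norms    # normalized correlation
        j = int(np.argmax(scores))
        chosen.append(j)
        F = np.hstack([F, C[:, [j]]])
    return F, chosen

rng = np.random.default_rng(3)
m = 100
C = rng.standard_normal((m, 10))
y = 2.0 * C[:, 4] + C[:, 7]                  # targets built from columns 4 and 7
F0 = np.ones((m, 1))                         # start from the constant column
F, chosen = ols_select(C, y, F0, k=2)
assert set(chosen) == {4, 7}                 # greedy OLS recovers the true columns
```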
We now turn to discuss the theoretical properties of this width-constrained variant of our algorithm. Recall that in its idealized version, the Basis Learner is guaranteed to eventually decrease the training error to zero in all cases. However, with a width constraint, there are adversarial cases where the algorithm will get “stuck” and terminate before the training error gets to zero. This may happen when, at some iteration, all the columns of the candidate matrix $M$ are spanned by the current basis $F$, so no new linearly independent vectors can be added, the new layer will be empty, and the algorithm will terminate (see Figure 1). However, we have never witnessed this happen in any of our experiments, and we can prove that it indeed cannot happen as long as the input instances are in “general position” (which we shortly formalize). Thus, we get a completely analogous result to Thm. 1, for the more practical variant of the Basis Learner.
Intuitively, the general position condition we require implies that if we take any two columns in the current basis, then their entry-wise product is linearly independent from the columns of the current basis. This is intuitively plausible, since the entry-wise product is a highly non-linear operation, so in general there is no reason that it will happen to lie exactly in the subspace spanned by the existing columns. More formally, we use the following:
Let $\mathbf{x}_1, \ldots, \mathbf{x}_m$ be a set of distinct points in $\mathbb{R}^d$. We say that $\mathbf{x}_1, \ldots, \mathbf{x}_m$ are in M-general position if for every $m$ distinct monomials $p_1, \ldots, p_m$, the matrix $A$ defined as $A_{i,j} = p_j(\mathbf{x}_i)$ has rank $m$.
Given a training set $\{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, where the $\mathbf{x}_i$ are distinct points in $\mathbb{R}^d$, suppose we run the algorithm in Figure 1, with the subroutines implemented in Figure 4, using a uniform value $k$ for the width and batch size $b$, constructing a network of depth $t$. Then:
, , .
Assume (for simplicity) that , and the case of regression or classification with a constant number of classes. Then the algorithm can be implemented using at most memory and time, plus the polynomial time required to solve the convex optimization problem when computing the output layer, and the SVD in CreateBasis (see remark below).
The network constructed by the algorithm has at most layers, with at most nodes in each layer. The total number of nodes is at most . The total number of arithmetic operations (sums and products) performed to compute an output is .
The training error of the network created by the algorithm is monotonically decreasing in .
If the rows of the matrix returned by the width-limited variant of BuildBasis are in M-general position, is unconstrained, and there exists some vector of prediction values such that , then after at most iterations, the training error will be .
The proof of the theorem, except part 5, is a simple adaptation of the proof of Thm. 1, using our construction and the remarks we made earlier. So, it is only left to prove part 5. The algorithm will terminate before driving the error to zero if at some iteration we have that the columns of are spanned by and . But, by construction, this implies that there are monomials such that if we apply them to the rows of , we obtain linearly dependent vectors. This contradicts the assumption that the rows of are in M-general position, which concludes our proof. ∎
We note that in item 2, the SVD mentioned is over an matrix, which requires time to perform exactly. However, one can use randomized approximate SVD procedures (e.g. ) to perform the computation in time. While not exact, these approximate methods are known to perform very well in practice, and in our experiments we observed no significant degradation by using them in lieu of exact SVD. Overall, for fixed , this allows our Basis Learner algorithm to construct the network in time linear in the data size.
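A basic randomized range-finder of this kind (in the spirit of Halko, Martinsson and Tropp) can be sketched as follows; the oversampling and power-iteration parameters are illustrative choices.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Approximate rank-k SVD via a randomized range finder, costing
    roughly O(mnk) instead of the cubic cost of an exact SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # sample the range of A with a random Gaussian test matrix
    Q = A @ rng.standard_normal((n, k + oversample))
    # a few power iterations sharpen the separation of the spectrum
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(A @ (A.T @ Q))
    Q, _ = np.linalg.qr(Q)
    # project onto the small subspace and do an exact SVD there
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]
```

When the matrix has rapidly decaying spectrum (as is typical here), the approximation error is negligible in practice, matching the experimental observation in the text.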
Overall, compared to Thm. 1, we see that our more practical variant significantly lowers the memory and time requirements (assuming are small compared to ), and we still have the property that the training error decreases monotonically with the network depth, and reduces to zero under mild conditions that are likely to hold on natural datasets.
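To make the prediction cost concrete, a forward pass through such a network might look as follows. This is only a sketch: the sparse wiring (each intermediate node multiplying one node of the previous layer with one node of the first layer) is one plausible reading, and all names are hypothetical, not taken from the paper's pseudo-code.

```python
import numpy as np

def forward(x, W, pairs_per_layer, w_out):
    """Evaluate a polynomial network on one input x.

    W:               (k, d+1) matrix of the first (affine) layer;
                     each of the k nodes is a linear function of x plus a bias.
    pairs_per_layer: for each intermediate layer, a list of (i, j) pairs;
                     node (i, j) multiplies node i of the previous layer
                     with node j of the first layer (sparse connectivity:
                     each node touches only two other nodes).
    w_out:           weights of the final linear output layer, one weight
                     per node collected over all layers.
    """
    first = W @ np.append(x, 1.0)          # first layer: affine functions of x
    layers = [first]
    for pairs in pairs_per_layer:
        prev = layers[-1]
        layers.append(np.array([prev[i] * first[j] for i, j in pairs]))
    all_nodes = np.concatenate(layers)     # the output layer sees every node
    return w_out @ all_nodes
```

Each intermediate node costs a single product, so the total number of arithmetic operations scales with the number of nodes plus the size of the first and output layers, as in the theorem.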
Before continuing, we again emphasize that our approach is quite generic, and that the criteria we presented in this section, to pick a partial basis at each iteration, are by no means the only ones possible. For example, one can use other greedy selection procedures to pick the best columns in BuildBasis, as well as unsupervised methods. Similarly, one can use supervised methods to construct the first layer. Also, the width of different layers may differ. However, our goal here is not to propose the most sophisticated and best-performing method, but rather to demonstrate that our approach, even with very simple regularization and greedy construction methods, can enjoy good theoretical guarantees and work well experimentally. Of course, much work remains in trying out other methods.
4 Sample Complexity
So far, we have focused on how the network we build reduces the training error. However, in a learning context, what we are actually interested in is getting good generalization error, namely good prediction in expectation over the distribution from which our training data was sampled.
We can view our algorithm as a procedure which, given training data, picks a network of width and depth . When we use this network for binary classification (e.g. by taking the sign of the output to be the predicted label), a relevant measure of generalization performance is the VC-dimension of the class of such networks. Luckily, the VC-dimension of neural networks is a well-studied topic. In particular, by Theorem 8.4 in , we know that any binary function class in Euclidean space, which is parameterized by at most parameters and whose functions can each be specified using at most addition, multiplication, and comparison operations, has VC dimension at most . Our network can be specified in this manner, using at most operations and parameters (see Thm. 2). This immediately implies a VC-dimension bound, which ensures generalization if the training data size is sufficiently large compared to the network size. We note that this bound is very generic and rather coarse - we suspect it can be substantially improved in our case. However, qualitatively speaking, it tells us that reducing the number of parameters in our network reduces overfitting. This principle is reflected in our network architecture, where each node in the intermediate layers is connected to just other nodes, rather than (say) all nodes in the previous layer.
As an interesting comparison, note that our network essentially computes a -degree polynomial, yet the VC dimension of all -degree polynomials in is , which grows very rapidly with and . This shows that our algorithm can indeed generalize better than directly learning high-degree polynomials, which is essentially intractable both statistically and computationally.
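For concreteness, the parameter count of a generic degree-d polynomial in n variables is the number of monomials of degree at most d, which is a binomial coefficient; the small helper below is ours, not from the paper.

```python
from math import comb

def num_monomials(n, d):
    """Number of monomials of degree at most d in n variables:
    C(n + d, d).  This is the parameter count (and hence the order of
    the VC dimension) of learning generic degree-d polynomials directly."""
    return comb(n + d, d)
```

For example, with n = 784 inputs (a 28x28 image) and degree d = 4, the count already exceeds 10^10 parameters, illustrating why learning such polynomials directly is hopeless while a compact network representing a subset of them is not.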
It is also possible to prove bounds on scale-sensitive measures of generalization (which are relevant if we care about the prediction values rather than just their sign, e.g. for regression). For example, it is well-known that the expected squared loss can be related to the empirical squared loss over the training data, given a bound on the fat-shattering dimension of the class of functions we are learning . Combining Theorems 11.13 and 14.1 from , it is known that for a class of networks such as those we are learning, the fat-shattering dimension is upper-bounded by the VC dimension of a slightly larger class of networks, which have an additional real input and an additional output node computing a linear threshold function in . Such a class of networks has a similar VC dimension to our original class, hence we can effectively bound the fat-shattering dimension as well.
5 Relation to Kernel Learning
Kernel learning (see e.g. ) has enjoyed immense popularity over the past 15 years, as an efficient and principled way to learn complex, non-linear predictors. A kernel predictor is of the form , where are the training instances, and is a kernel function, which efficiently computes an inner product in a high or infinite-dimensional Hilbert space, to which data is mapped implicitly via the feature mapping . In this section, we discuss some of the interesting relationships between our work and kernel learning.
In kernel learning, a common kernel choice is the polynomial kernel, . It is easy to see that predictors defined via the polynomial kernel correspond to polynomial functions of degree . Moreover, if the Gram matrix (defined as ) is full-rank, any values on the training data can be realized by a kernel predictor: for a desired vector of values , simply find the coefficient vector such that , and note that this implies that for any , . Thus, when our algorithm is run to completion, our polynomial network can represent the same predictor class as kernel predictors with a polynomial kernel. However, there are some important differences, which can make our system potentially better:
With polynomial kernels, one always has to manipulate an matrix, which requires memory and runtime scaling at least quadratically in . This can be very expensive if is large, and hinders the application of kernel learning to large-scale data. This quadratic dependence on is also true at test time, where we need to explicitly use our training examples for prediction. In contrast, the size of our network can be controlled, and the memory and runtime requirements of our algorithm are only linear in (see Thm. 2). If we get good results with a moderately-sized network, we can train and predict much faster than with kernels. In other words, we get the potential expressiveness of polynomial kernel predictors, but with the ability to control the training and prediction complexity, potentially requiring much less time and memory.
With kernels, one has to specify the degree of the polynomial kernel in advance before training. In contrast, in our network, the degree of the resulting polynomial predictor does not have to be specified in advance - each iteration of our algorithm increases the effective degree, and we stop when satisfactory performance is obtained.
Learning with polynomial kernels corresponds to learning a linear combination over the set of polynomials . In contrast, our network learns (in the output layer) a linear combination of a different set of polynomials, which is constructed in a different, data-dependent way. Thus, our algorithm uses a different and incomparable hypothesis class compared to polynomial kernel learning.
Learning with polynomial kernels can be viewed as a network of a shallow architecture as follows: Each node in the first layer corresponds to one support vector and applies the function . Then, the second layer is a linear combination of the outputs of the first layer. In contrast, we learn a deeper architecture. Some empirical evidence shows that deeper architectures may express complicated functions more compactly than shallow architectures [4, 8].
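The kernel-interpolation argument from the beginning of this section can be verified numerically. The sketch below uses arbitrary illustrative data and a degree-3 polynomial kernel.

```python
import numpy as np

def poly_kernel(X1, X2, degree=3):
    """Polynomial kernel k(x, x') = (1 + <x, x'>)^degree."""
    return (1.0 + X1 @ X2.T) ** degree

# arbitrary training points and arbitrary target values
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))        # 20 training points in R^5
y = rng.standard_normal(20)             # any desired prediction values

# if the Gram matrix K is full rank, solving K alpha = y gives a
# kernel predictor x -> sum_i alpha_i k(x_i, x) that interpolates y
K = poly_kernel(X, X)
alpha = np.linalg.solve(K, y)
pred = poly_kernel(X, X) @ alpha        # predictions on the training set
```

For generic points, K is indeed full rank (here the degree-3 feature space has dimension 56, well above the 20 training points), so the predictor realizes the targets exactly.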
6 Experiments
In this section, we present some preliminary experimental results to demonstrate the feasibility of our approach. The focus here is not to show superiority to existing learning approaches, but rather to illustrate how our approach can match their performance on some benchmarks, using just a couple of parameters and with no manual tuning.
To study our approach, we used the benchmarks and protocol described in  (these datasets and experimental details are publicly available at http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepVsShallowComparisonICML2007#Downloadable_datasets). These benchmark datasets were designed to test deep learning systems, and require highly non-linear predictors. They consist of datasets, where each instance is a -dimensional vector, representing normalized intensity values of a pixel image. These datasets are as follows:
MNIST-basic: The well-known MNIST digit recognition dataset (http://yann.lecun.com/exdb/mnist), where the goal is to identify handwritten digits in the image.
MNIST-rotated: Same as MNIST-basic, but with the digits randomly rotated.
MNIST-back-image: Same as MNIST-basic, but with patches taken from unrelated real-world images in the background.
MNIST-back-random: Same as MNIST-basic, but with random pixel noise in the background.
MNIST-rotated+back-image: Same as MNIST-back-image, but with the digits randomly rotated.
Rectangles: Given an image of a rectangle, determine whether its height is larger than its width.
Rectangles-images: Same as Rectangles, but with patches taken from unrelated real-world images in the background.
Convex: Given images of various shapes, determine whether they are convex or not.
All datasets consist of 12,000 training instances and 50,000 test instances, except for the Rectangles dataset (1,200/50,000 train/test instances) and the Convex dataset (8,000/50,000 train/test instances). We refer the reader to  for more precise details on the construction used.
In , for each dataset and algorithm, the last 2,000 examples of the training set were split off and used as a validation set for parameter tuning (except Rectangles, where it was the last 200 examples). The algorithm was then trained on the entire training set using those parameters, and classification error on the test set was reported.
The algorithms used in  involved several deep learning systems: two deep belief net algorithms (DBN-1 and DBN-3), a stacked autoencoder algorithm (SAA-3), and a standard single-hidden-layer, feed-forward neural network (NNet). Experiments were also run on Support Vector Machines, using an RBF kernel (SVM-RBF) and a polynomial kernel (SVM-Poly).
We experimented with the practical variant of our Basis Learner algorithm (as described in Subsection 3.3), using a simple, publicly available MATLAB implementation (http://www.wisdom.weizmann.ac.il/~shamiro/code/BasisLearner.zip). As mentioned earlier in the text, we avoided storing
, instead computing parts of it as the need arose. We followed the same experimental protocol as above, using the same split of the training set and using the validation set for parameter tuning. For the output layer, we used stochastic gradient descent to train a linear classifier, using a standard regularized hinge loss (or the multiclass hinge loss for multiclass classification). In the intermediate-layer construction procedure (BuildBasis), we fixed the batch size to . We tuned the following parameters:
Importantly, we did not need to train a new network for every combination of these values. Instead, for every value of , we simply built the network one layer at a time, each time training an output layer over the layers so far (using the different values of ), and checking the results on a validation set. We deviated from this protocol only in the case of the MNIST-basic dataset, where we allowed ourselves to check a few additional architectures: the width of the first layer was constrained to be , and the other layers were of width , , or . The reason is that MNIST is known to work well with PCA preprocessing (where the data is projected onto a few dozen principal components). Since our first layer also performs a similar type of processing, it seemed plausible that a narrow first layer would work well for this dataset, which is indeed what we observed in practice. Without these few additional architectures, the test classification error for MNIST-basic is , which is about worse than what is reported below.
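The output-layer training described above - stochastic gradient descent on a regularized hinge loss - can be sketched as follows for the binary case. The 1/(lambda*t) step-size schedule (Pegasos-style) and all names are our own assumptions, not the paper's exact implementation.

```python
import numpy as np

def sgd_hinge(F, y, lam=0.01, epochs=20, seed=0):
    """Train a binary linear classifier on the feature matrix F
    (rows = examples, columns = network nodes) by SGD on the
    regularized hinge loss:
        lam/2 * ||w||^2 + mean_i max(0, 1 - y_i <w, f_i>).
    """
    rng = np.random.default_rng(seed)
    n, d = F.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):       # one pass in random order
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (w @ F[i])
            w *= (1 - eta * lam)           # gradient of the L2 term
            if margin < 1:                 # subgradient of the hinge term
                w += eta * y[i] * F[i]
    return w
```

Because only the output layer is retrained as layers are added, the same feature matrix columns can be reused across all regularization values being tuned.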
We report the test error results (percentages of misclassified test examples) in the table below. Each dataset number corresponds to the numbering of the dataset descriptions above. For each dataset, we report the test error, and in parentheses indicate the depth/width of the network (where depth corresponds to , so it includes the output layer). For comparison, we also include the test error results reported in  for the other algorithms. Note that most of the MNIST-related datasets correspond to multiclass classification with classes, so any result achieving less than 90% error is non-trivial.
| Dataset No. | SVM-RBF | SVM-Poly | NNet | DBN-3 | SAA-3 | DBN-1 | Basis Learner |
From the results, we see that our algorithm performs quite well, building deep networks of modest size which are competitive with (and for the Convex dataset, even surpass) the previously reported results. The only exception is the Rectangles dataset (dataset no. 6), which is artificial and very small, and on which we found it hard to avoid overfitting (the training error was zero, even after tuning ). However, compared to the other deep learning approaches, training our networks required minimal human intervention and modest computational resources. The results are also quite favorable compared to kernel predictors, and the predictors constructed by our algorithm can be stored and evaluated much faster. Recall that a kernel SVM generally requires time and memory proportional to the entire training set in order to compute a single prediction at test time. In contrast, the memory and time requirements of the predictors produced by our algorithm are generally at least orders of magnitude smaller.
It is also illustrative to consider training/generalization error curves for our algorithm, seeing how the bias/variance trade-off plays out for different parameter choices. We present results for the MNIST-rotated dataset, based on the data gathered in the parameter tuning stage (where the algorithm was trained on the first 10,000 training examples, and tested on a validation set of 2,000 examples). The results for the other datasets are qualitatively similar. We investigate how 3 quantities behave as a function of the network depth and width:
The validation error (for the best choice of regularization parameter in the output layer)
The corresponding training error (for the same choice of )
The lowest training error attained across all choices of
The first quantity shows how well we generalize as a function of the network size, while the third quantity shows how expressive our predictor class is. The second quantity is a hybrid, showing how expressive our predictor class is when the output layer is regularized to avoid too much overfitting.
The behavior of these quantities is presented graphically in Figure 5. First of all, it is very clear that this dataset requires a non-linear predictor: for a network depth of , the resulting predictor is just a linear classifier, whose train and test errors are around 50% (off the charts). Dramatically better results are obtained with deeper networks, which correspond to non-linear predictors. The lowest attainable training error shrinks very quickly, becoming virtually zero at the larger depths/widths. This accords with our claim that the Basis Learner algorithm is essentially a universal learning algorithm, able to monotonically decrease the training error. A similar decreasing trend also occurs in the training error once is tuned based on the validation set, but the effect of is important here and the training errors are not as small. In contrast, the validation error has a classical unimodal behavior: the error decreases initially, but as the network continues to grow in size, overfitting starts to kick in.
Finally, we also performed some other experiments to test some of the decisions we made in implementing the Basis Learner approach. In particular:
Choosing the intermediate layers’ connections to be sparse (each node computes the product of only two other nodes) had a crucial effect. For example, we experimented with variants more similar in spirit to the VCA algorithm in , where the columns of are forced to be orthogonal. This translates to adding a general linear transformation between every two layers. However, the variants we tried tended to perform worse and to suffer from overfitting. This may not be surprising, since these linear transformations add a large number of additional parameters, greatly increasing the complexity of the network and the risk of overfitting.
Similarly, performing a linear transformation of the data in the first layer seems to be important. For example, we experimented with an alternative algorithm, which builds the first layer in the same way as the intermediate layers (using single products), and the results were quite inferior. While more experiments are required to explain this, we note that without this linear transformation in the first layer, the resulting predictor can only represent polynomials with a modest number of monomials (see Remark 3). Moreover, the monomials tend to be very sparse on sparse data.
As mentioned earlier, the algorithm still performed well when the exact SVD computation in the first layer construction was replaced by an approximate randomized SVD computation (as in ). This is useful in handling large datasets, where an exact SVD may be computationally expensive.
We end by emphasizing that these experimental results are preliminary, and that much more work remains to further study the new learning approach that we introduce here, both theoretically and experimentally.
This research was funded in part by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).
-  J. Abbott, A. M. Bigatti, M. Kreuzer, and L. Robbiano. Computing ideals of points. J. Symb. Comput., 30(4):341–356, 2000.
-  M. Anthony and P. Bartlett. Neural Network Learning - Theoretical Foundations. Cambridge University Press, 2002.
-  S. Ben-David and M. Lindenbaum. Localization vs. identification of semi-algebraic sets. Machine Learning, 32(3):207–224, 1998.
-  Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
-  Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 34, 2007.
-  S. Chen, S.A. Billings, and W. Luo. Orthogonal least squares methods and their application to non-linear system identification. International Journal of Control, 50:1873–1896, 1989.
-  R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In ICML, 2008.
-  O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. In NIPS, 2011.
-  M. Gasca and T. Sauer. Polynomial interpolation in several variables. Adv. Comput. Math., 12(4):377–410, 2000.
-  M. G. Marinari, H. M. Möller, and T. Mora. Gröbner bases of ideals defined by functionals with an application to ideals of projective points. Appl. Algebra Eng. Commun. Comput., 4:103–145, 1993.
-  N. Halko, P. Martinsson, and J. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
-  G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
-  H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007. Available at http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepVsShallowComparisonICML2007#Downloadable_datasets.
-  Q. V. Le, M.-A. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
-  Y. LeCun and Y. Bengio. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361, 1995.
-  H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
-  R. Livni, D. Lehavi, S. Schein, H. Nachlieli, S. Shalev-Shwartz, and A. Globerson. Vanishing component analysis. In ICML, 2013.
-  H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In UAI, 2011.
-  M.A. Ranzato, F.J. Huang, Y.L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In CVPR, 2007.
-  D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. Cognitive modeling, 1:213, 2002.
-  B. Schölkopf and A.J. Smola. Learning with kernels: support vector machines, regularization, optimization and beyond. MIT Press, 2002.
-  D. A Spielman and S.-H. Teng. Smoothed analysis: an attempt to explain the behavior of algorithms in practice. Communications of the ACM, 52(10):76–84, 2009.