## 1 Introduction

The rapid and continuing growth in data acquisition and data updating has posed crucial challenges to the machine learning community in developing learning schemes that match or outperform human learning capability. Fortunately, the introduction of deep learning (see, for example, [21]) has made it feasible to get around the bottleneck of classical learning strategies, such as the support vector machine and boosting algorithms, based on classical neural networks (see, for example, [31, 17, 11, 6]), by demonstrating remarkable successes in many applications, particularly computer vision [25] and speech recognition [27], and more recently in other areas, including natural language processing, medical diagnosis and bioinformatics, financial market analysis and online advertisement, and time series forecasting and search engines. Furthermore, the exciting recent advances of deep learning in such applications have motivated the current interest in revisiting the development of classical neural networks (called “shallow nets” in later discussions) by allowing multiple hidden layers between the input and output layers. Such neural networks are called “deep” neural nets, or simply deep nets (DN). Indeed, the advantages of DN’s over shallow nets, at least in applications, have led to various popular research directions in the communities of Approximation Theory and Learning Theory. Explicit results on the existence of functions that are expressible by DN’s but cannot be approximated by shallow nets with a comparable number of parameters are generally regarded as powerful evidence of the advantage of DN’s in Approximation Theory. The first theoretical understanding of such results dates back to our early work [7], where, using the Heaviside activation function, it was shown that DN’s with two hidden layers already provide localized approximation, while shallow nets fail. Later explicit results on DN approximation

[14, 37, 44, 40, 39] further reveal various other advantages of DN’s over shallow nets. From approximation to learning, the tug of war between bias and variance [10]

indicates that approximation results for DN’s alone are insufficient to explain their success in machine learning, since besides the bias, the variance, governed by the capacity of DN’s, must also be controlled. In this direction, the capacity of DN’s, as measured by the number of linear regions, the Betti number, the number of neuron transitions, and the DN trajectory length, was studied in [38], [3], and [40], respectively, showing that DN’s allow many more functionalities than shallow nets. Although these results certainly show the benefits of deep nets, they pose more difficulties in analyzing deep learning performance, since large capacity usually implies large variance and calls for more elaborate learning algorithms. One of the main difficulties is the development of a satisfactory learning rate analysis for DN learning, which has been well studied for shallow nets (see, for example, [34]). In this paper, we present an analysis of the advantages of DN’s in the framework of learning theory [10], taking into account the trade-off between bias and variance. Our starting point is to assume that the samples are located approximately on some unknown manifold in the sample (-dimensional Euclidean) space. For simplicity, consider the set of inputs of samples: , with a corresponding set of outputs: for some positive number , where is an unknown data-dependent -dimensional connected Riemannian manifold (without boundary). We will call the sample set, and construct a DN with three hidden layers: the first for dimensionality reduction, the second for bias reduction, and the third for variance reduction. The main tools for our construction are the “local manifold learning” for deep nets in [9], the “localized approximation” for deep nets in [7], and the “local average” in [19]. We will also introduce a feedback procedure to eliminate outliers during the learning process. Our construction justifies the common consensus that deep nets are intuitively capable of capturing data features via their architectural structures [2]. In addition, we will prove that the constructed DN can approximate the so-called regression function [10] well, within an accuracy of in expectation, where denotes the order of smoothness (or regularity) of the regression function.
Noting that the best existing learning rates for shallow nets are [34] and [46], we observe the power of deep nets over shallow nets, at least theoretically, in the framework of Learning Theory.

The organization of this paper is as follows. In the next section, we present a detailed construction of the proposed deep net. The main results of the paper are stated in Section 3, where tight learning rates of the constructed deep net are also deduced. Discussions of our contributions, along with comparisons with some related work, and proofs of the main results are presented in Sections 4 and 5, respectively.

## 2 Construction of Deep Nets

In this section, we present a construction of deep neural networks (called deep nets, for simplicity) with three hidden layers to realize certain deep learning algorithms, by applying the mathematical tools of localized approximation in [7], local manifold learning in [9], and local average arguments in [19]. Throughout this paper, we will consider only two activation functions: the Heaviside function and the square-rectifier , where the standard notation is used to define , for any non-negative integer .

### 2.1 Localized approximation and localized manifold learning

Performance comparison between deep nets and shallow nets is a classical topic in Approximation Theory. It is well known from numerous publications (see, for example, [7, 14, 40, 44]) that various functions can be well approximated by deep nets but not by any shallow net with the same order of magnitude in the number of neurons. In particular, it was proved in [7] that deep nets can provide localized approximation, while shallow nets fail.

For and an arbitrary , where , let with . For and , let us denote by , the cube in with center and width . Furthermore, we define by

(1)

In what follows, the standard notion of the indicator function of a set (or an event) will be used. For , since

we observe that

This implies that , as introduced in (1), is the indicator function of the cube . Thus, the following proposition, which describes the localized approximation property of , can be easily deduced by applying Theorem 2.3 in [7].
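As a concrete illustration of how a Heaviside network can realize the indicator of a cube exactly (a minimal sketch of the idea behind (1) and Proposition 1, not the paper's exact network; the names `center` and `width` are illustrative):

```python
import numpy as np

def heaviside(t):
    # The Heaviside activation: 1 for t >= 0, and 0 otherwise.
    return np.asarray(t >= 0, dtype=float)

def cube_indicator_net(x, center, width):
    """Heaviside net with two hidden units per coordinate that equals the
    indicator of the cube {x : |x_j - center_j| <= width/2 for all j}.
    Inside the cube both units per coordinate fire (hidden sum = 2d);
    outside, at least one unit fails, so the output unit returns 0."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    center = np.atleast_1d(np.asarray(center, dtype=float))
    d = len(center)
    lower = heaviside(x - (center - width / 2))   # fires iff x_j >= center_j - width/2
    upper = heaviside((center + width / 2) - x)   # fires iff x_j <= center_j + width/2
    return float(heaviside(lower.sum() + upper.sum() - 2 * d + 0.5))

# The realization is exact, not just approximate:
# cube_indicator_net([0.1, -0.2], center=[0.0, 0.0], width=1.0)  -> 1.0
# cube_indicator_net([0.8,  0.0], center=[0.0, 0.0], width=1.0)  -> 0.0
```

Note that a shallow (one-hidden-layer) net cannot reproduce such a localized bump exactly, which is precisely the point of the localized-approximation results in [7].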

###### Proposition 1

Let be arbitrarily given. Then for all .

On the other hand, it was proposed in [12, 1], with practical arguments, that deep nets can tackle data on highly curved manifolds, while any shallow net fails. These arguments were theoretically verified in [41, 9], with the implication that adding hidden layers to shallow nets should enable the neural networks to process massive data in a high-dimensional space from samples lying on lower-dimensional manifolds. More precisely, it follows from [13, 41] that for a lower -dimensional connected and compact Riemannian submanifold (without boundary), isometrically embedded in and endowed with the geodesic distance , there exists some , such that for any , with ,

(2)

where, for any , denotes, as usual, the Euclidean norm of . In the following, let , , and denote the closed geodesic ball, the -dimensional Euclidean ball, and the -dimensional Euclidean ball with center at and radius , respectively. Then the following proposition is a brief summary of Theorems 2.2 and 2.3 and Remark 2.1 in [9], with the implication that neural networks can be used as a dimensionality-reduction tool.

###### Proposition 2

For each , there exist a positive number and a neural network

with

(3)

that maps diffeomorphically onto and satisfies

(4)

for some .

### 2.2 Learning via deep nets

Our construction of deep nets depends on the localized approximation and the dimensionality-reduction technique presented in Propositions 1 and 2. To describe the learning process, first select a suitable , so that for every , there exists some point in a finite set that satisfies

(5)

To this end, we need a constant , such that

(6)

The existence of such a constant is proved in the literature (see, for example, [46]). Also, in view of the compactness of , since is an open covering of , there exists a finite set of points , such that . Hence, may be chosen to satisfy

(7)

With this choice, we claim that (5) holds. Indeed, if , then (5) obviously holds for any choice of . On the other hand, if , then from the inclusion property , it follows that there is some , depending on , such that

(8)

Next, let . By (6), we have, for any ,

Therefore, it follows from (7) that

This implies that and verifies our claim (5) with the choice of .
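In practice, a finite set of centers satisfying a covering property like (5) can be produced by a simple greedy ε-net over the sample points: repeatedly take an uncovered point as a new center until every point is within ε of some center. A minimal sketch under the assumptions that the Euclidean distance stands in for the geodesic distance and that `eps` is an illustrative parameter:

```python
import numpy as np

def greedy_eps_net(points, eps):
    """Greedily select centers so that every point in `points`
    lies within distance eps of some selected center."""
    points = np.asarray(points, dtype=float)
    centers = []
    covered = np.zeros(len(points), dtype=bool)
    while not covered.all():
        i = int(np.argmin(covered))                      # first uncovered point
        centers.append(points[i])
        dist = np.linalg.norm(points - points[i], axis=1)
        covered |= dist <= eps                           # mark newly covered points
    return np.array(centers)
```

Every sample point is then within `eps` of its nearest center, mirroring the role of the finite point set in the covering argument above.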

Observe that for every we may choose the point to define by setting

(9)

and apply Proposition 2, (5), and (3) to obtain the following.

###### Proposition 3

For each , maps diffeomorphically into and

(10)

where and .

As a result of Propositions 1 and 3, we now present the construction of the deep nets for the proposed learning purpose. Start by selecting points , and , with , where in . Denote and . In view of Proposition 3, it follows that is well defined, , and . We also define by

(11)

Then the desired deep net estimator with three hidden layers may be defined by

(12)

where we set if the denominator is zero.

Observe that the above construction has a total of three hidden layers performing three separate tasks: the first hidden layer reduces the dimension of the input space, while the second and third hidden layers perform localized approximation on and data-variance reduction by local averaging [19], respectively.
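Setting the dimensionality-reducing first layer aside (i.e., taking the manifold-learning map to be the identity), the second and third layers together amount to a partition-based local-average estimator: average the responses of the samples falling in the same cube as the query, and output 0 when that cube is empty, as stipulated after (12). A hedged sketch, with grid-aligned cubes and an illustrative cube width `s`:

```python
import numpy as np

def local_average_estimate(x_query, X, y, s):
    """Partition-based local average: identify the cube of side s that
    contains x_query (cubes aligned to the integer grid scaled by s),
    then average the responses of the training points in that cube.
    Returns 0.0 when the cube contains no sample, as in (12)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    cell = np.floor(np.asarray(x_query, dtype=float) / s)   # cube index of the query
    in_cell = np.all(np.floor(X / s) == cell, axis=1)       # samples in the same cube
    return float(np.mean(y[in_cell])) if in_cell.any() else 0.0
```

In the paper's construction the cube membership test is realized by the Heaviside indicator networks of Proposition 1 rather than by explicit floor operations, and it is applied after the first layer has mapped the manifold samples to a lower-dimensional space.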

### 2.3 Fine-tuning

For each , it follows from that there is some , such that , which implies that . For each , since is a cube in , the cardinality of the set is at most . Also, because for each , there exists some , such that , implying that and that the number of such integers is bounded by . For each , we consider a non-empty subset

(13)

of , with cardinality

(14)

Also, for each , we further define , as well as

(15)

and

(16)

Then it follows from (15) and (16) that , and it is easy to see that if each is an interior point of some , then . In this way, is a local average estimator. However, if (and this is possible when some lies on the boundary of for some ), then the estimator (12) might perform badly, and this happens even for training data. Note that to predict some , which is an interior point of , we have

which is much smaller than when . The reason is that there are only summands in the numerator. Noting that the Riemannian measure of the boundary of is zero, we regard the above phenomenon as the effect of outliers.

Fine-tuning, often referred to as feedback in the deep learning literature [2], can essentially improve the learning performance of deep nets [26]. We observe that fine-tuning can also be applied to avoid outliers for the deep net constructed in (12), by counting the cardinalities of and . In the training process, besides computing for some query point , we may also record and . If the resulting estimate is not large enough, we propose to add the factor to . In this way, the deep net estimator with feedback can be represented mathematically by

(17)

where is defined by

and as before, we set if the denominator vanishes.
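The feedback idea can be illustrated as follows: record the sample count of each cell of the partition; if the query's cell count is unusually small (the boundary-cell situation described above), do not trust the raw average at face value. The sketch below swaps the paper's multiplicative cardinality factor in (17) for a simpler neighbor-merging variant; the median-based threshold and the Chebyshev neighborhood are illustrative assumptions, not the authors' exact formula:

```python
import numpy as np

def tuned_local_average(x_query, X, y, s):
    """Local average with a simple feedback correction: a cell whose
    sample count is unusually small (e.g. query near a cell boundary)
    borrows the samples of its adjacent cells before averaging."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    cells = np.floor(X / s).astype(int)
    cell = np.floor(np.asarray(x_query, dtype=float) / s).astype(int)
    in_cell = np.all(cells == cell, axis=1)
    n = int(in_cell.sum())
    if n == 0:
        return 0.0                                   # empty cell, as in (12)
    # typical occupancy over the non-empty cells of the partition
    _, counts = np.unique(cells, axis=0, return_counts=True)
    typical = float(np.median(counts))
    if n < typical / 2:
        # feedback: under-populated boundary cell -> merge with the
        # adjacent cells (Chebyshev distance <= 1) before averaging
        near = np.all(np.abs(cells - cell) <= 1, axis=1)
        return float(np.mean(y[near]))
    return float(np.mean(y[in_cell]))
```

The point, as in (17), is that the correction only fires for under-populated cells; well-populated interior cells return the plain local average unchanged.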

## 3 Learning Rate Analysis

We consider a standard regression setting in learning theory [10] and assume that the sample set of size

is drawn independently according to some Borel probability measure

on . The regression function is then defined by , where denotes the conditional distribution at induced by . Let be the marginal distribution of on and let be the Hilbert space of square-integrable functions with respect to on . Our goal is to estimate the distance between the output function and the regression function , measured by , as well as the distance between and .

We say that a function on is -Lipschitz (continuous) with positive exponent and constant , if

(18)

and denote by , the family of all Lipschitz functions that satisfy (18). Our error analysis of will be carried out based on the following two assumptions.
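For orientation, Lipschitz conditions of this kind conventionally take the following form; this is a presumed generic reconstruction, in which the symbols $f$, $r$, $c_0$, and the metric $d(\cdot,\cdot)$ stand in for the paper's own notation in (18):

```latex
% Generic (r, c_0)-Lipschitz condition with exponent r > 0 and constant c_0 > 0
|f(x) - f(x')| \le c_0 \, d(x, x')^{r}, \qquad \text{for all } x, x' \text{ in the domain}.
```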

###### Assumption 1

There exist an and a constant such that .

This smoothness assumption is standard in learning theory for the study of approximation for regression (see, for example, [19, 23, 34, 10, 45, 42, 22, 16, 20, 5, 4, 29]).

###### Assumption 2

is continuous with respect to the geodesic distance of the Riemannian manifold.

Note that Assumption 2, which concerns the geometrical structure of , is slightly weaker than the distortion assumption in [49, 43], but somewhat similar to the assumption considered in [35]. The purpose of this assumption is to describe the functionality of fine-tuning.

We are now ready to state the main results of this paper. In the first theorem below, we obtain an upper bound on the learning rate of the constructed deep nets .

###### Theorem 1

Observe that Theorem 1 provides a fast learning rate for the constructed deep net , one that depends on the manifold dimension instead of the sample space dimension . In the second theorem below, we show the necessity of the fine-tuning process presented in (17) when Assumption 2 is removed.

###### Theorem 2

## 4 Related Work and Discussions

The success in practical applications, especially in the fields of computer vision [25] and speech recognition [27], has triggered enormous research activities on deep learning. Several other encouraging results, such as object recognition [12], unsupervised training [15], and artificial intelligence architecture [2], have been obtained to demonstrate the significance of deep learning. We refer the interested reader to the 2016 MIT monograph “Deep Learning” [18], by Goodfellow, Bengio and Courville, for further study of this exciting subject, which is only in the infancy of its development. Indeed, deep learning has already created several challenges for the machine learning community. Among the main challenges are to show the necessity of using deep nets and to theoretically justify the advantages of deep nets over shallow nets. This is essentially a classical topic in Approximation Theory. In particular, dating back to the early 1990s, it was already proved that deep nets can provide localized approximation while shallow nets fail (see, for example, [7]). Furthermore, it was also shown that deep nets provide high approximation orders that are certainly not restricted by the lower error bounds for shallow nets (see [8, 33]). More recently, stimulated by the avid enthusiasm for deep learning, numerous advantages of deep nets were also revealed from the point of view of function approximation. In particular, certain functions discussed in [14] can be represented by deep nets but cannot be approximated by shallow nets; it was shown in [37] that deep nets, but not shallow nets, can approximate compositions of functions; it was exhibited in [39] that deep nets can avoid the curse of dimensionality suffered by shallow nets; a probability argument was given in [30] to show that deep nets have better approximation performance than shallow nets with high confidence; and it was demonstrated in [41, 9] that deep nets can improve the approximation capability of shallow nets when the data are located on data-dependent manifolds. All of these results give theoretical explanations of the significance of deep nets from the Approximation Theory point of view.

As a departure from the work mentioned above, our present paper is devoted to exploring the better performance of deep nets over shallow nets in the framework of Learning Theory. In particular, we are concerned not only with the approximation accuracy but also with the cost of attaining such accuracy. In this regard, learning rates of certain deep nets were analyzed in [23], in which Kohler and Krzyżak provided certain near-optimal learning rates for a fairly complex regularization scheme, with the hypothesis space being the family of deep nets with two hidden layers proposed in [36]. More precisely, they derived a learning rate of order for functions . This is close to the optimal learning rate for shallow nets in [34], differing only by a logarithmic factor. Hence, the study in [23] showed theoretically that deep nets at least do not downgrade the learning performance of shallow nets. In comparison with [23], our study is focused on answering the question: “What is to be gained by deep learning?” The deep net constructed in our paper possesses a learning rate of order when is an unknown -dimensional connected Riemannian manifold (without boundary). This rate is the same as the optimal learning rate [19, Chapter 3] for the special case of the cube under a similar condition, though it is smaller than the optimal learning rates for shallow nets [34]. Another line of related work is [46, 47], where Ye and Zhou deduced learning rates for regularized least squares over shallow nets in the same setting as our paper. They derived a learning rate of , which is slower than the rate established in our paper. It should be mentioned that in a more recent work [24], some advantages of deep nets are revealed from the learning theory viewpoint. However, the results in [24] require a hierarchical interaction structure, which is totally different from the setting of our present paper.

Due to the high degree of freedom of deep nets, the number and types of parameters of deep nets far exceed those of shallow nets. Thus, it should be of great interest to develop scalable algorithms to reduce the computational burden of deep learning. Distributed learning based on a divide-and-conquer strategy [48, 28] could be a fruitful approach for this purpose. It is also of interest to establish results similar to those of Theorems 1 and 2 for deep nets with rectifier neurons, i.e., using the rectifier (or ramp) function , as activation, since the rectifier is one of the most widely used activations in the deep learning literature. Our research in these directions is postponed to a later work.

## 5 Proofs of the Main Results

To facilitate our proofs of the theorems stated in Section 3, we first establish the following two lemmas.

Observe from Proposition 1 and the definition (11) of the function that

(21)

For , define a random function in terms of the random sample by

(22)

so that

(23)
