# A New Perspective on Machine Learning: How to do Perfect Supervised Learning

In this work, we introduce the concept of bandlimiting into the theory of machine learning because all physical processes are bandlimited by nature, including real-world machine learning tasks. After the bandlimiting constraint is taken into account, our theoretical analysis has shown that all practical machine learning tasks are asymptotically solvable in a perfect sense. Furthermore, the key towards this solvability almost solely relies on two factors: i) a sufficiently large amount of training samples beyond a threshold determined by a difficulty measurement of the underlying task; ii) a sufficiently complex model that is properly bandlimited. Moreover, for unimodal data distributions, we have derived a new error bound for perfect learning, which can quantify the difficulty of learning. This case-specific bound is much tighter than the uniform bounds in conventional learning theory.


## 1 Introduction

The fundamental principles and theories for machine learning (ML) were established a few decades ago, such as the No-Free-Lunch theorem (Wolpert, 1995), statistical learning theory (Vapnik, 2000), and probably approximately correct (PAC) learning (Valiant, 1984). These theoretical works have successfully explained which problems are learnable and how to achieve effective learning in principle. On the other hand, since the boom of deep learning in the past decade, the landscape of machine learning practice has changed dramatically. A variety of artificial neural networks (ANNs) have been successfully applied to all sorts of real-world applications, ranging from speech recognition and image classification to machine translation, and the list of success stories across diverse application domains is still growing year after year. Superhuman performance has even been claimed on some tasks that were originally thought to be very hard. This divergence between theory and practice has puzzled ML theorists and ML practitioners alike. At this point, we need to answer a series of serious questions in order to further advance the field as a whole. For instance, why do ANN-type models significantly overtake other existing ML methods on these practical applications? What is the essence of the success of ANN-type models on these ostensibly challenging tasks? Where is the limit of these models? Why does the horrific overfitting predicted by current ML theory never happen in these real-world tasks, even when shockingly huge models are used (Zhang et al., 2016)?

In this paper, we develop a new ML theory to shed some light on these questions. The key to our new theory is the concept of bandlimiting: not every process can actually exist in the real world, and all physically realizable processes must be bandlimited. Much of the previous effort in machine learning theory has been spent studying extremely difficult problems that are over-generalized in theory but may not actually arise in practice. After the bandlimiting constraint is taken into account, our theoretical analysis shows that all practical machine learning tasks are asymptotically solvable in a perfect sense. Our theoretical results suggest that the roadmap towards successful supervised learning consists of two steps: (a) collecting sufficient labelled in-domain data; (b) fitting a complex and bandlimited model to the large training set. The amount of data needed for perfect learning depends on the difficulty of each underlying task, and for some special cases we have derived new error bounds that quantitatively measure this difficulty. As the amount of training data grows, we need a complicated model to match it in step (b). The universal approximation results in (Cybenko, 1989; Hornik, 1991) make neural networks an ideal candidate for perfect learning, since similar model structures can be fitted to any large training set if we keep increasing the model size. The highly-criticized engineering tricks used in the training of neural networks are just empirical means to ensure that a complicated model is effectively fit to a very large training set in step (b) (Jiang, 2019). However, there is no evidence that neural networks are the only models able to achieve perfect learning.

## 2 Problem Formulation

In this work, we study the standard supervised learning problem in machine learning. Given a finite training set of input–output pairs, denoted as $\mathcal{D}_N = \{(\mathbf{x}_i, \mathbf{y}_i) \mid i = 1, \cdots, N\}$, the goal is to learn a model mapping input to output over the entire feature space, $\hat{f}: \mathbf{x} \to \mathbf{y}$, which will be used to predict future inputs.

### 2.1 Machine Learning as Stochastic Function Fitting

Instead of starting our analysis from the joint probability distribution of inputs and outputs as in standard statistical learning theory, we adopt a more restricted formulation in this paper. Here, we assume all inputs $\mathbf{x}$ are random variables following a probability density function $p(\mathbf{x})$ in the input feature space (without losing generality, we may assume …). The relation between input ($\mathbf{x}$) and output ($\mathbf{y}$) is deterministic, which may be represented by a function $\mathbf{y} = f(\mathbf{x})$, denoted as the target function. In this setting, the goal of machine learning is to learn a model $\hat{f}(\mathbf{x})$ to minimize the expected error between $\hat{f}(\mathbf{x})$ and $f(\mathbf{x})$ as measured by $p(\mathbf{x})$.

Most interesting and meaningful learning problems in the real world can be easily accommodated by the above deterministic function between inputs and outputs. For example, we may define such a deterministic target function if the conditional distribution $p(\mathbf{y}|\mathbf{x})$ is sharp and unimodal. If $p(\mathbf{y}|\mathbf{x})$ is sharp but not unimodal, we may decompose the learning problem into several sub-problems, each of which is represented by one deterministic function as above. If $p(\mathbf{y}|\mathbf{x})$ is not sharp, the relation between inputs and outputs is fairly weak. In these cases, either it is not a meaningful learning problem in practice, or we may improve the input features to further strengthen the relation between $\mathbf{x}$ and $\mathbf{y}$.

### 2.2 The Bandlimiting Property

In engineering, it is a well-known fact that all physically realizable processes must satisfy the so-called bandlimiting property. Bandlimiting is a strong constraint imposed on the smoothness and growth of functions, corresponding to the mathematical concept of entire functions of finite exponential type in mathematical analysis (Levin, 1964; Levinson, 1940). As an illustration, several 1-D functions with different bandlimiting constraints are plotted in the figures, which clearly show that the various bandlimiting constraints heavily affect the smoothness of a function.

In practice, if a supervised learning problem arises from a real-world task or a physical process, the above target function will satisfy the bandlimiting property as constrained by the physical world. The central idea in this paper is to demonstrate that the bandlimiting property, largely overlooked by the machine learning community in the past, is essential in explaining why real-world machine learning problems are not as hard as speculated by statistical learning theory (Vapnik, 2000; Shalev-Shwartz & Ben-David, 2014). The theory proposed in this paper further suggests that under certain conditions we may even solve many real-world supervised learning problems perfectly.

First of all, let us give the definition of bandlimiting¹. A function $f(\mathbf{x})$ is said to be strictly bandlimited if its multivariate Fourier transform (Stein & Weiss, 1971), $F(\boldsymbol{\omega})$, vanishes beyond a certain finite spatial frequency range; that is, if there exists $B > 0$ such that

$$F(\boldsymbol{\omega}) = \int \cdots \int_{-\infty}^{+\infty} f(\mathbf{x})\, e^{-i\, \mathbf{x} \cdot \boldsymbol{\omega}}\, d\mathbf{x} = 0 \quad \text{if } \|\boldsymbol{\omega}\| > B, \tag{1}$$

then $f(\mathbf{x})$ is called a strictly bandlimited function by $B$.

Similarly, we may say a function $f(\mathbf{x})$ is approximately bandlimited if its Fourier transform satisfies:

$$\lim_{B \to +\infty} \int \cdots \int_{\|\boldsymbol{\omega}\| > B} \|F(\boldsymbol{\omega})\|^2\, d\boldsymbol{\omega} = 0.$$

In other words, for any arbitrarily small $\epsilon > 0$, there exists $B_\epsilon$ such that the out-of-band residual energy satisfies

$$\int \cdots \int_{\|\boldsymbol{\omega}\| > B_\epsilon} \|F(\boldsymbol{\omega})\|^2\, d\boldsymbol{\omega} < \epsilon^2 \tag{2}$$

where $B_\epsilon$ is called the approximate band of $f(\mathbf{x})$ at $\epsilon$.

¹ Also known as wavenumber-limited. Here, we prefer the term "bandlimiting" as it is better known in engineering.
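As a concrete numerical companion to eq.(2), the sketch below estimates an approximate band $B_\epsilon$ of a 1-D function from its discrete Fourier transform. The test functions, grid, and $\epsilon$ are arbitrary choices for illustration, not anything specified in the paper.

```python
import numpy as np

def approximate_band(f, xs, eps):
    """Smallest |omega| such that the spectral energy outside it is below
    eps^2 of the total energy -- a discrete surrogate of eq.(2)."""
    F = np.fft.fft(f(xs))
    omega = 2 * np.pi * np.fft.fftfreq(len(xs), d=xs[1] - xs[0])
    energy = np.abs(F) ** 2
    total = energy.sum()
    # sort frequency bins by |omega| and accumulate in-band energy outward
    order = np.argsort(np.abs(omega))
    cum = np.cumsum(energy[order])
    k = np.searchsorted(cum, (1 - eps ** 2) * total)
    return np.abs(omega[order])[min(k, len(xs) - 1)]

xs = np.linspace(-20, 20, 4096, endpoint=False)
# a Gaussian is approximately (not strictly) bandlimited: its spectrum decays fast
band_smooth = approximate_band(lambda x: np.exp(-x ** 2), xs, eps=1e-3)
# a sharper (less smooth) function needs a wider band for the same residual eps
band_sharp = approximate_band(lambda x: np.exp(-(4 * x) ** 2), xs, eps=1e-3)
```

Consistent with the smoothness discussion above, the sharper function comes out with a noticeably larger approximate band than the smoother one.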

## 3 Perfect Learning

In supervised learning, we are interested in learning the unknown target function $f(\mathbf{x})$ based on a finite training set of input–output pairs, $\mathcal{D}_N = \{(\mathbf{x}_i, \mathbf{y}_i) \mid i = 1, \cdots, N\}$, where each $\mathbf{x}_i$ is an i.i.d. sample randomly drawn from an unknown p.d.f. $p(\mathbf{x})$ and $\mathbf{y}_i = f(\mathbf{x}_i)$. The central issue in supervised learning is how to learn a model from the given training set $\mathcal{D}_N$, denoted as $\hat{f}(\mathbf{x}|\mathcal{D}_N)$, in order to minimize the so-called expected risk, defined over the entire feature space in the sense of mean squared error (MSE):

$$R(\hat{f}|\mathcal{D}_N) = \mathbb{E}_{p(\mathbf{x})}\left[\|\hat{f}(\mathbf{x}|\mathcal{D}_N) - f(\mathbf{x})\|^2\right] = \int \cdots \int_{-\infty}^{+\infty} \|\hat{f}(\mathbf{x}|\mathcal{D}_N) - f(\mathbf{x})\|^2\, p(\mathbf{x})\, d\mathbf{x} \tag{3}$$

Usually the above expected risk is not practically computable since it involves two unknown functions, $f(\mathbf{x})$ and $p(\mathbf{x})$. Supervised learning methods instead learn a model by optimizing the so-called empirical risk, computed solely on the given training samples as follows:

$$R_{\mathrm{emp}}(\hat{f}|\mathcal{D}_N) = \frac{1}{N} \sum_{i=1}^{N} \left\|\mathbf{y}_i - \hat{f}(\mathbf{x}_i|\mathcal{D}_N)\right\|^2 \tag{4}$$

Here we use MSE for mathematical simplicity, but our analysis is equally applicable to both regression and classification problems. The unknown expected risk is linked to the above empirical risk by the uniform bounds of VC theory (Vapnik, 2000) for classification. In machine learning, it is common practice to apply some form of regularization to ensure these two quantities do not diverge during learning, so as to avoid so-called overfitting.
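For concreteness, the empirical risk in eq.(4) can be computed in a few lines of NumPy. The data and the two candidate models below are made up purely for illustration.

```python
import numpy as np

def empirical_risk(model, X, Y):
    """Mean squared error of a model over the N training pairs, as in eq.(4)."""
    preds = np.array([model(x) for x in X])
    return np.mean(np.sum((Y - preds) ** 2, axis=-1))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
Y = np.sin(X).sum(axis=1, keepdims=True)          # stands in for the target f(x)
perfect = lambda x: np.sin(x).sum(keepdims=True)  # a model that matches f exactly
zero = lambda x: np.zeros(1)                      # a trivial baseline model

risk_perfect = empirical_risk(perfect, X, Y)
risk_zero = empirical_risk(zero, X, Y)
```

The model that reproduces the target attains zero empirical risk; the trivial baseline does not.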

### 3.1 Existence of Perfect Learning

In this work, we define perfect supervised learning as an ideal scenario where we can always learn a model from a finite set of training samples as above to achieve not only zero empirical risk but also zero expected risk. Here, we will theoretically prove that perfect supervised learning is actually achievable if the underlying target function is bandlimited and the training set is sufficiently large.

###### Theorem 1

(existence) In the above supervised learning setting, if the target function $f(\mathbf{x})$ is strictly or approximately bandlimited, then given a sufficiently large training set $\mathcal{D}_N$ as above, there exists a method to learn a model (or construct a function) $\hat{f}(\mathbf{x})$ solely from $\mathcal{D}_N$, not only leading to zero empirical risk

$$R_{\mathrm{emp}}(\hat{f}|\mathcal{D}_N) = 0$$

but also yielding zero expected risk in probability

$$R(\hat{f}|\mathcal{D}_N) \stackrel{P}{\longrightarrow} 0$$

as $N \to \infty$.

Proof sketch: The idea is similar to the multidimensional sampling theorem (Petersen & Middleton, 1962), which states that a bandlimited signal may be fully represented by infinitely many uniform or non-uniform samples as long as these samples are dense enough (Marvasti et al., 2001). In our case, we attempt to recover the function $f(\mathbf{x})$ from samples randomly drawn according to a probability distribution. Obviously, as $N \to \infty$, these samples will surely satisfy any density requirement determined by the band of $f(\mathbf{x})$. Moreover, we will show that the truncation error incurred by moving from infinitely many samples to finitely many is negligible and vanishes as $N \to \infty$. See the full proof in Appendix A.
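The sampling-theoretic idea behind the proof can be illustrated in 1-D: a function strictly bandlimited by some $B$ is reconstructed from a (truncated) grid of samples at the Nyquist spacing $\pi/B$ via cardinal (sinc) interpolation, the 1-D case of (Petersen & Middleton, 1962). The band limit, target function, and truncation length below are arbitrary choices; this is a sketch of the special case, not the paper's construction.

```python
import numpy as np

B = 4.0                                  # assumed band limit
spacing = np.pi / B                      # Nyquist spacing for band limit B
nodes = np.arange(-200, 201) * spacing   # finite truncation of the infinite grid

def target(x):
    # sum of low-frequency sinusoids, strictly bandlimited by B
    return np.sin(1.5 * x) + 0.5 * np.cos(3.0 * x)

def reconstruct(x):
    # cardinal series: f(x) ~ sum_i f(x_i) * sinc((x - x_i) / spacing)
    return np.sum(target(nodes) * np.sinc((x - nodes) / spacing))

xs = np.linspace(-2, 2, 41)
max_err = max(abs(reconstruct(x) - target(x)) for x in xs)
```

With samples at the Nyquist density, the truncated cardinal series already reproduces the target closely near the centre of the sampled interval; the residual is the truncation error, which shrinks as more nodes are kept.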

This result may theoretically explain many recent success stories in machine learning. As long as a learning task arises from a real-world application, no matter whether it is related to speech, vision, language or others, it is surely bounded by the bandlimitedness property of the physical world. As long as we are able to collect enough samples, these problems can be solved almost perfectly by simply fitting a complex model to the samples in a good way. The primary reason for these successes may be attributed to the fact that these real-world learning problems are not as hard as they were initially thought to be. At first glance, they are regarded as extremely challenging due to the dimensionality and complexity involved. However, the underlying processes may in fact be heavily bandlimited by some incredibly small values.

On the other hand, it is impossible to achieve perfect learning if the target function is not bandlimited.

###### Corollary 1

If $f(\mathbf{x})$ is neither strictly nor approximately bandlimited, then no matter how many training samples are used, the expected risks of all realizable learning algorithms have a nonzero lower bound:

$$\lim_{N \to \infty} R(\hat{f}|\mathcal{D}_N) \geq \varepsilon > 0.$$

### 3.2 Non-asymptotic Analysis

The previous section gave some results on the asymptotic behaviour of perfect supervised learning as $N \to \infty$. Here, let us consider some non-asymptotic analysis to indicate how hard a learning problem may be when $N$ is finite. Given any one training set of $N$ i.i.d. samples, $\mathcal{D}_N$, we may learn a model from it, denoted as $\hat{f}(\mathbf{x}|\mathcal{D}_N)$. If $N$ is finite, then when we select different training sets of $N$ samples, the same learning algorithm may end up with a different result each time. In this case, the learning performance should be measured by the mean expected risk averaged with respect to $\mathcal{D}_N$:

$$R_N = \mathbb{E}_{\mathcal{D}_N}\left[\mathbb{E}_{p(\mathbf{x})}\left[\|\hat{f}(\mathbf{x}|\mathcal{D}_N) - f(\mathbf{x})\|^2\right]\right] \tag{5}$$

#### 3.2.1 Strictly Bandlimited Target Functions

For finite $N$ and strictly bandlimited target functions $f(\mathbf{x})$, we first consider a simple case where $\mathbf{x}$ follows a Gaussian distribution with isotropic covariance. We have the following result to upper-bound the above mean expected risk in eq.(5) for the perfect learning algorithm:

###### Theorem 2

If we have $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$ and the target function $f(\mathbf{x})$ is strictly bandlimited by $B$, then the mean expected risk in eq.(5) of the perfect learner is upper bounded as follows:

$$R^*_N < \left[\frac{(\sqrt{2}\, K B \sigma)^{n+1} \cdot H}{\sqrt{(n+1)!}}\right]^2 \tag{6}$$

where $K$ is the dimension of $\mathbf{x}$, $H$ is the maximum value of $\|f(\mathbf{x})\|$, and $n$ is the order of the polynomial model determined by the $N$ training samples (see Appendix B).

Proof sketch: Based on the given $N$ samples, assume a model is learned as a multivariate Taylor polynomial of $f(\mathbf{x})$ up to a certain order $n$, which has exactly $N$ free coefficients. The error bound may then be derived from the remainder term in the multivariate Taylor's theorem. See the full proof in Appendix B.

The bound in Theorem 2 serves as a general indicator of how hard a learning problem is. It suggests that learning is fairly easy when the target function is bandlimited by a finite range $B$: the mean expected risk of a good learning algorithm converges exponentially to 0 as $n \to \infty$ (when $N \to \infty$). When $n$ is relatively small, the difficulty of the learning problem is well reflected by the quantity $K B \sigma$. Here $K$ is the dimensionality of the underlying problem: it is not necessarily equal to the dimensionality of the raw data, since raw dimensions are often highly correlated, and it may instead represent the dimensionality of the independent features in a much lower-dimensional, de-correlated space. Note that $K$ also affects the convergence rate of learning, since the polynomial order $n$ achievable with $N$ samples shrinks as $K$ grows. Generally speaking, the larger $K B \sigma$ is, the more difficult the learning task will be and the more training samples are needed to achieve good performance. For the same number of samples from the same data distribution $p(\mathbf{x})$, it is easier to learn a narrowly-banded function than a widely-banded one. On the other hand, in order to learn the same target function using the same number of samples, it is much easier in cases where the data distribution is heavily concentrated in the space than in those where the data is wildly scattered.
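The exponential collapse of the bound in eq.(6) can be read off numerically. The problem constants below are made up; the point is only that once $n$ passes roughly $(\sqrt{2} K B \sigma)^2$, the factorial in the denominator dominates and the bound shrinks super-exponentially.

```python
import math

def theorem2_bound(n, K, B, sigma, H):
    """Right-hand side of eq.(6) for a given Taylor order n."""
    return ((math.sqrt(2) * K * B * sigma) ** (n + 1) * H
            / math.sqrt(math.factorial(n + 1))) ** 2

K, B, sigma, H = 2, 1.0, 0.5, 1.0   # hypothetical problem constants
bounds = [theorem2_bound(n, K, B, sigma, H) for n in range(1, 30)]
```

Since larger $N$ buys a larger feasible order $n$, the decaying sequence of bounds mirrors how the perfect learner's risk falls with more data.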

Moreover, we can easily extend Theorem 2 to diagonal covariance matrices.

###### Corollary 2

If $\mathbf{x}$ follows a multivariate Gaussian distribution with zero mean and diagonal covariance matrix $\boldsymbol{\Sigma} = \mathrm{diag}(\sigma_1^2, \cdots, \sigma_K^2)$, and the target function $f(\mathbf{x})$ is bandlimited by different values $B_k$ ($k = 1, \cdots, K$) along the various dimensions of $\mathbf{x}$, then we have

$$R^*_N < \frac{1}{K} \sum_{k=1}^{K} \left[\frac{(\sqrt{2}\, K B_k \sigma_k)^{n+1} \cdot H}{\sqrt{(n+1)!}}\right]^2 \tag{7}$$

In this case, different dimensions may contribute to the difficulty of learning in different ways. In some high-dimensional problems, many dimensions may not affect the learning much if the values of $B_k \sigma_k$ are negligible along those dimensions.

Finally, we give a fairly general case for strictly bandlimited functions $f(\mathbf{x})$. Assume $\mathbf{x}$ is constrained in a bounded region of $\mathbb{R}^K$; we may then normalize all $\mathbf{x}$ to lie within a hypercube, $[-U, U]^K$.

###### Corollary 3

If $\mathbf{x}$ follows any distribution within the hypercube $[-U, U]^K$, and the target function $f(\mathbf{x})$ is bandlimited by $B$, then the perfect learner is upper-bounded as:

$$R^*_N < \left[\frac{(K B U)^{n+1} \cdot H}{(n+1)!}\right]^2 \tag{8}$$

#### 3.2.2 Approximately Bandlimited Target Functions

Assume the target function $f(\mathbf{x})$ is not strictly bandlimited by any fixed value $B$, but is approximately bandlimited as in eq.(2). Here, we consider the expected error for a given training set of $N$ samples. In this case, for any arbitrarily small $\epsilon > 0$, we may use an approximate band $B_\epsilon$ to decompose the original function into two parts, $f(\mathbf{x}) = f_\epsilon(\mathbf{x}) + g_\epsilon(\mathbf{x})$, where $f_\epsilon(\mathbf{x})$ is strictly bandlimited by $B_\epsilon$ and $g_\epsilon(\mathbf{x})$ contains the residual outside the band. As shown in eq.(2), we have $\int \cdots \int \|G_\epsilon(\boldsymbol{\omega})\|^2\, d\boldsymbol{\omega} < \epsilon^2$, where $G_\epsilon(\boldsymbol{\omega})$ is the Fourier transform of the residual function $g_\epsilon(\mathbf{x})$.

If $\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$, then following Theorem 2 and Parseval's identity, we have

$$R^*_N < \left[\frac{(\sqrt{2}\, K B_\epsilon \sigma)^{n+1} \cdot H}{\sqrt{(n+1)!}}\right]^2 + \epsilon^2$$

where the second term is the so-called aliasing error. For any given problem setting, if we decrease $\epsilon$, the first term becomes larger since $B_\epsilon$ grows. Therefore, we can always vary $\epsilon$ to look for the optimal value that tightens the bound on the right-hand side:

$$R^*_N < \inf_{\epsilon > 0} \left\{ \left[\frac{(\sqrt{2}\, K B_\epsilon \sigma)^{n+1} \cdot H}{\sqrt{(n+1)!}}\right]^2 + \epsilon^2 \right\}.$$
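To see the trade-off concretely, the sketch below grid-searches over $\epsilon$ for a hypothetical spectrum whose out-of-band energy decays exponentially, so that $B_\epsilon = \log(1/\epsilon^2)$. Neither this decay model nor any of the constants come from the paper; they only make the two competing terms visible.

```python
import math

def total_bound(eps, n, K, sigma, H):
    """Bandlimited bound plus aliasing error under an assumed exponential
    spectral decay, so the band needed for residual eps is log(1/eps^2)."""
    B_eps = math.log(1.0 / eps ** 2)
    first = ((math.sqrt(2) * K * B_eps * sigma) ** (n + 1) * H
             / math.sqrt(math.factorial(n + 1))) ** 2
    return first + eps ** 2   # second term: aliasing error

n, K, sigma, H = 10, 1, 0.1, 1.0          # hypothetical constants
eps_grid = [10 ** (-k / 4) for k in range(1, 40)]
values = [total_bound(e, n, K, sigma, H) for e in eps_grid]
best = min(values)
```

The minimum sits strictly inside the grid: large $\epsilon$ is dominated by the aliasing term, tiny $\epsilon$ by the widened band, and an intermediate choice tightens the bound by orders of magnitude.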

## 4 Conditions of Perfect Learning

Here we study under what conditions we may achieve perfect learning in practice. First of all, the target function must be bandlimited, i.e., all training data are generated from a bandlimited process. Secondly, when we learn a model from a class of strictly or approximately bandlimited functions, if the learned model achieves zero empirical risk on a sufficiently large training set, then it is guaranteed to yield zero expected risk. In other words, under the condition of bandlimitedness, the learned model naturally generalizes to the entire space if it fits a sufficiently large training set.

###### Theorem 3

(sufficient condition) Let the target function $f(\mathbf{x})$ be strictly or approximately bandlimited, and assume a strictly or approximately bandlimited model, $\hat{f}(\mathbf{x})$, is learned from a sufficiently large training set $\mathcal{D}_N$. If this model yields zero empirical risk on $\mathcal{D}_N$:

$$R_{\mathrm{emp}}(\hat{f}|\mathcal{D}_N) = 0,$$

then it is guaranteed to yield zero expected risk:

$$R(\hat{f}|\mathcal{D}_N) \longrightarrow 0$$

as $N \to \infty$.

Proof sketch: If $f(\mathbf{x})$ and $\hat{f}(\mathbf{x})$ are bandlimited, each of them may be represented as an infinite sum of diminishing terms. If a bandlimited model is fit to a bandlimited target function based on $N$ training samples, the most significant terms of $f(\mathbf{x})$ are learned to good precision. As $N \to \infty$, the learned model surely converges to the target function $f(\mathbf{x})$. See the full proof in Appendix C.

This theorem gives a fairly strong condition for generalization in practical machine learning scenarios. In practice, all real data are generated from bandlimited target functions. If we use a bandlimited model to fit a large enough training set, the generalization of the learned model is asymptotically guaranteed by itself. Under some minor conditions, namely that the input and all model parameters are bounded, it is easy to show that all continuous models are at least approximately bandlimited, including most PAC-learnable models widely used in machine learning, such as linear models, neural networks, etc. In these cases, perfect learning mostly relies on whether we can perfectly fit the model to the given large training set. In our analysis, model complexity is viewed as essential to the success of learning because complex models are usually needed to fit a large training set. Our theorems show that model complexity does not impair the capability to learn as long as the complex models satisfy the bandlimitedness requirement. Bandlimitedness is a model characteristic orthogonal to model complexity (which is reflected by the number of free parameters): we may have a simple model with an unlimited spatial frequency band, and it is equally possible to have a very complex model that is strongly bandlimited by a small value². In contrast, traditional statistical learning theory yields fairly loose bounds for simple models and completely fails to explain complex models due to their huge or even infinite VC dimensions.

² See more explanation in paragraph 4 of Appendix B.

This theorem helps to explain the generalization magic of neural networks recently observed in the deep learning community (Zhang et al., 2016). As discussed above, when the input and all model parameters of a neural network are bounded, we may normalize the input into a hypercube; in this case, the function represented by the neural network is absolutely integrable. According to the Riemann–Lebesgue lemma (Pinsky, 2002), the Fourier transform of any integrable function decays as the absolute value of any frequency component goes up. Therefore, any such neural network is essentially an approximately bandlimited model. Based on Theorem 3, we can easily derive the following corollary.

###### Corollary 4

Assume a neural network, $\tilde{f}(\mathbf{x})$, is learned from a sufficiently large training set $\mathcal{D}_N$ generated by a bandlimited process $f(\mathbf{x})$, and that the input and all model parameters of the neural network are bounded. If the neural network yields zero empirical risk on $\mathcal{D}_N$:

$$R_{\mathrm{emp}}(\tilde{f}|\mathcal{D}_N) = 0,$$

then it surely yields zero expected risk as $N \to \infty$:

$$\lim_{N \to \infty} R(\tilde{f}|\mathcal{D}_N) = 0.$$
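As an empirical illustration of the Riemann–Lebesgue argument above, the sketch below measures the spectrum of a small random tanh network with bounded inputs and weights; the architecture, sizes, and seed are all arbitrary choices, not a construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# one-hidden-layer tanh network R -> R with weights bounded in [-1, 1]
W1, b1 = rng.uniform(-1, 1, (32, 1)), rng.uniform(-1, 1, 32)
w2 = rng.uniform(-1, 1, 32)

def mlp(x):
    return np.tanh(np.outer(x, W1[:, 0]) + b1) @ w2

# evaluate on a bounded input interval and look at the magnitude spectrum
xs = np.linspace(0, 1, 2048, endpoint=False)
spectrum = np.abs(np.fft.rfft(mlp(xs)))
low = spectrum[1:32].mean()      # energy near low frequencies
high = spectrum[-256:].mean()    # energy at the top of the measured band
```

The high-frequency tail of the spectrum comes out well below the low-frequency content, which is the approximately-bandlimited behaviour the corollary relies on.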

## 5 Equivalence of Perfect Learning

###### Theorem 4

(equivalence) Assume the target function $f(\mathbf{x})$ is strictly or approximately bandlimited, and any two bandlimited (either strictly or approximately) models, $\hat{f}_1(\mathbf{x})$ and $\hat{f}_2(\mathbf{x})$, are learned from a sufficiently large training set $\mathcal{D}_N$. If both models yield zero empirical risk on $\mathcal{D}_N$:

$$R_{\mathrm{emp}}(\hat{f}_1|\mathcal{D}_N) = R_{\mathrm{emp}}(\hat{f}_2|\mathcal{D}_N) = 0,$$

then $\hat{f}_1(\mathbf{x})$ and $\hat{f}_2(\mathbf{x})$ are asymptotically identical under $p(\mathbf{x})$ as $N \to \infty$:

$$\lim_{N \to \infty} \int \cdots \int_{-\infty}^{+\infty} \left\|\hat{f}_1(\mathbf{x}) - \hat{f}_2(\mathbf{x})\right\|^2 p(\mathbf{x})\, d\mathbf{x} = 0.$$

Proof sketch: According to the uniqueness theorem in mathematical analysis (Levin, 1964), as long as the sampled points are dense enough in the space, there exists a unique bandlimited function that exactly passes through all of these samples. See the full proof in Appendix D.

This result suggests that we may use many different models to solve a real-world machine learning problem. As long as these models are powerful enough to act as universal approximators that fit well to any given large training set, they are essentially equivalent whenever they exhibit the bandlimiting behaviour, no matter whether a recurrent or non-recurrent structure is used, or whether the model has 50 layers or 100 layers. The key is to apply the heavy engineering tricks that fine-tune the learning process so that these complicated models fit well to the large training set.

## 6 Non-ideal Cases with Noises

In this work, we mainly focus on ideal learning scenarios where no noise is involved in the learning process. In practice, the collected training samples are inevitably corrupted by all sorts of noise. For example, both the inputs, $\mathbf{x}$, and the outputs, $\mathbf{y}$, of the target function may be corrupted by some independent noise sources, which may have wider or even unlimited bands. Obviously, these independent noises will impair the learning process. However, the above perfect learning theory can be extended to deal with noise; these cases will be explored in future work.

## 7 Final Remarks

In this paper, we have presented some theoretical results to explain the success of large-scale supervised learning. This success is largely attributed to the fact that these real-world tasks are not as hard as we originally thought, because they all arise from real physical processes that are bounded by the bandlimiting property. Even though all bandlimited supervised learning problems in the real world are asymptotically solvable in theory, we may not be able to afford to collect sufficient training data to solve some of them in the near future if they have a very high level of difficulty, as determined by the band limit and the data distribution. How to predict such difficulty measures for real-world tasks is an interesting open question. Another interesting problem is how to explicitly bandlimit models during the learning process. This issue may be critical for effective learning when the training set is not large enough to ensure the asymptotic generalization suggested in Theorem 3. We conjecture that all regularization tricks widely used in machine learning may be unified under the idea of bandlimiting models.

Appendix

## Appendix A Proof of Theorem 1 (existence)

Here we give the full proof of Theorem 1 regarding the existence of perfect supervised learning.

Proof: First of all, since $p(\mathbf{x})$ is a p.d.f. in $\mathbb{R}^K$, for any arbitrarily small number $\epsilon > 0$, it is always possible to find a bounded region in $\mathbb{R}^K$, denoted as $\Omega$, to ensure that the total probability mass outside $\Omega$ is smaller than $\epsilon$:

$$\int_{\mathbf{x} \notin \Omega} p(\mathbf{x})\, d\mathbf{x} < \epsilon.$$

Secondly, since $f(\mathbf{x})$ is bandlimited by a finite $B$, we may partition the entire space $\mathbb{R}^K$ into an equally-spaced criss-cross grid along all $K$ dimensions of $\mathbf{x}$, with the grid evenly separated by at most $\pi/B$ (the Nyquist spacing) in each dimension. This uniform grid partitions the whole space $\mathbb{R}^K$. According to the high-dimensional sampling theorem (Petersen & Middleton, 1962), if we sample the function $f(\mathbf{x})$ at all mesh points of this grid, the entire function can be fully restored. Moreover, the non-uniform sampling results in (Yen, 1956; Marvasti et al., 2001) allow us to fully restore the function not just from the exact samples at the mesh points, but from any one point in a small neighbourhood around each mesh point. Each such neighbourhood of a mesh point is called a cell. These cells belong to two categories: i) $\Theta_0$ includes all cells intersecting with $\Omega$; ii) $\Theta_1$ includes all other cells, which do not intersect with $\Omega$. Based on (Yen, 1956; Marvasti et al., 2001), assume we can pick at least one data point, $\mathbf{x}_i$, from each cell $c_i$, and use these points as nodes to form the multivariate interpolation series as follows:

$$\tilde{f}(\mathbf{x}) = \sum_{i=1}^{\infty} f(\mathbf{x}_i)\, \Phi_i(\mathbf{x}) = \underbrace{\sum_{c_i \in \Theta_0} f(\mathbf{x}_i)\, \Phi_i(\mathbf{x})}_{\hat{f}(\mathbf{x})} + \underbrace{\sum_{c_i \in \Theta_1} f(\mathbf{x}_i)\, \Phi_i(\mathbf{x})}_{\hat{g}(\mathbf{x})} \tag{9}$$

where $\Phi_i(\mathbf{x})$ are the basic interpolation functions, such as the cardinal interpolation functions in (Petersen & Middleton, 1962) or the fundamental Lagrange polynomials (Sauer & Xu, 1995; Gasca & Sauer, 2000). The interpolation functions are chosen to satisfy the so-called Cauchy condition:

$$\Phi_i(\mathbf{x}_k) = \begin{cases} 1 & i = k \\ 0 & i \neq k \end{cases} \tag{10}$$

Since $f(\mathbf{x})$ is bandlimited by $B$, namely an entire function of finite exponential type $B$, and each node is chosen from one distinct cell, so that the set of all nodes is an R-set (Levin, 1964) (Chapter II, §1), according to (Petersen & Middleton, 1962) and (Levin, 1964) (Chapter IV, §4), the interpolation series in eq.(9) converges uniformly to $f(\mathbf{x})$.

Next, instead of deterministically choosing one node per cell, let us consider the case where all nodes are randomly drawn from the given p.d.f. $p(\mathbf{x})$. Since the bounded region $\Omega$ is partitioned into non-empty cells, the total number of cells in $\Theta_0$ must be finite. Assume there are $M$ cells in $\Theta_0$ in total, denoted as $c_1, c_2, \cdots, c_M$. If we randomly draw one sample, the probability of it falling in cell $c_i$ ($1 \leq i \leq M$) is $P_i = \int_{\mathbf{x} \in c_i} p(\mathbf{x})\, d\mathbf{x} > 0$. If we draw $N$ independent samples, the probability that a given subset $S$ of cells remains empty is $\left(1 - \sum_{i \in S} P_i\right)^N$. Thus, based on the inclusion-exclusion principle, after $N$ samples, the probability that no cell in $\Theta_0$ is left empty may be computed as $\sum_{S \subseteq \{1, \cdots, M\}} (-1)^{|S|} \left(1 - \sum_{i \in S} P_i\right)^N$. Because $M$ is finite and fixed, it is easy to show that this probability tends to 1 as $N \to \infty$. In other words, as $N \to \infty$, we will surely have at least one sample from each cell in $\Theta_0$ with which to construct the first sum in eq.(9); this is guaranteed to occur in probability as $N \to \infty$. Now, let us construct the interpolation function using only the points in $\Theta_0$:

$$\hat{f}(\mathbf{x}) = \sum_{c_i \in \Theta_0} f(\mathbf{x}_i) \cdot \Phi_i(\mathbf{x}). \tag{11}$$

In the following, we will prove that $\hat{f}(\mathbf{x})$ constructed in this way satisfies all the requirements in Theorem 1.

Firstly, since the interpolation functions satisfy the Cauchy condition in eq.(10), it is straightforward to verify that $R_{\mathrm{emp}}(\hat{f}|\mathcal{D}_N) = 0$.

Secondly, assuming we have drawn $N$ samples from $p(\mathbf{x})$, we will show that the contribution of $\hat{g}(\mathbf{x})$ in eq.(9) tends to be negligible as $N \to \infty$. Based on the estimates of truncation errors in sampling in (Long & Fang, 2004; Brown, 1969), we have

$$\|f(\mathbf{x}) - \hat{f}(\mathbf{x})\| \leq O(N_0^{-\delta})$$

where $\delta > 0$ and $N_0$ denotes the minimum number of distinct projections of the samples onto any of the orthogonal axes in $\Theta_0$. Since all data samples are randomly selected, as $N \to \infty$, we surely have $N_0 \to \infty$. Putting all of this together, for all $\mathbf{x} \in \Theta_0$, we have

$$\|\hat{f}(\mathbf{x}) - f(\mathbf{x})\| \leq O(N_0^{-\delta}) \stackrel{P}{\longrightarrow} 0$$

as $N \to \infty$.

Finally, the expected risk of $\hat{f}(\mathbf{x})$ is calculated as:

$$\begin{aligned} R(\hat{f}) &= \int \cdots \int_{\mathbf{x} \in \Theta_0} \|f(\mathbf{x}) - \hat{f}(\mathbf{x})\|^2\, p(\mathbf{x})\, d\mathbf{x} + \int \cdots \int_{\mathbf{x} \in \Theta_1} \|f(\mathbf{x}) - \hat{f}(\mathbf{x})\|^2\, p(\mathbf{x})\, d\mathbf{x} \\ &\leq O(N_0^{-\delta}) + 4H^2 \cdot \epsilon \end{aligned} \tag{12}$$

where $H$ denotes the maximum value of the target function, i.e., $H = \max_{\mathbf{x}} \|f(\mathbf{x})\|$.

Because $\epsilon$ in the second term may be made arbitrarily small in the first step when we choose $\Omega$, we have

$$\lim_{N \to \infty} R(\hat{f}) \stackrel{P}{\longrightarrow} 0.$$

Therefore, we have proved that the Lagrange interpolation in eq.(11), built from the randomly sampled points in $\Theta_0$, satisfies all the requirements in Theorem 1.

If $f(\mathbf{x})$ is approximately bandlimited, the above proof still holds. The only change is to choose an approximate band limit $B_\epsilon$ to ensure the out-of-band residual energy is arbitrarily small, and then use $B_\epsilon$ in place of $B$ to partition the space. Everything else in the proof remains valid.
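The covering step of the proof — that $N$ i.i.d. draws eventually occupy every cell in $\Theta_0$ — can be sanity-checked numerically. The snippet below evaluates the inclusion-exclusion expression for a small, made-up set of cell probabilities (assumed, for simplicity, to cover the whole space) and cross-checks it by Monte Carlo.

```python
import itertools
import numpy as np

P = [0.1, 0.2, 0.3, 0.4]   # M = 4 hypothetical cells covering the space

def prob_no_cell_empty(P, N):
    """Inclusion-exclusion: sum over subsets S of cells forced empty,
    (-1)^|S| * (1 - sum_{i in S} P_i)^N."""
    total = 0.0
    for m in range(len(P) + 1):
        for S in itertools.combinations(range(len(P)), m):
            total += (-1) ** m * (1 - sum(P[i] for i in S)) ** N
    return total

p_small = prob_no_cell_empty(P, 5)     # few draws: coverage is unlikely
p_large = prob_no_cell_empty(P, 200)   # many draws: coverage is near-certain

# Monte Carlo cross-check of the closed form at N = 5
rng = np.random.default_rng(0)
draws = rng.choice(len(P), size=(20000, 5), p=P)
mc = np.mean([len(set(row)) == len(P) for row in draws])
```

As the proof asserts, the coverage probability climbs towards 1 as $N$ grows, since $M$ is finite and every $P_i$ is strictly positive.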

## Appendix B Proof of Theorem 2

Proof: If the target function $f(\mathbf{x})$ is strictly bandlimited, it must be an analytic function in $\mathbb{R}^K$. We may expand $f(\mathbf{x})$ as a Taylor series according to Taylor's theorem in several variables. For notational simplicity, we adopt the well-known multi-index notation (Sauer & Xu, 1995) to represent the exponents of several variables. A multi-index is a $K$-tuple of nonnegative integers, denoted by a Greek letter such as $\alpha$: $\alpha = (\alpha_1, \cdots, \alpha_K)$ with $\alpha_k \in \{0, 1, 2, \cdots\}$. If $\alpha$ is a multi-index, we define $|\alpha| = \alpha_1 + \cdots + \alpha_K$, $\alpha! = \alpha_1! \cdots \alpha_K!$, $\mathbf{x}^\alpha = x_1^{\alpha_1} \cdots x_K^{\alpha_K}$ (where $\mathbf{x} = (x_1, \cdots, x_K)$), and $\partial^\alpha f = \partial^{|\alpha|} f / \partial x_1^{\alpha_1} \cdots \partial x_K^{\alpha_K}$. The number $|\alpha|$ is called the order of $\alpha$.

According to Taylor's theorem in several variables, $f(\mathbf{x})$ may be expanded around any point $\mathbf{x}_0$ as follows:

$$f(\mathbf{x}) = \sum_{|\alpha|=0}^{\infty} \frac{\partial^\alpha f(\mathbf{x}_0)}{\alpha!} (\mathbf{x} - \mathbf{x}_0)^\alpha. \tag{13}$$

Because the function $f(\mathbf{x})$ is bandlimited by $B$, according to Bernstein's inequality on page 138 of (Achiester, 1956), the derivatives in the above Taylor series satisfy:

$$\|\partial^\alpha f(\mathbf{x}_0)\| \leq B^{|\alpha|} \cdot H \tag{14}$$

for all $\mathbf{x}_0$ in $\mathbb{R}^K$, where $H = \max_{\mathbf{x}} \|f(\mathbf{x})\|$.

Given the $N$ samples, $(\mathbf{x}_i, \mathbf{y}_i)$ ($1 \leq i \leq N$), assume we have an ideal learning algorithm that constructs a model $\hat{f}^*_n(\mathbf{x}|\mathcal{D}_N)$. The optimal function should be the Taylor polynomial of $f(\mathbf{x})$ of order $n$. Assuming the problem is poised with respect to the given $\mathcal{D}_N$ (Sauer & Xu, 1995; Gasca & Sauer, 2000)³, we need the same number of (or slightly more) free coefficients in the polynomial as the total number of data points in $\mathcal{D}_N$, namely $\binom{n+K}{K} \approx N$, from which $n$ may be computed. In other words, the optimal model may be represented as a multivariate polynomial:

³ If all data points in $\mathcal{D}_N$ are randomly sampled, the problem is poised with probability 1.

$$\hat{f}^*_n(\mathbf{x}|\mathcal{D}_N) = \sum_{|\alpha|=0}^{n} c_\alpha (\mathbf{x} - \mathbf{x}_0)^\alpha \tag{15}$$

where each coefficient satisfies $\|c_\alpha\| \leq B^{|\alpha|} H / \alpha!$ for all $\alpha$ up to order $n$. As in (Sauer & Xu, 1995), if the problem is poised with respect to $\mathcal{D}_N$, these Taylor polynomial coefficients are uniquely determined by the training samples in $\mathcal{D}_N$.

As a side note, we can see here why bandlimitedness and model complexity are two different concepts. Model complexity is determined by the number of free model parameters. When representing a model as a multivariate Taylor polynomial as in eq.(15), the model complexity is determined by the total number of free coefficients, $\binom{n+K}{K}$, in the expansion: the higher the order $n$, the more complex the model. However, no matter what order is used, as long as all coefficients satisfy the constraints in eq.(14) and other constraints in (Veron, 1994), the resulting model is bandlimited by $B$.

Based on the remainder error in the multivariate Taylor's theorem, we have

$$f(\mathbf{x}) - \hat{f}^*_n(\mathbf{x}) = \sum_{|\alpha|=n+1} \frac{\partial^\alpha f(\mathbf{x}_0 + \xi \cdot \mathbf{x})}{\alpha!}\,(\mathbf{x}-\mathbf{x}_0)^\alpha \tag{16}$$

for some $\xi \in (0,1)$. Since $f(\mathbf{x})$ is bandlimited by $B$, by eq.(14) we have $\|\partial^\alpha f(\mathbf{x}_0 + \xi \cdot \mathbf{x})\| \le B^{n+1} \cdot H$ for all $|\alpha| = n+1$. Furthermore, we choose $\mathbf{x}_0 = \mathbf{0}$, and after applying the multinomial theorem, we have

$$\|f(\mathbf{x}) - \hat{f}^*_n(\mathbf{x})\| \le \frac{B^{n+1} \cdot H}{(n+1)!}\,\|\mathbf{x}\|^{n+1} \tag{17}$$

where $\|\mathbf{x}\| = \sum_{k=1}^{K} |x_k|$.
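As a one-dimensional sanity check of eq.(17), take $f(x) = \sin(Bx)$, which is bandlimited by $B$ with $H = 1$; the truncation error of its order-$n$ Taylor polynomial should stay below $\frac{B^{n+1}H}{(n+1)!}|x|^{n+1}$ (the example values of $B$, $x$, and $n$ below are arbitrary):

```python
from math import sin, factorial

def taylor_sin(x, B, n):
    """Order-n Taylor polynomial of f(x) = sin(B*x) around x0 = 0."""
    # Only odd orders contribute, with alternating signs.
    total, sign = 0.0, 1.0
    for m in range(1, n + 1, 2):
        total += sign * (B ** m) * x ** m / factorial(m)
        sign = -sign
    return total

B, H, x, n = 2.0, 1.0, 0.3, 7
err = abs(sin(B * x) - taylor_sin(x, B, n))
bound = B ** (n + 1) * H / factorial(n + 1) * abs(x) ** (n + 1)
```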

Then the perfect learning algorithm yields:

$$\begin{aligned}
R^*_N &\le E_{\mathcal{D}_N}\Big[E_{\mathbf{x}}\big[\|f(\mathbf{x}) - \hat{f}^*_n(\mathbf{x}\,|\,\mathcal{D}_N)\|^2\big]\Big] \\
&\le E_{\mathbf{x}}\left[\Big(\frac{B^{n+1} \cdot H}{(n+1)!}\Big)^2 \|\mathbf{x}\|^{2n+2}\right] \\
&\le \Big(\frac{B^{n+1} \cdot H}{(n+1)!}\Big)^2 K^{2n+2}\, E\big(|x_k|^{2n+2}\big) \qquad \text{(Radon's inequality)} \\
&= \Big(\frac{(KB)^{n+1} \cdot H}{(n+1)!}\Big)^2 \cdot \sigma^{2n+2} \cdot \frac{2^{n+1}\,\Gamma\big(n+1+\frac{1}{2}\big)}{\sqrt{\pi}} \\
&= \Big(\frac{K^2 B^2 \sigma^2}{2}\Big)^{n+1} \cdot \frac{H^2}{(n+1)!} \binom{2n+2}{n+1} \;<\; \infty
\end{aligned}$$

Refer to (Winkelbauer, 2012) for the central absolute moments of normal distributions: for $x_k \sim \mathcal{N}(0, \sigma^2)$, we have $E\big(|x_k|^{2n+2}\big) = \sigma^{2n+2} \cdot \frac{2^{n+1}\,\Gamma(n+1+\frac{1}{2})}{\sqrt{\pi}}$.
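The moment formula used above is easy to cross-check against the classical even moments $E[x^{2m}] = \sigma^{2m}(2m-1)!!$ of a zero-mean normal distribution:

```python
from math import gamma, pi, sqrt

def abs_moment(sigma, n):
    """E|x|^(2n+2) for x ~ N(0, sigma^2), per the formula above."""
    return sigma ** (2 * n + 2) * 2 ** (n + 1) * gamma(n + 1.5) / sqrt(pi)

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

# Even absolute moments coincide with the plain moments:
# E[x^2] = sigma^2, E[x^4] = 3 sigma^4, E[x^6] = 15 sigma^6, ...
sigma = 1.7
for n in range(5):
    m = n + 1
    expected = sigma ** (2 * m) * double_factorial(2 * m - 1)
    assert abs(abs_moment(sigma, n) - expected) < 1e-9 * expected
```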

## Appendix C Proof of Theorem 3 (sufficient condition)

Proof: We first consider the strictly bandlimited case: assume the target function $f(\mathbf{x})$ is bandlimited by $B$ and the learned model $\hat{f}(\mathbf{x})$ is bandlimited by $B'$. According to eq.(13), the strictly bandlimited function $f(\mathbf{x})$ can be expanded around any $\mathbf{x}_0$ as the Taylor series:

$$f(\mathbf{x}) = \sum_{|\alpha|=0}^{\infty} b_\alpha\,(\mathbf{x}-\mathbf{x}_0)^\alpha$$

where $b_\alpha = \partial^\alpha f(\mathbf{x}_0)/\alpha!$ for all multi-indices $\alpha$. Since $f(\mathbf{x})$ is bandlimited by $B$, we have $\|b_\alpha\| \le B^{|\alpha|} H / \alpha!$. Obviously, we have $\|b_\alpha\| \to 0$ as $|\alpha| \to \infty$. Therefore, a bandlimited function may be represented as an infinite sum of orthogonal base functions. Since the coefficients of these terms are decaying, the series may be truncated to a finite partial sum of terms up to order $n$, which approximates $f(\mathbf{x})$ to arbitrary precision as $n$ grows large:

$$f(\mathbf{x}) = \sum_{|\alpha|=0}^{n} b_\alpha\,(\mathbf{x}-\mathbf{x}_0)^\alpha + \xi_n(\mathbf{x}) \tag{18}$$

where $\xi_n(\mathbf{x})$ denotes the remainder term in the Taylor expansion.

Let us assume the input $\mathbf{x}$ is constrained in a bounded region of $\mathbb{R}^K$, so we may normalize all $\mathbf{x}$ to lie within a hypercube $[-U, U]^K$. Similar to the remainder error in eq.(17), we can easily derive:

$$\|\xi_n(\mathbf{x})\| \le \frac{(KBU)^{n+1}\,H}{(n+1)!} \to 0$$

as $n \to \infty$.

Similarly, since the learned model $\hat{f}(\mathbf{x})$ is also bandlimited by $B'$, we may expand it in the same way as:

$$\hat{f}(\mathbf{x}) = \sum_{|\alpha|=0}^{n} d_\alpha\,(\mathbf{x}-\mathbf{x}_0)^\alpha + \xi'_n(\mathbf{x}). \tag{19}$$

where we have $\|d_\alpha\| \le B'^{|\alpha|} H / \alpha!$, and $\|\xi'_n(\mathbf{x})\| \le \frac{(KB'U)^{n+1}\,H}{(n+1)!} \to 0$ as $n \to \infty$.

Given $\mathcal{D}_N$, a training set of $N$ samples, $n$ may be chosen such that $\binom{n+K}{K} = N$, so as to have exactly $N$ terms in the partial sums in eqs. (18) and (19). Since all training samples are generated by the target function $f(\mathbf{x})$, we thus have:

$$y_j = f(\mathbf{x}_j) = \sum_{|\alpha|=0}^{n} b_\alpha\,(\mathbf{x}_j-\mathbf{x}_0)^\alpha + \xi_n(\mathbf{x}_j) \qquad (j = 1, \cdots, N).$$

Meanwhile, if the model $\hat{f}(\mathbf{x})$ is learned to yield zero empirical loss on $\mathcal{D}_N$, then $\hat{f}(\mathbf{x})$ also fits every sample in $\mathcal{D}_N$ as follows:

$$y_j = \hat{f}(\mathbf{x}_j) = \sum_{|\alpha|=0}^{n} d_\alpha\,(\mathbf{x}_j-\mathbf{x}_0)^\alpha + \xi'_n(\mathbf{x}_j) \qquad (j = 1, \cdots, N).$$

Taking the difference of each pair of these equations, we may represent the results in the following matrix form:

$$\begin{bmatrix}
(\mathbf{x}_1-\mathbf{x}_0)^{\alpha_1} & \cdots & (\mathbf{x}_1-\mathbf{x}_0)^{\alpha_N} \\
\vdots & & \vdots \\
(\mathbf{x}_j-\mathbf{x}_0)^{\alpha_1} & \cdots & (\mathbf{x}_j-\mathbf{x}_0)^{\alpha_N} \\
\vdots & & \vdots \\
(\mathbf{x}_N-\mathbf{x}_0)^{\alpha_1} & \cdots & (\mathbf{x}_N-\mathbf{x}_0)^{\alpha_N}
\end{bmatrix}_{N \times N}
\begin{bmatrix}
b_{\alpha_1} - d_{\alpha_1} \\ \vdots \\ b_{\alpha_j} - d_{\alpha_j} \\ \vdots \\ b_{\alpha_N} - d_{\alpha_N}
\end{bmatrix}_{N \times 1}
=
\begin{bmatrix}
\xi'_n(\mathbf{x}_1) - \xi_n(\mathbf{x}_1) \\ \vdots \\ \xi'_n(\mathbf{x}_j) - \xi_n(\mathbf{x}_j) \\ \vdots \\ \xi'_n(\mathbf{x}_N) - \xi_n(\mathbf{x}_N)
\end{bmatrix}_{N \times 1}
= \boldsymbol{\xi}_N$$

The $N \times N$ matrix on the left-hand side is the so-called multivariate Vandermonde matrix, whose column vectors are constructed from orthogonal multivariate Taylor base functions. When all $\mathbf{x}_j$ in $\mathcal{D}_N$ are randomly drawn from $p(\mathbf{x})$, as in (Sauer & Xu, 1995), the problem is poised with respect to $\mathcal{D}_N$ with probability one. Thus, this matrix has full rank and is invertible. Meanwhile, as $n \to \infty$, the vector on the right-hand side approaches zero, i.e. $\boldsymbol{\xi}_N \to \mathbf{0}$. Therefore, we may deduce that all coefficients converge, $d_\alpha \to b_\alpha$ for all $\alpha$, as $N \to \infty$. In other words, the learned model converges towards the target function except for those negligible high-order terms. As a result, we can bound the expected loss as:

$$R(\hat{f}\,|\,\mathcal{D}_N) \le \left[\frac{(KBU)^{n+1}\,H + (KB'U)^{n+1}\,H}{(n+1)!}\right]^2 \to 0$$

as $N \to \infty$.
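A minimal one-dimensional ($K = 1$) numerical sketch of this argument, assuming NumPy; the band limit, node set, and sample size are illustrative. Solving the monomial Vandermonde system at $N$ random samples of a bandlimited target (zero empirical loss) recovers the function everywhere, not just at the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
B, N = 1.0, 12
x = rng.uniform(-1.0, 1.0, size=N)         # random sample points (distinct w.p. 1)
y = np.sin(B * x)                          # bandlimited target f(x) = sin(B*x)

V = np.vander(x, N, increasing=True)       # Vandermonde matrix: V[j, a] = x_j ** a
d = np.linalg.solve(V, y)                  # zero empirical loss: V @ d = y

# The learned polynomial agrees with f between the samples as well,
# because the Taylor remainder of order N is already negligible.
t = np.linspace(-1.0, 1.0, 201)
p = np.polyval(d[::-1], t)                 # polyval expects highest degree first
err = float(np.max(np.abs(p - np.sin(B * t))))
```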

If either $f(\mathbf{x})$ or $\hat{f}(\mathbf{x})$ is only approximately bandlimited, the above proof still holds. The only change is to choose an approximate band limit, $B_\epsilon$ or $B'_\epsilon$, to ensure the out-of-band residual is arbitrarily small, and then use $B_\epsilon$ or $B'_\epsilon$ in place of $B$ or $B'$. Therefore, we conclude that $R(\hat{f}\,|\,\mathcal{D}_N) \to 0$ holds for either strictly or approximately bandlimited target functions and learned models.

## Appendix D Proof of Theorem 4 (equivalence)

Proof: Based on Theorem 3, we have

$$\lim_{N\to\infty} R(\hat{f}_1) \longrightarrow 0$$

and

$$\lim_{N\to\infty} R(\hat{f}_2) \longrightarrow 0.$$

Therefore, using the elementary inequality $\|\mathbf{a}-\mathbf{b}\|^2 \le 2\|\mathbf{a}\|^2 + 2\|\mathbf{b}\|^2$, we have

$$\begin{aligned}
&\lim_{N\to\infty} \int\!\cdots\!\int_{-\infty}^{+\infty} \big\|\hat{f}_1(\mathbf{x}) - \hat{f}_2(\mathbf{x})\big\|^2\, p(\mathbf{x})\, d\mathbf{x} \\
&\quad\le 2\lim_{N\to\infty} \int\!\cdots\!\int_{-\infty}^{+\infty} \big\|\hat{f}_1(\mathbf{x}) - f(\mathbf{x})\big\|^2\, p(\mathbf{x})\, d\mathbf{x} + 2\lim_{N\to\infty} \int\!\cdots\!\int_{-\infty}^{+\infty} \big\|\hat{f}_2(\mathbf{x}) - f(\mathbf{x})\big\|^2\, p(\mathbf{x})\, d\mathbf{x} \\
&\quad= 2\lim_{N\to\infty} R(\hat{f}_1) + 2\lim_{N\to\infty} R(\hat{f}_2) \stackrel{P}{\longrightarrow} 0.
\end{aligned} \tag{20}$$

## References

• Achiester (1956) Achiester, N. I. Theory of Approximation. New York: Frederick Ungar Publishing Co., 1956.
• Brown (1969) Brown, J. L. Bounds for truncation error in sampling expansions of band-limited signals. IEEE Trans. on Information Theory, IT-15:440–444, 1969.
• Cybenko (1989) Cybenko, G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2:303–314, 1989.
• Davis (1963) Davis, P. J. Interpolation and Approximation. New York : Blaisdell Publishing Company, 1963.
• Gasca & Sauer (2000) Gasca, M. and Sauer, T. Polynomial interpolation in several variables. Adv. Comput. Math., 12:377–410, 2000.
• Hornik (1991) Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257, 1991.
• Jiang (2019) Jiang, H. Why learning of large-scale neural networks behaves like convex optimization. In preprint arXiv:1903.02140, 2019.
• Klamer (1979) Klamer, D. M. Recovery of bandlimited signals using poisson samples. In Technical Report, Naval Ocean Systems Center, pp. 1345–1382, 1979.
• Leneman & Lewis (1966) Leneman, O. A. Z. and Lewis, J. B. Random sampling of random processes: Mean-square comparison of various interpolators. IEEE Trans. on Automatic Control, 11(3):396–403, 1966.
• Levin (1964) Levin, B. J. Distribution of zeros of entire functions. Providence, R.I. : American Mathematical Society, 1964.
• Levinson (1940) Levinson, N. Gap and density theorems. New York city : American mathematical society, 1940.
• Long & Fang (2004) Long, J. and Fang, G. On truncation error bound for multidimensional sampling expansion laplace transform. Analysis in Theory and Applications, 20(1):52–57, 2004.
• Marvasti et al. (2001) Marvasti, F. A. et al. Nonuniform Sampling: Theory and Practice. Kluwer Academic / Plenum Publishers, 2001.
• Micchelli (1979) Micchelli, C. A. On a numerically efficient method of computing multivariate B-splines. In W. Schempp and K. Zeller, editors, Multivariate Approximation Theory, pp. 211–248, 1979.
• Petersen & Middleton (1962) Petersen, D. P. and Middleton, D. Sampling and reconstruction of wave-number-limited functions in n-dimensional euclidean spaces. Information and Control, 5:279–323, 1962.
• Pinsky (2002) Pinsky, M. A. Introduction to Fourier Analysis and Wavelets. Brooks/Cole Thomson Learning, 2002.
• Sauer & Xu (1995) Sauer, T. and Xu, Y. On multivariate Lagrange interpolation. Math. Comp., 64:1147–1170, 1995.
• Shalev-Shwartz & Ben-David (2014) Shalev-Shwartz, S. and Ben-David, S. Understanding machine learning : from theory to algorithms. New York, NY, USA : Cambridge University Press, 2014.
• Stein & Weiss (1971) Stein, E. M. and Weiss, G. Introduction to Fourier Analysis on Euclidean Space. Princeton University Press, New Jersey, 1971.
• Valiant (1984) Valiant, L. G. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
• Vapnik (2000) Vapnik, V. N. The nature of statistical learning theory. New York : Springer, 2000.
• Veron (1994) Veron, M. A. H. The Taylor series for bandlimited signals. J. Austral. Math. Soc. Ser. B, 36:101–106, 1994.
• Winkelbauer (2012) Winkelbauer, A. Moments and absolute moments of the normal distribution. In preprint arXiv:1209.4340, 2012.
• Wolpert (1995) Wolpert, D. H. The Mathematics of Generalization. Addison-Wesley, MA,, 1995.
• Yen (1956) Yen, J. L. On nonuniform sampling of bandwidth-limited signals. IRE Trans. Circuit Theory, CT-3:251–257, 1956.
• Zhang et al. (2016) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. In preprint arXiv:1611.03530, 2016.