Artificial neural networks have seen a dramatic resurgence in recent years, and have proven to be a highly effective machine learning method in computer vision, natural language processing, and other challenging AI problems. Moreover, successfully training such networks is routinely performed using simple and scalable gradient-based methods, in particular stochastic gradient descent.
Despite this practical success, our theoretical understanding of the computational tractability of such methods is quite limited, and most existing results are negative. For example, learning even depth-2 networks in a formal PAC learning framework is known to be computationally hard in the worst case, even if the algorithm is allowed to return arbitrary predictors. As is common in such worst-case results, these are proven using rather artificial constructions, quite different from the real-world problems on which neural networks are highly successful. In particular, since the PAC framework focuses on distribution-free learning (where the distribution generating the examples is unknown and rather arbitrary), the hardness results rely on carefully crafted distributions, which allow one to relate the learning problem to (say) an NP-hard problem or breaking a cryptographic system. However, what if we insist on "natural" distributions? Is it possible to show that neural network learning becomes computationally tractable? Can we show that such networks can be learned using the standard heuristics employed in practice, such as stochastic gradient descent?
To understand what a "natural" distribution refers to, we need to separate the distribution over examples (given as input-output pairs $(\mathbf{x}, y)$) into two components:
The input distribution: "Natural" input distributions on Euclidean space tend to have properties such as smoothness, non-degeneracy, incoherence, etc.
The target function: In PAC learning, it is assumed that the output $y$ equals $h(\mathbf{x})$, where $h$ is some unknown target function from the hypothesis class we are considering. In studying neural networks, it is common to consider the class of all networks which share some fixed architecture (e.g. feedforward networks of a given depth and width). However, one may argue that the parameters of real-world networks (e.g. the weights of each neuron) are not arbitrary, but exhibit various features such as non-degeneracy or some "random-like" appearance. Indeed, networks with a random structure have been shown to be more amenable to analysis in various situations (see for instance [6, 2, 5] and references therein).
Empirical evidence clearly suggests that many pairs of input distributions and target functions are computationally tractable, using standard methods. However, how do we characterize these pairs? Would appropriate assumptions on just one of them be sufficient to show learnability?
In this paper, we investigate these two components, and provide evidence that neither one
of them alone is enough to guarantee computationally tractable learning, at least with methods resembling those used in practice. Specifically, we focus on simple, shallow ReLU networks, assume that the data can be perfectly predicted by some such network, and even allow over-specification (a.k.a. improper learning), in the sense that we allow the learning algorithm to output a predictor which is possibly larger and more complex than the target function (this technique increases the power of the learner, and was shown to make the learning problem easier in theory and in practice [17, 20, 21]). Even under such favorable conditions, we show the following:
Hardness for “natural” target functions.
For each individual target function coming from a simple class of small, shallow ReLU networks (even if its parameters are chosen randomly or in some other oblivious way), we show that no algorithm invariant to linear transformations can successfully learn it w.r.t. all input distributions in polynomial time (this corresponds, for instance, to standard gradient-based methods together with data whitening or preconditioning). This result is based on a reduction from learning intersections of halfspaces. Although that problem is known to be hard in the worst-case over both input distributions and target functions, we essentially show that invariant algorithms as above do not “distinguish” between worst-case and average-case: If one can learn a particular target function with such an algorithm, then the algorithm can learn nearly all target functions in that class.
Hardness for "natural" input distributions. We show that target functions of the form $\mathbf{x} \mapsto \psi(\langle \mathbf{w}, \mathbf{x} \rangle)$, for any periodic $\psi$, are generally difficult to learn using gradient-based methods, even if the input distribution is fixed and belongs to a very broad class of smooth input distributions (including, for instance, Gaussians and mixtures of Gaussians). Note that such functions can be constructed by simple shallow networks, and can be seen as an extension of generalized linear models. Unlike the previous result, which relies on a computational hardness assumption, the results here are geometric in nature, and imply that the gradient of the objective function, nearly everywhere, contains virtually no signal about the underlying target function. Therefore, any algorithm which relies on gradient information cannot learn such functions. Interestingly, the difficulty here is not in having a plethora of spurious local minima or saddle points: the associated stochastic optimization problem may actually have no such critical points. Instead, the objective function may exhibit properties such as flatness nearly everywhere, unless one is already very close to the global optimum. This highlights a potential pitfall in non-convex learning, which occurs already for a slight extension of generalized linear models, and even for "nice" input distributions.
Together, these results indicate that in order to explain the practical success of neural network learning with gradient-based methods, one would need to employ a careful combination of assumptions on both the input distribution and the target function, and that results with even a “partially” distribution-free flavor (which are common, for instance, in convex learning problems) may be difficult to attain here.
To prove our results, we develop some tools which may be of independent interest. In particular, the techniques used to prove hardness of learning functions of the form $\mathbf{x} \mapsto \psi(\langle \mathbf{w}, \mathbf{x} \rangle)$ are based on Fourier analysis, and have some close connections to hardness results on learning parities in the well-known framework of learning from statistical queries: In both cases, one essentially shows that the Fourier transform of the target function has very small support, and hence does not "correlate" with most functions, making it difficult to learn using certain methods. However, we consider a more general and arguably more natural class of input distributions over Euclidean space, rather than distributions on the Boolean cube. In a sense, we show that learning general periodic functions over Euclidean space is difficult (at least with gradient-based methods), for the same reasons that learning parities over the Boolean cube is difficult in the statistical queries framework.
Recent years have seen quite a few papers on the theory of neural network learning. Below, we only briefly mention those most relevant to our paper.
In a very elegant work, Janzamin et al. have shown that a certain method based on tensor decompositions allows one to provably learn simple neural networks by a combination of assumptions on the input distribution and the target function. However, a drawback of their method is that it requires rather precise knowledge of the input distribution and its derivatives, which is rarely available in practice. In contrast, our focus is on algorithms which do not utilize such knowledge. Other works which show computationally-efficient learnability of certain neural networks under sufficiently strong distributional assumptions include [2, 17, 1, 22].
In the context of learning functions over the Boolean cube, it is known that even if we restrict ourselves to a particular input distribution (as long as it satisfies some mild conditions), it is difficult to learn parity functions using statistical query algorithms [14, 3]. Moreover, it was recently shown that stochastic gradient descent methods can be approximately posed as such algorithms. Since parities can be implemented with small real-valued networks, this implies that for "most" input distributions on the Boolean cube, there are neural networks which are unlikely to be learnable with gradient-based methods. However, data provided to neural networks in practice are not in the form of Boolean vectors, but rather vectors of floating-point numbers. Moreover, some assumptions on the input distribution, such as smoothness and Gaussianity, only make sense once we consider the support to be Euclidean space rather than the Boolean cube. Perhaps these are enough to guarantee computational tractability? A contribution of this paper is to show that this is not the case, and to formally demonstrate how phenomena similar to the Boolean case also occur in Euclidean space, using appropriate target functions and distributions.
Finally, we note a recent work which provides improper-learning hardness results that hold even for a standard Gaussian distribution on Euclidean space, and for any algorithm. However, unlike our paper, its focus is on hardness of agnostic learning (where the target function is arbitrary and does not have to correspond to a given class), its results are specific to the standard Gaussian distribution, and its proofs are based on a reduction from the Boolean case.
The paper is structured as follows: In Sec. 2, we formally present some notation and concepts used throughout the paper. In Sec. 3, we provide our hardness results for natural target functions, and in Sec. 4, we provide our hardness results for natural input distributions. All proofs are presented in Sec. 5.
We generally let bold-faced letters denote vectors. Given a complex number $z$, we let $\bar{z}$ denote its complex conjugate, and $|z|$ its modulus. Given a function $f$, we let $\nabla f$ denote its gradient and $\nabla^2 f$ its Hessian (assuming they exist).
Neural Networks. The focus of our results will be on learning predictors which can be described by simple and shallow (depth 2 or 3) neural networks. A standard feedforward neural network is composed of neurons, each of which computes the mapping $\mathbf{x} \mapsto \sigma(\langle \mathbf{w}, \mathbf{x} \rangle + b)$, where $\mathbf{w}, b$ are parameters and $\sigma$ is a scalar activation function, for example the popular ReLU function $\sigma(z) = \max\{0, z\}$. These neurons are arranged in parallel in layers, so the output of each layer can be compactly represented as $\mathbf{x} \mapsto \sigma(W^\top \mathbf{x} + \mathbf{b})$, where $W$ is a matrix (each column corresponding to the parameter vector of one of the neurons), $\mathbf{b}$ is a vector, and $\sigma$ applies the activation function to each coordinate of its input. In vanilla feedforward networks, such layers are connected to each other, so given an input $\mathbf{x}$, the output of a depth-$k$ network equals $\sigma\left(W_k^\top \sigma\left(\cdots \sigma(W_1^\top \mathbf{x} + \mathbf{b}_1) \cdots\right) + \mathbf{b}_k\right)$, where $W_j, \mathbf{b}_j$ are the parameters of the $j$-th layer. The number of layers is denoted as the depth of the network, and the maximal number of columns in the matrices $W_j$ is denoted as the width of the network. For simplicity, in this paper we focus on networks which output a real-valued number, and measure our performance with respect to the squared loss (that is, given an input-output example $(\mathbf{x}, y)$, where $\mathbf{x}$ is a vector and $y \in \mathbb{R}$, the loss of a predictor $p$ on the example is $(p(\mathbf{x}) - y)^2$).
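As a concrete illustration (our own sketch, not code from the paper), a depth-2 feedforward ReLU network with a real-valued output and the squared loss can be written in NumPy as follows; all parameter values are arbitrary placeholders:

```python
import numpy as np

def relu(z):
    # ReLU activation, applied coordinate-wise
    return np.maximum(0.0, z)

def depth2_network(x, W1, b1, w2, b2):
    """Depth-2 feedforward network: a hidden ReLU layer followed by a
    single linear output neuron, returning a real number. Rows of W1
    are the weight vectors of the hidden neurons."""
    hidden = relu(W1 @ x + b1)
    return float(w2 @ hidden + b2)

def squared_loss(prediction, y):
    # squared loss of a prediction on an input-output example (x, y)
    return (prediction - y) ** 2

# Placeholder parameters: two hidden neurons in R^2
W1 = np.eye(2)
b1 = np.zeros(2)
w2 = np.ones(2)

x = np.array([1.0, -2.0])
out = depth2_network(x, W1, b1, w2, 0.0)  # relu([1, -2]) = [1, 0], so out = 1.0
```

The forward pass is just the layer-by-layer composition described above, specialized to depth 2.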
Gradient-Based Methods. Gradient-based methods are a class of optimization algorithms for solving problems of the form $\min_{\mathbf{w}} F(\mathbf{w})$ (for some given function $F$, and assuming $\mathbf{w}$ is a vector in Euclidean space), based on computing approximations of the gradient $\nabla F(\mathbf{w})$ at various points $\mathbf{w}$. Perhaps the simplest such algorithm is gradient descent, which initializes deterministically or randomly at some point $\mathbf{w}_1$, and iteratively performs updates of the form $\mathbf{w}_{t+1} = \mathbf{w}_t - \eta \nabla F(\mathbf{w}_t)$, where $\eta > 0$ is a step size parameter. In the context of statistical supervised learning problems, we are usually interested in solving problems of the form $\min_{p \in \mathcal{P}} \mathbb{E}_{\mathbf{x}}\left[\ell(p(\mathbf{x}), h(\mathbf{x}))\right]$, where $\mathcal{P}$ is some class of predictors, $h$ is a target function, and $\ell$ is some loss function. Since the underlying distribution is generally unknown, one cannot compute the gradient of this objective directly, but can still compute approximations, e.g. by sampling one $\mathbf{x}$ at random and computing the gradient (or sub-gradient) of $\ell(p(\mathbf{x}), h(\mathbf{x}))$ w.r.t. the parameters of $p$. The same approach can be used to solve empirical approximations of the above, i.e. $\min_{p \in \mathcal{P}} \frac{1}{m}\sum_{i=1}^{m} \ell(p(\mathbf{x}_i), y_i)$ for some dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$. These are generally known as stochastic gradient methods, and are among the most popular and scalable machine learning methods in practice.
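A minimal sketch of stochastic gradient descent on the empirical squared loss of a linear predictor (a toy stand-in for a network; the synthetic data, step size, and iteration count are our own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic realizable data: y_i = <w_star, x_i> for a hidden w_star
d, m = 5, 100
w_star = rng.normal(size=d)
X = rng.normal(size=(m, d))   # rows are the instances
y = X @ w_star

def mean_loss(w):
    # empirical average of the squared loss over the dataset
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(d)               # deterministic initialization
eta = 0.01                    # step size
initial = mean_loss(w)
for _ in range(2000):
    i = rng.integers(m)                    # pick one example at random
    grad = 2.0 * (X[i] @ w - y[i]) * X[i]  # gradient of (<w, x_i> - y_i)^2
    w -= eta * grad                        # stochastic gradient step
final = mean_loss(w)
```

Since the data is realizable by a linear predictor, the per-example gradients all vanish at the solution, and the empirical loss is driven to (nearly) zero.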
PAC Learning. For the results of Sec. 3, we will rely on the following standard definition of PAC learning with respect to Boolean functions: Given a hypothesis class $\mathcal{H}$ of Boolean functions, we say that a learning algorithm PAC-learns $\mathcal{H}$ if for any target function $h \in \mathcal{H}$, any distribution $\mathcal{D}$ over the domain, and any $\epsilon > 0$, if the algorithm is given oracle access to i.i.d. samples $(\mathbf{x}, h(\mathbf{x}))$ where $\mathbf{x}$ is sampled according to $\mathcal{D}$, then in polynomial time, the algorithm returns a function $\tilde{h}$ such that $\Pr_{\mathbf{x} \sim \mathcal{D}}\left(\tilde{h}(\mathbf{x}) \neq h(\mathbf{x})\right) \leq \epsilon$ with high probability (for our purposes, it will be enough to consider $\epsilon$ equal to some fixed constant). Note that in the definition above, we allow $\tilde{h}$ not to belong to the hypothesis class $\mathcal{H}$. This is often denoted as "improper" learning, and allows the learning algorithm more power than in "proper" learning, where $\tilde{h}$ must be a member of $\mathcal{H}$.
Fourier Analysis on $\mathbb{R}^d$. In the analysis of Sec. 4, we will consider functions from $\mathbb{R}^d$ to the reals $\mathbb{R}$ or complex numbers $\mathbb{C}$, and view them as elements in the Hilbert space $L^2(\mathbb{R}^d)$ of square integrable functions, equipped with the inner product $\langle f, g \rangle = \int f(\mathbf{x}) \overline{g(\mathbf{x})}\, d\mathbf{x}$ and the norm $\|f\| = \sqrt{\langle f, f \rangle}$. We sometimes write a formula in the variable $\mathbf{x}$ as shorthand for the function mapping $\mathbf{x}$ to that formula's value. Any function $f$ has a Fourier transform $\hat{f}$, which for absolutely integrable functions can be defined as $\hat{f}(\mathbf{w}) = \int f(\mathbf{x}) \exp\left(-2\pi i \langle \mathbf{w}, \mathbf{x} \rangle\right) d\mathbf{x}$, where $i = \sqrt{-1}$ is the imaginary unit. In the proofs, we will use the following well-known properties of the Fourier transform:
Linearity: For scalars $a, b$ and functions $f, g$, $\widehat{af + bg} = a\hat{f} + b\hat{g}$.
Isometry: $\langle f, g \rangle = \langle \hat{f}, \hat{g} \rangle$ and $\|f\| = \|\hat{f}\|$.
Convolution: $\widehat{f \cdot g} = \hat{f} * \hat{g}$, where $*$ denotes the convolution operation: $(f * g)(\mathbf{x}) = \int f(\mathbf{t}) g(\mathbf{x} - \mathbf{t})\, d\mathbf{t}$.
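These properties have exact discrete analogues, which can be sanity-checked numerically (our own illustration): NumPy's DFT with `norm="ortho"` is a unitary map, so linearity and isometry hold up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

def ft(h):
    # orthonormal discrete Fourier transform (a unitary map on C^n)
    return np.fft.fft(h, norm="ortho")

# Linearity: hat(a*f + b*g) = a*hat(f) + b*hat(g)
a, b = 2.0 - 1.0j, 0.5j
lin_lhs = ft(a * f + b * g)
lin_rhs = a * ft(f) + b * ft(g)

# Isometry: <f, g> = <hat(f), hat(g)> and ||f|| = ||hat(f)||
inner_time = np.vdot(g, f)          # np.vdot conjugates its first argument
inner_freq = np.vdot(ft(g), ft(f))
norm_f, norm_ft_f = np.linalg.norm(f), np.linalg.norm(ft(f))
```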
3 Natural Target Functions
In this section, we consider simple target functions of the form $\mathbf{x} \mapsto \left[\sum_{i=1}^{k} \sigma(\langle \mathbf{w}_i, \mathbf{x} \rangle)\right]_{[0,1]}$, where $\sigma(z) = \max\{0, z\}$ is the ReLU function, and $[\cdot]_{[0,1]}$ is the clipping operation on the interval $[0,1]$. This corresponds to depth-2 networks with no bias in the first layer, and where the outputs of the first layer are simply summed and passed through a clipping non-linearity (this operation can also be easily implemented using a second layer composed of two ReLU neurons). Letting $W$ denote the matrix whose columns are $\mathbf{w}_1, \ldots, \mathbf{w}_k$, we can write such predictors as $\mathbf{x} \mapsto f(W^\top \mathbf{x})$ for an appropriate fixed function $f$. Our goal is to show that for such a target function, with virtually any choice of $W$ (essentially, as long as its columns are linearly independent), and any polynomial-time learning algorithm satisfying some conditions, there exists an input distribution on which it must fail.
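A sketch of this target predictor (our own illustration; the weight matrix below is a hypothetical placeholder with linearly independent columns):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def clip_network(x, W):
    """Depth-2 predictor: pass x through k bias-free ReLU neurons (the
    columns of W), sum their outputs, and clip the sum to [0, 1]."""
    s = relu(W.T @ x).sum()
    return float(np.clip(s, 0.0, 1.0))

# A hypothetical W with linearly independent columns (two neurons in R^3):
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
x = np.array([0.3, -0.5, 7.0])
value = clip_network(x, W)   # relu([0.3, -0.5]) = [0.3, 0], so value = 0.3
```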
As the careful reader may have noticed, it is impossible to provide such a target-function-specific result which holds for any algorithm. Indeed, if we fix the target function in advance, we can always “learn” by returning the target function, regardless of the training data. Thus, imposing some constraints on the algorithm is necessary. Specifically, we will consider algorithms which exhibit certain natural invariances to the coordinate system used. One very natural invariance is with respect to orthogonal transformations: For example, if we rotate the input instances in a fixed manner, then an orthogonally-invariant algorithm will return a predictor which still makes the same predictions on those instances. Formally, this invariance is defined as follows:
Let $\mathcal{A}$ be an algorithm which takes as input a dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$ (where $\mathbf{x}_i \in \mathbb{R}^d$) and outputs a predictor of the form $\mathbf{x} \mapsto f(M\mathbf{x})$ (for some function $f$ and matrix $M$ dependent on the dataset). We say that $\mathcal{A}$ is orthogonally-invariant if for any orthogonal matrix $U$, if we feed the algorithm with $\{(U\mathbf{x}_i, y_i)\}_{i=1}^{m}$, the algorithm returns a predictor $\mathbf{x} \mapsto f(\tilde{M}\mathbf{x})$, where $f$ is the same as before and $\tilde{M}$ is such that $\tilde{M}U\mathbf{x}_i = M\mathbf{x}_i$ for all $i$.
The definition as stated refers to deterministic algorithms. For stochastic algorithms, we will understand orthogonal invariance to mean orthogonal invariance conditioned on any realization of the algorithm’s random coin flips.
For example, standard gradient and stochastic gradient descent methods (possibly with coordinate-oblivious regularization, such as $\ell_2$ regularization) can be easily shown to be orthogonally-invariant. (Essentially, this is because the gradient of $\mathbf{w} \mapsto \ell(\langle \mathbf{w}, \mathbf{x} \rangle, y)$ w.r.t. $\mathbf{w}$ is proportional to $\mathbf{x}$. Thus, if we multiply each instance by an orthogonal $U$, the gradient also gets multiplied by $U$. Since $U^\top U = I$, the inner products of instances and gradients remain the same. Therefore, by induction, it can be shown that any algorithm which operates by incrementally updating some iterate by linear combinations of gradients will be rotationally invariant.) However, for our results we will need to make a somewhat stronger invariance assumption, namely invariance to general invertible linear transformations of the data (not necessarily just orthogonal); the definition is the same as above, with an arbitrary invertible matrix in place of the orthogonal matrix $U$.
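The argument above can be checked numerically: if gradient descent on the squared loss of a linear predictor is initialized at zero, then running it on orthogonally-transformed instances yields identical predictions on those transformed instances. The following small experiment is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 30
X = rng.normal(size=(m, d))   # rows are the instances
y = rng.normal(size=m)

def gd(X, y, steps=200, eta=0.01):
    # plain gradient descent on the mean squared loss of w -> <w, x>,
    # initialized deterministically at zero
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= eta * (2.0 / len(y)) * (X.T @ (X @ w - y))
    return w

# A random orthogonal matrix via QR decomposition
U, _ = np.linalg.qr(rng.normal(size=(d, d)))

w_plain = gd(X, y)
w_rot = gd(X @ U, y)          # same algorithm on orthogonally-transformed instances

preds_plain = X @ w_plain
preds_rot = (X @ U) @ w_rot   # predictions on the transformed instances
```

By induction over the iterations, the iterate on the transformed data is the transformed iterate, so both runs make identical predictions on corresponding instances.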
One well-known example of such an algorithm (which is also invariant to affine transformations) is the Newton method. More relevant to our purposes, linear invariance occurs whenever an orthogonally-invariant algorithm preconditions or "whitens" the data so that its covariance has a fixed structure (e.g. the identity matrix, possibly after a dimensionality reduction if the data is rank-deficient). For example, even though gradient descent methods are not linearly invariant, they become so if we precede them by such a preconditioning step. This is formalized in the following theorem:
Let $\mathcal{A}$ be any algorithm which, given a dataset $\{(\mathbf{x}_i, y_i)\}_{i=1}^{m}$, computes the whitening matrix $\sqrt{m}\, S^{-1} U^\top$ (where $X = USV^\top$ is a thin SVD decomposition of the data matrix $X$ whose columns are $\mathbf{x}_1, \ldots, \mathbf{x}_m$; that is, if $X$ is of size $d \times m$ and of rank $r$, then $U$ is of size $d \times r$, $S$ is of size $r \times r$, and $V$ is of size $m \times r$), feeds the whitened dataset $\{(\sqrt{m}\, S^{-1} U^\top \mathbf{x}_i, y_i)\}_{i=1}^{m}$ to an orthogonally-invariant algorithm, and given the output predictor $\mathbf{x} \mapsto f(M\mathbf{x})$, returns the predictor $\mathbf{x} \mapsto f(M \sqrt{m}\, S^{-1} U^\top \mathbf{x})$. Then $\mathcal{A}$ is linearly-invariant.
It is easily verified that the empirical covariance matrix of the transformed instances is the identity matrix (that is, $\frac{1}{m}\sum_{i=1}^{m} \mathbf{z}_i \mathbf{z}_i^\top = I$, where $\mathbf{z}_i$ are the transformed instances), so this is indeed a whitening transform. We note that whitening is a very common preprocessing heuristic, and even when not done explicitly, scalable approximate whitening and preconditioning methods (such as Adagrad [12]) are very common and widely recognized as useful for training neural networks.
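A sketch of this whitening step (our own reconstruction; we take the empirical covariance to be $\frac{1}{m}\sum_i \mathbf{z}_i \mathbf{z}_i^\top$, which fixes the $\sqrt{m}$ scaling):

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 5, 200
X = rng.normal(size=(d, m))    # data matrix; columns are the instances

# Thin SVD of the data matrix: X = U S V^T
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Whitening matrix: maps the data so its empirical covariance becomes I
Wh = np.sqrt(m) * (np.diag(1.0 / s) @ U.T)

Z = Wh @ X                     # whitened instances (columns)
cov = Z @ Z.T / m              # empirical covariance of the whitened data
```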
To show our result, we rely on a reduction from a PAC-learning problem known to be computationally hard, namely learning intersections of halfspaces. These are Boolean predictors parameterized by $\mathbf{w}_1, \ldots, \mathbf{w}_k$ and $b_1, \ldots, b_k$, which compute a mapping of the form $\mathbf{x} \mapsto \bigwedge_{j=1}^{k} \mathbb{1}\left(\langle \mathbf{w}_j, \mathbf{x} \rangle \geq b_j\right)$ (where we let $1$ correspond to 'true' and $0$ to 'false'). The problem of PAC-learning intersections of halfspaces over the Boolean cube has been well-studied. In particular, two known hardness results are the following:
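Such a predictor can be sketched as follows (our own illustration, outputting 1 for 'true' and 0 for 'false'; the parameter values are placeholders):

```python
import numpy as np

def intersection_of_halfspaces(x, W, b):
    """Returns 1 iff x satisfies every constraint <w_j, x> >= b_j
    (the conjunction of k halfspace indicators), and 0 otherwise."""
    return int(np.all(W @ x >= b))

# Two halfspaces in R^3 with small integer parameters (placeholders):
W = np.array([[1, 1, 0],
              [0, 1, -1]])
b = np.array([1, 0])

inside = intersection_of_halfspaces(np.array([1, 1, 0]), W, b)    # both hold
outside = intersection_of_halfspaces(np.array([1, -1, 0]), W, b)  # first fails
```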
Klivans and Sherstov show that under a certain well-studied cryptographic assumption (hardness of finding unique shortest vectors in a high-dimensional lattice), no algorithm can PAC-learn intersections of $d^{\epsilon}$ halfspaces in dimension $d$ (where $\epsilon$ is any positive constant), even if the coordinates of $\mathbf{w}_j$ and $b_j$ are all integers of bounded magnitude.
Daniely and Shalev-Shwartz show that under an assumption related to the hardness of refuting random K-SAT formulas, no algorithm can PAC-learn intersections of $\omega(\log d)$ halfspaces (as the dimension $d \to \infty$), even if the coordinates of $\mathbf{w}_j$ and $b_j$ are all integers of bounded magnitude.
In the theorem below, we will use the result of Daniely and Shalev-Shwartz, which applies to an intersection of a smaller number of halfspaces, and with smaller norms. However, similar results can be shown using the result of Klivans and Sherstov, at the cost of worse polynomial dependencies on the problem parameters.
The main result of this section is the following:
Consider any network $N$ of the form described above, defined by weight vectors $\mathbf{w}_1, \ldots, \mathbf{w}_k$, which satisfies the following:
$\mathbf{w}_1, \ldots, \mathbf{w}_k$ are linearly independent, so the smallest singular value of the matrix $W = (\mathbf{w}_1, \ldots, \mathbf{w}_k)$ is strictly positive.
Then under the hardness assumption discussed above, there is no linearly-invariant algorithm which, for any accuracy parameter and any input distribution over vectors of bounded norm, given only access to samples $(\mathbf{x}, N(\mathbf{x}))$, runs in polynomial time and returns with high probability a predictor attaining small expected squared loss.
Note that the result holds even if the returned predictor has a different structure than the target network $N$, and is of a larger size. Thus, it applies even if the algorithm is allowed to train a larger network or a more complicated predictor than $N$.
The proof (which is provided in Sec. 5) can be sketched as follows: First, the hardness assumption for learning intersections of halfspaces is shown to imply hardness of learning networks as described above (even when restricted to weight matrices $W$ with linearly independent columns, a restriction which will be important later). However, this only implies that no algorithm can learn in the worst case over all target networks and all input distributions. In contrast, we want to show that learning would be difficult even for some fixed target network. To do so, we show that if an algorithm is linearly invariant, then the ability to learn some fixed target network with respect to all input distributions implies the ability to learn all target networks with respect to all input distributions. Roughly speaking, we argue that for linearly-invariant algorithms, "average-case" and "worst-case" hardness are the same here. Intuitively, this is because given some arbitrary target network and input distribution, we can create a different input distribution, under which the new learning problem "looks like" the original one after a linear transformation (see Figure 1 for an illustration). Therefore, a linearly-invariant algorithm which succeeds on one will also succeed on the other.
A bit more formally, let us fix some matrix $W$ (with linearly independent columns), and suppose we have a linearly-invariant algorithm which can successfully learn the associated network with respect to any input distribution. Let $\tilde{W}$ be some other matrix, and $\tilde{\mathcal{D}}$ some distribution, with respect to which we wish to learn (where $\tilde{W}$ has full column rank and is of the same size as $W$). Then it can be shown that there is an invertible matrix $M$ such that the network defined by $\tilde{W}$ computes, on any input $\tilde{\mathbf{x}}$, the same value as the network defined by $W$ on the input $M\tilde{\mathbf{x}}$. Since the algorithm successfully learns with respect to any input distribution, it would also successfully learn if we use the input distribution defined by sampling $\tilde{\mathbf{x}} \sim \tilde{\mathcal{D}}$ and returning $M\tilde{\mathbf{x}}$. This means that the algorithm would successfully learn from data of the form $(M\tilde{\mathbf{x}}, y)$, where $y$ is the target value. Since the algorithm is linearly-invariant, it can be shown that this implies successful learning from data of the form $(\tilde{\mathbf{x}}, y)$, where $\tilde{\mathbf{x}} \sim \tilde{\mathcal{D}}$ and $y$ is the value of the network defined by $\tilde{W}$ on $\tilde{\mathbf{x}}$, as required.
In the sketch above, we have ignored some technical issues. For example, we need to be careful that the matrix $M$ has a bounded spectral norm, so that it induces a linear transformation which does not distort norms by too much (as all our arguments apply to input distributions supported on a bounded domain). A second issue is that if we apply a linearly-invariant algorithm to a dataset transformed by $M$, then the invariance is only with respect to the data, not necessarily with respect to new instances sampled from the same distribution (and this restriction is necessary for results such as Thm. 1 to hold without further assumptions). However, it can be shown that if the dataset is large enough, invariance will still occur with high probability over the sampling of the data, which is sufficient for our purposes.
4 Natural Input Distributions
In this section, we consider the difficulty of gradient-based methods in learning certain target functions, even with respect to smooth, well-behaved distributions over $\mathbb{R}^d$. Specifically, we will consider functions of the form $\mathbf{x} \mapsto \psi(\langle \mathbf{w}, \mathbf{x} \rangle)$, where $\mathbf{w}$ is a vector of bounded norm and $\psi$ is a periodic function. Note that if $\psi$ is continuous and piecewise linear, then such a function can be implemented by a depth-2 ReLU network on any bounded subset of the domain. More generally, any continuous periodic function can be approximated arbitrarily well by such networks.
Our formal results rely on Fourier analysis and are a bit technical. Hence, we precede them with an informal description, outlining the main ideas and techniques, and presenting a specific case study which may be of independent interest (Subsection 4.1). The formal results are presented in Subsection 4.2.
4.1 Informal Description of Results and Techniques
Consider a target function of the form $\mathbf{x} \mapsto \psi(\langle \mathbf{w}, \mathbf{x} \rangle)$, and any input distribution whose density function can be written as $\varphi^2$, the square of some function $\varphi$ (the reason for this will become apparent shortly). Suppose we attempt to learn this target function (with respect to the squared loss) using some hypothesis class, which can be parameterized by a bounded-norm vector $\boldsymbol{\theta}$ in some subset of a Euclidean space (not necessarily of the same dimensionality as $\mathbf{x}$), so each predictor in the class can be written as $\mathbf{x} \mapsto p_{\boldsymbol{\theta}}(\mathbf{x})$ for some fixed mapping $\boldsymbol{\theta} \mapsto p_{\boldsymbol{\theta}}$. Thus, our goal is essentially to solve the stochastic optimization problem $\min_{\boldsymbol{\theta}} \mathbb{E}_{\mathbf{x}}\left[\left(p_{\boldsymbol{\theta}}(\mathbf{x}) - \psi(\langle \mathbf{w}, \mathbf{x} \rangle)\right)^2\right]$ (Eq. (2)).
In this section, we study the geometry of this objective function, and show that under mild conditions on $\psi$ and the input distribution, and assuming the norm of $\mathbf{w}$ is reasonably large, the following holds:
For any fixed $\boldsymbol{\theta}$, the value of the objective function is almost independent of $\mathbf{w}$, in the sense that if we pick the direction of $\mathbf{w}$ uniformly at random, the value is extremely concentrated around a fixed value independent of $\mathbf{w}$ (with deviations, e.g., exponentially small in the norm of $\mathbf{w}$ for a Gaussian or a mixture of Gaussians).
Similarly, the gradient of the objective function with respect to $\boldsymbol{\theta}$ is almost independent of $\mathbf{w}$, and is extremely concentrated around a fixed value (again, with exponentially small deviations for, say, a mixture of Gaussians).
Therefore, assuming the norm of $\mathbf{w}$ is reasonably large, any standard gradient-based method will follow a trajectory nearly independent of $\mathbf{w}$. In fact, in practice we do not even have access to exact gradients of Eq. (2), but only to noisy and biased versions of them (e.g. if we perform stochastic gradient descent, and certainly if we use finite-precision computations). In that case, the noise will completely obliterate the exponentially small signal about $\mathbf{w}$ in the gradients, and will make the trajectory essentially independent of $\mathbf{w}$. As a result, assuming $\psi$ and the distribution are such that the objective function is sensitive to the direction of $\mathbf{w}$, it follows that these methods will fail to optimize Eq. (2) successfully. Finally, we note that in practice, it is common to solve not Eq. (2) directly, but rather its empirical approximation with respect to some fixed finite training set. Still, by concentration of measure, this empirical objective converges to the one in Eq. (2) given enough data, so the same issues will occur.
An important feature of our results is that they make virtually no structural assumptions on the predictors in the hypothesis class. In particular, they can represent arbitrary classes of neural networks (as well as other predictor classes). Thus, our results imply that target functions of the form $\mathbf{x} \mapsto \psi(\langle \mathbf{w}, \mathbf{x} \rangle)$, where $\psi$ is periodic, would be difficult to learn using gradient-based methods, even if we allow improper learning and consider predictor classes of a different structure.
To explain how such results are attained, let us study a concrete special case (not necessarily in the context of neural networks). Consider the target function $\mathbf{x} \mapsto \cos(\langle \mathbf{w}^*, \mathbf{x} \rangle)$, and the hypothesis class (parameterized by a vector $\mathbf{w}$) of functions $\mathbf{x} \mapsto \cos(\langle \mathbf{w}, \mathbf{x} \rangle)$. Thus, Eq. (2) takes the form $\min_{\mathbf{w}} \mathbb{E}_{\mathbf{x}}\left[\left(\cos(\langle \mathbf{w}, \mathbf{x} \rangle) - \cos(\langle \mathbf{w}^*, \mathbf{x} \rangle)\right)^2\right]$ (Eq. (3)).
Furthermore, suppose the input distribution is a standard Gaussian on $\mathbb{R}^d$. In two dimensions, and for a particular choice of $\mathbf{w}^*$, the objective function in Eq. (2) turns out to have the form illustrated in Figure 2. This objective function has only three critical points: A global maximum at $\mathbf{w} = \mathbf{0}$, and two global minima at $\mathbf{w}^*$ and $-\mathbf{w}^*$. Nevertheless, it would be difficult to optimize using gradient-based methods, since it is extremely flat everywhere except close to the critical points. As we will see shortly, the same phenomenon occurs in higher dimensions. In high dimensions, if the direction of $\mathbf{w}$ is chosen randomly, we will be overwhelmingly likely to initialize far from the global minima, and hence will start in a flat plateau in which most gradient-based methods will stall. (Although there are techniques to overcome flatness, e.g. by normalizing the gradient [19, 10], in our case the normalization factor will be huge and require extremely precise gradient information, which, as discussed earlier, is unrealistic here.)
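For this particular case study, the objective can be computed in closed form via the Gaussian identity $\mathbb{E}_{\mathbf{x} \sim \mathcal{N}(0, I)}[\cos\langle \mathbf{u}, \mathbf{x} \rangle] = e^{-\|\mathbf{u}\|^2/2}$, which makes the flatness easy to verify numerically. The following sketch is our own illustration: for vectors of norm 10 in 50 dimensions, the objective at randomly-directed points is numerically indistinguishable from a constant, while it vanishes at the global minimum.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(w, w_star):
    """Closed form of E_{x ~ N(0, I)}[(cos<w,x> - cos<w*,x>)^2], obtained
    by expanding the square and using E[cos<u,x>] = exp(-||u||^2 / 2)."""
    sq = lambda u: float(u @ u)
    return (0.5 * (1.0 + np.exp(-2.0 * sq(w)))
            + 0.5 * (1.0 + np.exp(-2.0 * sq(w_star)))
            - np.exp(-sq(w - w_star) / 2.0)
            - np.exp(-sq(w + w_star) / 2.0))

def random_direction(d, norm):
    v = rng.normal(size=d)
    return v * (norm / np.linalg.norm(v))

d, norm = 50, 10.0
w_star = random_direction(d, norm)

# The objective at random directions is essentially the constant 1 ...
vals = [objective(random_direction(d, norm), w_star) for _ in range(5)]
# ... while at the global minimum w = w_star it vanishes
at_min = objective(w_star, w_star)
```

Any method relying on the differences between these nearly identical values would need accuracy far beyond machine precision.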
We now turn to explain why Eq. (3) has the form shown in Figure 2. This will also help to illustrate our proof techniques, which apply much more generally. The main idea is to analyze the Fourier transform of Eq. (3). Letting $\varphi$ denote the function whose square $\varphi^2$ is the standard Gaussian density, we can write Eq. (3) as $\left\|\varphi \cdot \left(\cos(\langle \mathbf{w}, \cdot \rangle) - \cos(\langle \mathbf{w}^*, \cdot \rangle)\right)\right\|^2$, where $\|\cdot\|$ is the standard norm over the space of square integrable functions. By standard properties of the Fourier transform (as described in Sec. 2), this squared norm of a function equals the squared norm of the function's Fourier transform, which in turn equals $\left\|\hat{\varphi} * \widehat{\cos(\langle \mathbf{w}, \cdot \rangle)} - \hat{\varphi} * \widehat{\cos(\langle \mathbf{w}^*, \cdot \rangle)}\right\|^2$ (Eq. (4)). The Fourier transform of $\cos(\langle \mathbf{w}, \cdot \rangle)$ can be shown to equal a pair of scaled Dirac delta functions centered at $\pm \mathbf{w}/2\pi$, where Dirac's delta $\delta$ is a "generalized" function which satisfies $\int f(\mathbf{x}) \delta(\mathbf{x} - \mathbf{x}_0)\, d\mathbf{x} = f(\mathbf{x}_0)$ for all continuous $f$, and $\int \delta(\mathbf{x})\, d\mathbf{x} = 1$. Plugging this into the above and simplifying, we find that Eq. (4) reduces to a combination of shifted copies of $\hat{\varphi}$, centered at $\pm \mathbf{w}/2\pi$ and $\pm \mathbf{w}^*/2\pi$.
If the input distribution is a standard Gaussian, $\hat{\varphi}$ can be shown to equal a Gaussian-like function $c\, e^{-\alpha \|\cdot\|^2}$ for appropriate constants $c, \alpha > 0$. Plugging back, the expression above is proportional to $\left\|\left(\hat{\varphi}\left(\cdot - \tfrac{\mathbf{w}}{2\pi}\right) + \hat{\varphi}\left(\cdot + \tfrac{\mathbf{w}}{2\pi}\right)\right) - \left(\hat{\varphi}\left(\cdot - \tfrac{\mathbf{w}^*}{2\pi}\right) + \hat{\varphi}\left(\cdot + \tfrac{\mathbf{w}^*}{2\pi}\right)\right)\right\|^2$ (Eq. (5)).
The expression in each inner parenthesis can be viewed as a mixture of two Gaussian-like functions, with centers at $\pm \mathbf{w}/2\pi$ (or $\pm \mathbf{w}^*/2\pi$). Thus, if $\mathbf{w}$ is far from $\pm \mathbf{w}^*$, these two mixtures will have nearly disjoint support, and Eq. (5) will have nearly the same value regardless of $\mathbf{w}$: in other words, it is very flat. Since this equation is nothing more than a re-formulation of the original objective function in Eq. (3) (up to a constant), we get a similar behavior for Eq. (3) as well.
This behavior extends, however, much more generally than the specific objective in Eq. (3). First of all, we can replace the standard Gaussian distribution by any distribution such that $\hat{\varphi}$ has a localized support. This would still imply that Eq. (4) refers to the difference of two functions with nearly disjoint support, and the same flatness phenomenon will occur. Second, we can replace the cosine function by any periodic function $\psi$. By properties of the Fourier transform of periodic functions, we still get localized functions in the Fourier domain (more precisely, the Fourier transform will be localized around integer multiples of a vector proportional to $\mathbf{w}$, up to scaling). Finally, instead of considering hypothesis classes of predictors similar to the target function, we can consider quite arbitrary parameterized predictors. Even though such a predictor may no longer be localized in the Fourier domain, it is enough that the target function alone is localized: This implies that regardless of how the predictor looks, under a random choice of $\mathbf{w}$, only a minuscule portion of the predictor's mass overlaps with the target function, hence getting sufficient signal on $\mathbf{w}$ will be difficult.
As mentioned in the introduction, these techniques and observations bear a close resemblance to hardness results for learning parities over the Boolean cube in the statistical queries learning model. There as well, one considers a Fourier transform (but on the Boolean cube rather than Euclidean space), and essentially shows that functions with a "localized" Fourier transform are difficult to "detect" using any fixed function. However, our results are different and more general, in the sense that they apply to generic smooth distributions over Euclidean space, and to a general class of periodic functions, rather than just parities. On the flip side, our results are constrained to methods which are based on gradients of the objective, whereas the statistical queries framework is more general and considers algorithms which are based on computing (approximate) expectations of arbitrary functions of the data. Extending our results to this generality is an interesting topic for future research.
4.2 Formal Results
We now turn to provide a more formal statement of our results. The distributions we will consider consist of arbitrary mixtures of densities, whose square roots have rapidly decaying tails in the Fourier domain. More precisely, we have the following definition:
Let $g$ be some function from $[0, \infty)$ to $[0, \infty)$. A density function on $\mathbb{R}^d$ is $g$-Fourier-concentrated if its square root $\varphi$ belongs to $L^2(\mathbb{R}^d)$, and satisfies $\left\|\hat{\varphi} \cdot \mathbb{1}_{\{\mathbf{w} : \|\mathbf{w}\| \geq r\}}\right\| \leq g(r)$ for all $r \geq 0$, where $\mathbb{1}_{\{\mathbf{w} : \|\mathbf{w}\| \geq r\}}$ is the indicator function of the set $\{\mathbf{w} : \|\mathbf{w}\| \geq r\}$.
A canonical example is Gaussian distributions: Given a (non-degenerate, zero-mean) Gaussian density function with covariance matrix $\Sigma$, its square root is proportional to a Gaussian with covariance $2\Sigma$, and its Fourier transform is well-known to be proportional to a Gaussian with covariance proportional to $\Sigma^{-1}$. By standard Gaussian concentration results, it follows that such a density is Fourier-concentrated with a function $g(r)$ decaying exponentially in $\lambda_{\min} r^2$ (up to constants), where $\lambda_{\min}$ is the minimal eigenvalue of $\Sigma$. A similar bound can be shown when the Gaussian has some arbitrary mean. More generally, it is well-known that smooth functions (differentiable to sufficiently high order with integrable derivatives) have Fourier transforms with rapidly decaying tails. For example, if we consider the broad class of Schwartz functions (characterized by having values and all derivatives decaying faster than any polynomial), then the Fourier transform of any such function is also a Schwartz function, which implies super-polynomial decay of the tails (see for instance Chapter 11 and Proposition 11.25 in a standard reference on Fourier analysis).
We now formally state our main result for this section. We consider any predictor of the form , where is some fixed function and is a parameter vector coming from some domain , which we will assume w.l.o.g. to be a subset of some Euclidean space444More generally, our analysis is applicable to any separable Hilbert space. (for example, can represent a network of a given architecture, with weights specified by ). When learning based on data coming from an underlying distribution, we are essentially attempting to solve the optimization problem
Assume that is differentiable w.r.t. , any gradient-based method to solve this problem proceeds by computing (or approximating) at various points . However, the following theorem shows that at any , and regardless of the type of predictor or network one is attempting to train, the gradient at is virtually independent of the underlying target function, and hence provides very little signal:
is a periodic function of period , which has bounded variation on every finite interval.
is a density function on , which can be written as a (possibly infinite) mixture , where each is an Fourier-concentrated density function.
At some fixed , for some .
Then for some universal positive constants , if , and is a vector of norm chosen uniformly at random, then
We note that bounded variation is weaker than, say, Lipschitz continuity. Assuming decays rapidly with – say, exponentially in as is the case for a Gaussian mixture – we get that the bound in the theorem is on the order of .
Overall, the theorem implies that if are moderately large, the gradient of at any point is extremely concentrated around a fixed value, independent of . This implies that gradient-based methods, which attempt to optimize via gradient information, are unlikely to succeed. One way to formalize this is to consider any iterative algorithm (possibly randomized), which relies on an -approximate gradient oracle to optimize : At every iteration , the algorithm chooses a point , and receives a vector such that . In our case, we will be interested in such that is on the order of the bound in Thm. 3. Since the bound is extremely small for moderate (say, smaller than machine precision), this is a realistic model of gradient-based methods on finite-precision machines, even if one attempts to compute the gradients accurately. The following theorem implies that if the number of iterations is not extremely large (on the order of , e.g. iterations for Gaussian mixtures), then with high probability, a gradient-based method will return the same predictor independent of . However, since the objective function is highly sensitive to the choice of , this means that no such gradient-based method can train a reasonable predictor.
Assume the conditions of Thm. 3, and let be the cube root of the bound specified there (uniformly over all ). Then for any algorithm as above and any , conditioned on an event which holds with probability over the choice of , its output after at most iterations will be independent of .
5.1 Proof of Thm. 1
Let denote the whitening matrix employed if we transform the instances by some invertible matrix (that is, becomes ), and the whitening matrix employed for the original data.
Using the same notation as in the theorem, it is easily verified that , and , where is an SVD decomposition of the matrix . Since both and are
matrices with rows consisting of orthonormal vectors, they are related by an orthogonal transformation (i.e. there is an orthogonal matrixsuch that ). Therefore, . Since the data is fed to an orthogonally-invariant algorithm, its output satisfies . This in turn implies , and hence . Multiplying both sides on the right by and taking a transpose, we get that , and hence . In words, and are the same up to an orthogonal transformation depending on . Therefore,
so we see that the returned predictor makes the same predictions over the dataset, independent of the transformation matrix .
5.2 Proof of Thm. 2
We start with the following auxiliary theorem, which reduces the hardness result of  to one about neural networks of the type we discuss here:
Under the assumption stated in , the following holds for any (as ):
There is no algorithm running in time , which for any distribution on , and any (where and ), given only access to samples where , returns with high probability a function such that
Suppose by contradiction that there exists an algorithm which for any distribution and as described in the theorem, returns a function such that with high probability.
In particular, let us focus on distributions supported on . For these distributions, we argue that any intersection of halfspaces on specified by with integer coordinates, and integer , can be specified as for some function as described in the theorem statement. To see this, note that for any and in the support of , is an integer, hence
Therefore, for any distribution over examples labelled by an intersection of halfspaces (with integer-valued coordinates and bounded norms), by feeding with , the algorithm returns a function , such that with high probability, , and therefore
In particular, if we consider the Boolean function , where if and if , we argue that . Since is arbitrary, and specifies an intersection of halfspaces, this would contradict the hardness result of , and therefore prove the theorem. This argument follows from the following chain of inequalities, where denotes the indicator function:
Thm. 5 holds even if we restrict to be linearly independent, with .
Suppose by contradiction that there exists an algorithm which succeeds for any as stated above. We will describe how to use to get an algorithm which succeeds for any as described in Thm. 5, hence reaching a contradiction.
Specifically, suppose we have access to samples , where is supported on , and where is any matrix as described in Thm. 5. We do the following: We map every to by , run on the transformed samples to get some predictor , and return the predictor .
To see why this reduction works, we note that the mapping we have defined, where is distributed according to , induces a distribution on . Let be the matrix (that is, we add another unit matrix below ). We have , so the minimal eigenvalue of is at least , hence , so satisfies the conditions in the proposition. Moreover, the norm of each column of is larger than the norm of the corresponding column in by at most , so the norm constraint in Thm. 5 still holds. Finally, for all , and therefore . Thus, the distribution of (which is used to feed the algorithm ) is a valid distribution corresponding to the conditions of the proposition and Thm. 5 (only in dimension instead of ), so returns with high probability a predictor such that
However, , , so the returned predictor satisfies
This contradicts Thm. 5, which states that no efficient algorithm can return such a predictor for any sufficiently large dimension and norm bound . ∎
In the definitions of orthogonal invariance and linear invariance, we only required the invariance to hold with respect to instances in the dataset. A stronger condition is that the invariance is satisfied for any . However, the following lemma shows that invariance w.r.t. a dataset sampled i.i.d. from some distribution is sufficient to imply invariance w.r.t. “nearly all” (under the same distribution):
Suppose the dataset is sampled i.i.d. from some distribution (where ), then the following holds with probability at least for any : For any invertible and linearly-invariant algorithm (or orthogonal and orthogonally-invariant algorithm), conditioned on the algorithm’s internal randomness, the returned matrices and (with respect to the original data and the data transformed by respectively) satisfy
It is enough to prove that with probability at least over the sampling of ,
This is because the event for all means that for any in the span of .
Let be sampled i.i.d. according to . Considering probabilities over this sample, we have
where the latter inequality is because each is a -dimensional vector, hence the number of times we can get a vector not in the span of the previous ones is at most . Moreover, since the vectors are sampled i.i.d, we have
This is equivalent to