1 Introduction
Semi-supervised learning is a research field of growing interest within machine learning. It is attractive due to the availability of large amounts of unlabeled data and the growing desire to exploit it in order to improve the quality of predictions and inference in downstream applications. Although many proposed methods have been successful empirically, a formal understanding of the pros and cons of different semi-supervised methods is still incomplete.
The goal of this paper is to study the tradeoffs between some recently proposed Laplacian regularization algorithms for graph-based semi-supervised learning. In the noiseless setting, the problem amounts to a particular form of interpolation of a graph-based function. More precisely, consider a graph $G = (V, E)$,
where $V$ is a set of vertices and $E$ is the set of edges, equipped with a set of nonnegative edge weights. For some subset $S$ of the vertex set, say with cardinality $m$, and an unknown function $f : V \to \mathbb{R}$, suppose that we are given observations of the function at the specified subset of vertices. Our goal is to use the observed values to make predictions of the function values at the remaining vertices in a way that agrees with $f$ as much as possible. In order to render this problem well-posed, the behavior of the function must be tied to the properties of the graph $G$. In a statistical context, one such requirement is that the marginal distribution of the points be related to the regression function $f$, for instance by requiring that $f$ be smooth on regions of high density. This assumption and variants thereof are collectively referred to as the cluster assumption in the semi-supervised learning literature.
Under such a graph-based smoothness assumption, one reasonable method for extrapolation is to penalize the change of the function value between neighboring vertices while agreeing with the observations. A widely used approach involves using the $\ell_2$-based graph Laplacian as a regularizer; doing so leads to the objective
$$\min_{f}\; \sum_{(i,j)\in E} w_{ij}\,\bigl(f(i)-f(j)\bigr)^2 \quad \text{subject to } f(i)=y_i \text{ for all } i\in S, \qquad (1)$$
where the penalization is enforced by the quadratic form $f^\top L f$ defined by the graph Laplacian $L$ [ZGL03]. This method is closely tied to heat diffusion on the graph and has a probabilistic interpretation in terms of a random walk on the graph. Unfortunately, solutions of this objective are badly behaved in the sense that they tend to be constant everywhere except for the points associated with observations [NSZ09]. The solution must then have sharp variations near those points in order to respect the measurement constraints.
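To make the $\ell_2$ objective (1) concrete, here is a minimal numerical sketch on a toy instance of our own choosing (a path graph; none of these names come from the paper): minimizing the quadratic form subject to the label constraints reduces to solving a linear system in the unlabeled values.

```python
import numpy as np

# Toy graph: a path on 5 vertices with unit edge weights.
# Vertices 0 and 4 are labeled; 1, 2, 3 are unlabeled.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
labeled, unlabeled = [0, 4], [1, 2, 3]
y = np.array([0.0, 1.0])                # observed values f(0)=0, f(4)=1

# Minimizing sum_{ij} w_ij (f_i - f_j)^2 subject to the constraints
# amounts to solving L_uu f_u = -L_ul y (first-order optimality).
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y)

print(f_u)   # on a path, the harmonic solution is linear: [0.25, 0.5, 0.75]
```

On this one-dimensional toy graph the solution is benign; the spikes described above only emerge in higher-dimensional point clouds.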
Given this undesirable property, several alternative methods have been proposed in recent work (e.g., [AL11, BZ13, ZB11, KRSS15]). One such formulation is based on interpolating the observed points exactly while penalizing the maximal gradient value on neighboring vertices:
$$\min_{f}\; \max_{(i,j)\in E}\; w_{ij}\,\bigl|f(i)-f(j)\bigr| \quad \text{subject to } f(i)=y_i \text{ for all } i\in S. \qquad (2)$$
Any solution to this variational problem is known as an inf-minimizer. [KRSS15] recently proposed a fast algorithm for solving the optimization problem (2). In fact, their algorithm finds a specific solution that not only minimizes the maximum gradient value, but also the second-largest one among all minimizers of the former, and so on; that is to say, they find a solution such that the gradient vector, sorted in decreasing order,
is minimal in the lexicographic ordering, which they refer to as the lex-minimizer. They observed empirically that when the graph size $n$ grows to infinity while both the degree of the graph and the number of observations are held fixed, the lex-minimizer is a better-behaved solution than its Laplacian counterpart. More precisely, the observed advantage is twofold: (a) the solution is a better interpolation of the observed values; and (b) the average error for the lex-minimizer remains stable, while it quickly diverges with $n$ for the Laplacian minimizer. Their experiments, together with the known limitations of the Laplacian regularization method, point to the possible superiority of the $\ell_\infty$-based formulation (2) over the $\ell_2$-based formulation (1) in a semi-supervised setting. However, we currently lack a theoretical understanding of this assertion. Accordingly, we aim to fill this gap in the theoretical understanding of Laplacian-based regularization by studying both formulations in the asymptotic limit as the graph size goes to infinity. We conduct our investigation in the context of a more general objective that encompasses the approaches (1) and (2) as special cases. In particular, for a positive integer $p$, we consider the variational problem defined by the objective
$$J_p(f) \;:=\; \sum_{(i,j)\in E} w_{ij}\,\bigl|f(i)-f(j)\bigr|^{p}. \qquad (3)$$
The objective $J_p$ is referred to as the (discrete) $p$-Laplacian of the graph in the literature; for instance, see the papers by [ZS05] and [BH09], as well as references therein. It offers a way to interpolate between the 2-Laplacian regularization method and the inf-minimization approach. It is then natural to consider the general family of interpolation problems based on $\ell_p$-Laplacian regularization, namely
$$\min_{f}\; J_p(f) \quad \text{subject to } f(i)=y_i \text{ for all } i\in S. \qquad (4)$$
Formulations (1) and (2) are recovered respectively with $p = 2$ and $p = \infty$. Indeed, in the latter case, observe that $J_p(f)^{1/p}$ converges to the maximal (weighted) gradient value as $p \to \infty$. Moreover, under certain regularity assumptions ([EH90, KRSS15]), it follows that the lex-minimizer is the limit of the (unique) minimizers of (4) as $p$ grows to infinity;^1 that is, we have the equivalence

^1 The construction of this sequence of minimizers is known as the Pólya algorithm, and the study of its rate of convergence is a classical problem in approximation theory ([DLT83, ET87, LT89, EH90]).
$$f_{\mathrm{lex}} \;=\; \lim_{p \to \infty}\; \arg\min_{f \,:\, f(i) = y_i,\ i \in S}\; J_p(f). \qquad (5)$$
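The family (4) can be explored numerically on a small instance; the sketch below is our own illustration (a toy path graph with a generic optimizer standing in for the fast algorithm of [KRSS15]), minimizing $J_p$ for several values of $p$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: a path on 5 vertices with unit weights; minimize
# J_p(f) = sum_{(i,j) in E} |f_i - f_j|^p over the three unlabeled
# values, with f(0) = 0 and f(4) = 1 held fixed.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

def J_p(f_u, p):
    f = np.concatenate(([0.0], f_u, [1.0]))
    return sum(abs(f[i] - f[j]) ** p for i, j in edges)

solutions = {}
for p in (2, 4, 8):
    res = minimize(lambda fu: J_p(fu, p), x0=np.full(3, 0.5),
                   method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-12})
    solutions[p] = res.x
    print(p, np.round(res.x, 3))
# On a path, every J_p has the same evenly spaced minimizer; the choice
# of p only matters on graphs with richer connectivity, as discussed above.
```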
Our contributions:
We analyze the behavior of $\ell_p$-Laplacian interpolation when the underlying graph is drawn from a geometric random model. Our first main result is to derive a variational problem that is the almost-sure limit of the formulation (4) in the asymptotic regime where the size of the graph grows to infinity. For a twice-differentiable function $f$, we use $\nabla f$ and $\nabla^2 f$ to denote its gradient and Hessian, respectively. Letting $\rho$ denote the density of the vertices in a latent space, we show that for any even integer $p$, solutions of this variational problem must satisfy the partial differential equation
$$\Delta f + (p - 2)\,\Delta_\infty f + 2\,\langle \nabla \log \rho,\, \nabla f \rangle = 0,$$
where $\Delta$ is the usual Laplacian operator, while $\Delta_\infty f := \|\nabla f\|^{-2}\,\langle \nabla f,\, (\nabla^2 f)\,\nabla f\rangle$ is the $\infty$-Laplacian operator, which is defined to be zero when $\nabla f = 0$.
This theory then yields several predictions on the behavior of these regularization methods when the number of labeled examples is fixed while the number of unlabeled examples becomes infinite: the $\ell_p$ method leads to degenerate solutions when $p \le d$; i.e., they are discontinuous, a manifestation of the curse of dimensionality. On the other hand, the solution is continuous when $p > d$. The solution depends on the underlying distribution of the data for all finite values of $p$; however, when $p = \infty$, the solution does not depend on the underlying density $\rho$. Consequently, as the graph size increases, the lex- and inf-minimizers end up interpolating the observed values without exploiting the additional knowledge of the density of the features that is provided by the abundance of unlabeled data. In order to illustrate the consequences of this last property, we study a simple one-dimensional regression problem whose intrinsic difficulty is controlled by a parameter. We show that the 2-Laplacian method has an estimation rate independent of this parameter, while the infinity-minimization approach has an error rate that grows with it. As shown by our analysis, this important difference can be traced back to whether or not the method leverages the knowledge of $\rho$. We also provide an array of experiments that illustrate some pros and cons of each method, and show that our theory predicts these behaviors accurately. Overall, our theory lends support to using intermediate values of $p$ that lead to nondegenerate solutions while remaining sensitive to the underlying data distribution.
2 Generative Model
We follow the popular assumption in the semi-supervised learning literature that the graph represents the metric properties of a point cloud in $d$-dimensional Euclidean space ([ZGL03, BCH03, BN04, Hei06a, NSZ09, ZB11]). More precisely, suppose that we are given a probability distribution $P$ on the unit hypercube $[0,1]^d$
having a smooth density $\rho$ with respect to the Lebesgue measure, as well as a bounded, decreasing kernel function $K$. We then draw an i.i.d. sequence of samples $X_1, X_2, \ldots$ from $P$; these vectors are identified with the vertices of the graph $G$. Finally, we associate to each pair of vertices the edge weight $w_{ij} = K(\|X_i - X_j\|/h)$, where $h > 0$ is a bandwidth parameter. We use $G_n$ to denote the random graph generated in this way.

Degree asymptotics
Given a sequence of graphs generated as above, we study the behavior of the minimizers of $J_p$ in the limit $n \to \infty$. A first step is to understand the behavior of the graph itself, and in particular its degree distribution, in this limit. In order to gain intuition for this issue, consider the special case in which $K$ is the indicator function of the unit interval. If the bandwidth parameter $h$ is held fixed, then any sequence of points that fall in a ball of radius $h$ will form a clique. Thus, the graph will contain roughly $h^{-d}$ cliques, each with approximately $n h^d$ vertices. It is typically desired that the sequence of graphs be sparse with an appropriate degree growth (e.g., constant or logarithmic growth) so that it converges to the underlying manifold. In order to enforce this behavior, the bandwidth parameter should tend to zero as the sample size increases.
Under this assumption, it can be shown that the scaled degree at any vertex $i$, given by
$$D_i \;:=\; \frac{1}{n h^d} \sum_{j} w_{ij},$$
concentrates around $\rho(X_i)$ (up to a constant factor). A precise statement can be found in [Hei06a]; roughly speaking, it follows from the fact that for any fixed point $x$ in $[0,1]^d$, as $h$ goes to zero and under a smoothness assumption on the density $\rho$, the probability that a random vector falls in an $h$-neighborhood of $x$ scales as $\rho(x)\, h^d$.
3 Variational problem and related PDE
In this section, our main goal is to study the behavior of the solution in the limit as the sample size $n \to \infty$ and the bandwidth $h \to 0$. As discussed above, it is natural to consider scalings under which $h$ decreases in parallel with the increase in $n$. However, for simplicity, we follow [NSZ09] and first take the sample size to infinity with the bandwidth held constant, and then let $h$ go to zero.^2 Our first result characterizes the asymptotic behavior of the objective $J_p$.

^2 In the case $p = 2$, it is known that the same limiting objects are recovered when a joint limit in $n$ and $h$ is taken, with $h \to 0$ at a suitable rate relative to $n$ ([Hei06a]). Based on the discussion above, this scaling implies a superlogarithmic degree sequence. It remains to be verified whether the same result holds for all $p$.
Theorem 3.1.
Let $f$ be continuously differentiable with a bounded derivative, and let $\rho$ be a bounded density. Then for any even integer $p$, we have
$$\lim_{h \to 0}\; \lim_{n \to \infty}\; \frac{J_p(f)}{n^2\, h^{d+p}} \;=\; \sigma \int_{[0,1]^d} \|\nabla f(x)\|^{p}\, \rho^2(x)\, dx \quad \text{almost surely,} \qquad (6)$$
where $\sigma := \int_{\mathbb{R}^d} K(\|z\|)\, |z_1|^p\, dz$.
We provide the proof in Appendix A; it involves applying the strong law of large numbers for $U$-statistics, as well as an auxiliary result on the isotropic nature of the integral of a rank-one tensor.
Based on Theorem 3.1, the asymptotic limit of the semi-supervised learning problem is a supervised nonparametric estimation problem with a regularization term given by the functional $\mathcal{E}_p(f) := \int \|\nabla f(x)\|^p\, \rho^2(x)\, dx$, namely the problem
$$\min_{f}\; \int_{[0,1]^d} \|\nabla f(x)\|^{p}\, \rho^2(x)\, dx \quad \text{subject to } f(x_i) = y_i \text{ for } i = 1, \ldots, m. \qquad (7)$$
Our next main result characterizes the solutions of this optimization problem in terms of a partial differential equation known as the (weighted) $p$-Laplacian equation. Here the word “weighted” refers to the $\rho^2$ term in the functional (7) ([HKM12, Obe13]).
Let us introduce various pieces of notation that are useful in the sequel. Given a vector field $F$, we use $\mathrm{div}\, F$ to denote its divergence. For a scalar-valued function $f$, we let $\Delta f := \mathrm{div}(\nabla f)$ and $\Delta_\infty f := \|\nabla f\|^{-2}\,\langle \nabla f,\, (\nabla^2 f)\,\nabla f\rangle$
denote the (standard) Laplacian operator and the $\infty$-Laplacian operator, respectively.
Theorem 3.2.
The proof employs standard tools from the calculus of variations [GF63]. We note here that the solution does not need to be twice differentiable for the above result to hold ([HKM12]), in which case equations (8a) and (8b) have to be understood in the viscosity sense ([CEG01, AS10]). Twice differentiability is assumed only for ease of the proof (see Appendix B).
When $\rho$ is the uniform distribution, equation (8b) reduces to the partial differential equation (PDE)
$$\Delta f + (p - 2)\,\Delta_\infty f = 0,$$
which is known as the $p$-Laplacian equation and is often studied in the PDE literature ([HKM12, Obe13]). If one divides by $p$ and lets $p \to \infty$, one obtains the infinity-Laplacian equation
$$\Delta_\infty f = 0, \qquad (9)$$
subject to the measurement constraints $f(x_i) = y_i$ for $i = 1, \ldots, m$. This problem has been studied by various authors (e.g., [CEG01, ACJ04, PSSW09, AS10]). Note that in dimension $d = 1$, we have $\Delta_\infty f = f''$, and equation (8b) reduces to
$$(p - 1)\, f'' + 2\, (\log \rho)'\, f' = 0.$$
Therefore, if we specialize to the case $p = 2$, the 2-Laplacian regularization method solves the differential equation $(\rho^2 f')' = 0$, whereas if we specialize to $p = \infty$, then the inf-minimization method solves the differential equation $f'' = 0$. Note that the two equations coincide only when $\rho$ is the uniform distribution, in which case $(\log \rho)'$ is uniformly zero.
4 Insights and predictions
Our theory from the previous section allows us to make a number of predictions about the behavior of different regularization methods, which we explore in this section.
4.1 Inf-minimization is insensitive to $\rho$
Observe that the effect of the data-generating distribution $\rho$ has disappeared in equation (9). One can also see this by taking the limit $p \to \infty$ in the objective to be minimized in Theorem 3.1: in particular, assuming that $\rho$ has full support, we have $\lim_{p\to\infty} \bigl(\int \|\nabla f\|^p\, \rho^2\, dx\bigr)^{1/p} = \sup_{x} \|\nabla f(x)\|$.
From the observations above, one can see that in the limit of infinite unlabeled data, i.e., once the distribution $\rho$ is available, the 2-Laplacian regularization method, as well as any $\ell_p$-Laplacian method based on a finite (even) $p$, incorporates knowledge of $\rho$ in computing the solution; in contrast, for $p = \infty$, the inf-minimization method does not (see Figure 2). On the other hand, it has been shown that the 2-Laplacian method is badly behaved for $d \ge 2$ in the sense that the solution tends to be uninformative (constant) everywhere except on the points of observation. The solution must then have sharp variations on those points in order to respect the measurement constraints (see Figure 1 for an illustration of this phenomenon). We show in the next section that this problem plagues the $\ell_p$-Laplacian minimization approach as well whenever $p \le d$.
[Figure 1: panels (a) and (b)]
4.2 $\ell_p$-Laplacian regularization is degenerate for $p \le d$
In this section, we show that $\ell_p$-Laplacian regularization is degenerate for all $p \le d$. This issue was originally addressed by [NSZ09], who provided an example that demonstrates the degeneracy for $p = 2$ and $d \ge 2$. Here we show that the underlying idea generalizes to all pairs with $p \le d$. Recall that Theorem 3.1 guarantees that the limiting objective is $\mathcal{E}_p(f) = \sigma \int \|\nabla f(x)\|^p\, \rho^2(x)\, dx$.
In the remainder of our analysis, we treat the cases $p < d$ and $p = d$ separately.
Case $p < d$:
Beginning with the case $p < d$, we first place one labeled point at the origin and let the other labeled point be any point on the unit sphere. Define the spike function $f_\varepsilon(x) = \max\{0,\, 1 - \|x\|/\varepsilon\}$ for some $\varepsilon \in (0, 1)$, and let the observed values be $1$ at the origin and $0$ on the sphere. Using the fact that $\|\nabla f_\varepsilon\| = 1/\varepsilon$ on the ball of radius $\varepsilon$ (and zero outside), and assuming that $\rho$ is uniformly upper bounded by $M$ on $[0,1]^d$, we have
$$\mathcal{E}_p(f_\varepsilon) \;\le\; \sigma M^2\, \varepsilon^{-p}\, \mathrm{vol}\bigl(B(\varepsilon)\bigr) \;=\; \sigma M^2\, \nu_d\, \varepsilon^{d - p},$$
where $B(\varepsilon)$ denotes the Euclidean ball of radius $\varepsilon$ centered at the origin, and $\nu_d$ denotes the Lebesgue volume of the unit ball in $\mathbb{R}^d$. Consequently, we have $\lim_{\varepsilon \to 0} \mathcal{E}_p(f_\varepsilon) = 0$, so the infimum of $\mathcal{E}_p$ is achieved in the limit by the trivial function that is $0$ everywhere except at the origin, where it takes the value $1$. The key issue here is that the gradient grows at a rate of $1/\varepsilon$ while the measure of the “spike” in the gradient shrinks at a rate of $\varepsilon^d$.
Case $p = d$:
On the other hand, when $p = d$, we take a logarithmic spike, such as $f_\varepsilon(x) = \min\bigl\{1,\, \max\{0,\, \log(1/\|x\|)/\log(1/\varepsilon)\}\bigr\}$, for which we also have $f_\varepsilon = 1$ at the origin and $f_\varepsilon = 0$ on the unit sphere. With this choice, a change of variables to the radial coordinate $r = \|x\|$, together with an upper bound on the density, yields
$$\mathcal{E}_d(f_\varepsilon) \;\lesssim\; \frac{1}{\log^d(1/\varepsilon)} \int_{\varepsilon}^{1} \frac{dr}{r} \;=\; \frac{1}{\log^{d-1}(1/\varepsilon)}.$$
Again, we find that $\lim_{\varepsilon \to 0} \mathcal{E}_d(f_\varepsilon) = 0$. Thus, in order to avoid degeneracies, it is necessary that $p > d$.
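A back-of-the-envelope check: for the spike $f_\varepsilon(x) = \max\{0, 1 - \|x\|/\varepsilon\}$ (our computation, under a density bounded by $M$), the continuum energy is at most a constant times $\varepsilon^{d-p}$, which vanishes as $\varepsilon \to 0$ exactly when $p < d$. The snippet evaluates this bound in $d = 3$.

```python
import math

def spike_energy_bound(eps, p, d=3, M=1.0):
    """Bound M^2 * nu_d * eps^(d - p) on the continuum energy of the spike:
    its gradient has norm 1/eps on the ball of radius eps, zero elsewhere."""
    nu_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of the unit ball
    return M ** 2 * nu_d * eps ** (d - p)

for p in (2, 4):
    print(p, [round(spike_energy_bound(eps, p), 4) for eps in (0.1, 0.01)])
# p = 2 < d: the bound shrinks with eps, so a degenerate spike is nearly free;
# p = 4 > d: the bound blows up, so spikes are heavily penalized.
```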
It is worth noting that [AL11] studied the problem of computing the so-called $p$-resistances of a graph, which are a family of distances on the graph having a formulation similar (in fact, dual) to the $\ell_p$-Laplacian regularization method considered in the present paper. They established a phase transition in the value of $p$ for the geometric random graph model: above a certain threshold, the resistances “[…] depend on trivial local quantities and do not convey any useful information […]” about the graph, while below the threshold, these resistances encode interesting properties of the graph. They conclude by suggesting the use of $\ell_p$-Laplacian regularization with a value of $p$ at this boundary. However, as shown by the examples above, this choice is still problematic, and in fact, the choice $p = d + 1$ is the smallest admissible integer value for $p$.
We also note that the example for $p < d$ extends to an arbitrary number of labeled points: one simply places a spike at each point. Undesirable behavior arises as long as the set of observed points is of measure zero. Finally, we note that both of the above examples can be adapted to the case where the squared loss is optimized along with the regularizer instead of imposing the hard constraint (see Appendix D). The issue is that the regularizer is too weak and allows the solution to be chosen from a very large class of functions.
4.3 $\ell_p$-Laplacian solution is smooth for $p > d$
At this point, a natural question is whether the condition $p > d$ is also sufficient to ensure that the solution is well-behaved. In the specific setting described here, the answer is yes.^3 The underlying reason is the Sobolev embedding theorem. More precisely, let $W^{1,p}(\rho)$ denote the weighted Sobolev space of all (weakly) differentiable functions on $[0,1]^d$ such that the seminorm $\bigl(\int \|\nabla f\|^p\, \rho^2\, dx\bigr)^{1/p}$ is finite.
If, moreover, we assume $\rho$ is strictly positive almost everywhere and restrict the above class to functions vanishing on the boundary, then this seminorm actually defines a norm. When $p > d$, and under additional regularity conditions on $\rho$ (e.g., upper- and lower-bounded by constants a.e.), the space $W^{1,p}(\rho)$ can be embedded continuously into the space of Hölder functions of exponent $1 - d/p$, i.e., functions such that $|f(x) - f(y)| \le C\, \|x - y\|^{1 - d/p}$ for all $x, y$, for some dimension-dependent constant $C$. For details, see Theorem 11.34 of [Leo09] or Lemma 5.17 of [AF03]; [BO92] provide some relaxed conditions on $\rho$. Since the minimizer of (7) has a finite regularized objective, it lies in the Sobolev class $W^{1,p}(\rho)$, and therefore it automatically inherits the Hölder smoothness property; i.e., the $\ell_p$-Laplacian solution is smooth for $p > d$, asymptotically as $n \to \infty$.^4 Incidentally, via the examples in the previous section, it is clear that no such embedding exists if $p \le d$.

^3 Interestingly enough, the $p$-Laplacian equation has been extensively studied in nonlinear potential theory. It is in fact the prototypical example of a nonlinear degenerate elliptic equation. The regularity of the solutions is well understood for any real number $p$ (see, e.g., [HKM12]). For our purposes, however, we do not need the full power of this theory.
^4 If the graph is finite, then the solution might still contain small spikes, as apparent in Figures 1 and 2.
4.4 An example where inf-minimization interpolates well
By extension to the case $p = \infty$, the infinity-Laplacian solutions also enjoy continuity; this solution is actually Lipschitz, based on its interpretation as the absolutely minimal Lipschitz extension of the observations ([ACJ04]). It was also argued in [KRSS15], based on experimental results, that the inf-minimization method has better behavior in higher dimensions in terms of faithfulness to the observations. We illustrate this point by considering a simple example, similar to the one above, for which the infinity-Laplacian equation (9) produces a sensible solution. With $S^{d-1}$ denoting the Euclidean unit sphere, suppose that we limit ourselves to functions satisfying the observation constraints $f(0) = 1$ and $f(x) = 0$ for all $x \in S^{d-1}$.
Without any further information on the data-generating process, a reasonable fit is the function $f(x) = 1 - \|x\|$. We claim that it is the only radially symmetric solution to the differential equation $\Delta_\infty f = 0$ with the boundary constraints $f(0) = 1$ and $f(x) = 0$ for all $x \in S^{d-1}$. In order to verify this claim, let $f(x) = \varphi(\|x\|)$ for a function $\varphi : [0, 1] \to \mathbb{R}$. For any nonzero $x$, we have
$$\Delta_\infty f(x) = \varphi''(\|x\|).$$
Given the boundary conditions on $\varphi$, the only solution to $\varphi'' = 0$ is given by $\varphi(r) = 1 - r$, meaning that $f(x) = 1 - \|x\|$. On the other hand, the latter is not a solution to the 2-Laplacian equation, since $\Delta(1 - \|x\|) = -(d-1)/\|x\|$, unless $d = 1$.
In summary, this section reveals a tradeoff between smoothness and sensitivity to the data-generating density in $\ell_p$-Laplacian regularization: the solution is strongly sensitive to $\rho$ but is nonsmooth for small values of $p$, while it is smooth but weakly dependent on $\rho$ for large and infinite values of $p$. The transition from degeneracy to smoothness happens at the sharp threshold $p = d$, while the dependence on $\rho$ weakens as $p$ grows, without any threshold.
While the property of smoothness is an obvious quality in an estimation setting, and may lead to improved statistical rates if assumed firsthand (especially if the signal one wishes to recover is itself smooth), it is less obvious how to quantify the advantages entailed by sensitivity to the underlying data-generating density, especially when the latter is available and is to be incorporated in the design of an estimator. We provide in the next section a simple, one-dimensional regression example where the regression function is tied to the density via the cluster assumption, and where a difference in estimation rates between the $\ell_2$ and $\ell_\infty$ methods is exhibited. This difference is explicitly due to the fact that the $\ell_2$ method leverages the knowledge of $\rho$ while the $\ell_\infty$ method does not.
5 The price of “forgetting”
We consider in this section a simple estimation example in one dimension where the asymptotic formulation of the 2-Laplacian regularization method achieves a better rate of convergence than that of the inf-minimization method. Such an advantage of the $\ell_2$ method over the $\ell_\infty$ method should be conceivable under the cluster assumption: the $\ell_2$ regularizer encodes information about the target function via $\rho$, while the $\ell_\infty$ regularizer does not.
Let the target function $f^*$ and the data-generating density $\rho$ be supported on the interval $[0, 1]$. For some small $a > 0$, we construct a density that takes a uniformly small value on a middle subinterval and takes a large value on the complementary set. We also let $f^*$ have a high Lipschitz constant on the middle subinterval and be constant otherwise. More precisely, the density $\rho$ and function $f^*$ are constructed as follows:
(10) 
The constants defining $\rho$ are related so that the density integrates to $1$, and we think of $a$ as being much smaller than the value taken on the complementary set. Consider the following two classes of functions, corresponding to the $\ell_p$ regularizer for $p = 2$ and $p = \infty$, respectively:
We define the associated norms on these two classes, respectively. Observe that, while the Lipschitz constant of $f^*$ grows as $a$ shrinks, its weighted Sobolev norm is upper bounded by a constant: the factor $\rho^2$ is small precisely where the derivative of $f^*$ is large, so the corresponding integral remains bounded independently of $a$.
We draw $n$ points independently from $\rho$ and observe the responses $y_i = f^*(x_i) + \xi_i$, where the $\xi_i$ are i.i.d. standard normal random variables. We compare two M-estimators, namely least-squares fits over the $\ell_2$-based and $\ell_\infty$-based classes above, respectively,
in terms of the rate of decay of the error $\|\hat{f} - f^*\|^2$. Note that this error can in principle tend to zero as $n$ grows to infinity, since the target $f^*$ belongs to the hypothesis class in both of the considered cases; i.e., there is no approximation error.
Theorem 5.1.
There are universal constants such that for any $n$, the $\ell_2$-based estimator satisfies the bound
(11) 
with high probability. On the other hand, the $\ell_\infty$-based estimator satisfies the bound
(12) 
again with high probability.
We can thus compare upper bounds on the rates of estimation of the $\ell_2$ and $\ell_\infty$ methods, respectively. The upper bound (12) deteriorates as $a$ shrinks, while the bound (11) shows no dependence on $a$. One might ask if these bounds are tight. In particular, a question of interest is whether the $\ell_\infty$ estimator adapts to the target without the need to know the density $\rho$, in which case the corresponding estimator would achieve a better rate. While we think our bounds could be sharpened, we strongly suspect that the $\ell_\infty$ estimator cannot achieve a rate independent of $a$ (in contrast to the $\ell_2$ estimator). We provide an array of simulations showing that the rate of the $\ell_\infty$ method deteriorates as $a$ gets small, while the rate of the $\ell_2$ method stays the same (see Figure 3).
[Figure 3: panels (a)–(d)]
5.1 Main ideas of the proof
The first nonasymptotic bound (12) in Theorem 5.1 follows in a straightforward way from known results on the minimax rate of estimation over the class of Lipschitz functions. Indeed, the rate of estimation over this class scales with the Lipschitz constant, which in our construction grows as $a$ shrinks. On the other hand, the second result (11) follows by recognizing that the $\ell_2$-based class is a weighted Sobolev space of order 1, which is a reproducing kernel Hilbert space (RKHS). The associated kernel, as identified by [NSZ09], is given by
(13) 
It is known that the rate of estimation over a ball of a given radius in an RKHS is tightly related to the decay of the eigenvalues of the kernel. More precisely, the rate of estimation is upper bounded, with high probability, by the smallest solution $\delta$ to the inequality
(14)
where $\{\lambda_j\}$ is the sequence of eigenvalues of the kernel ([Kol06, Men02, BBM02, vdG00]). Our next result upper-bounds the rate of decay of these eigenvalues.
Lemma 5.1.
For any $a$, the eigenvalues of the kernel (13) form a doubly indexed sequence. This sequence satisfies the following upper bound.
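To see how an inequality of this type converts eigenvalue decay into a rate, the following sketch is illustrative only: it uses one common form of the critical inequality from the localized-complexity literature, $\sqrt{(2/n)\sum_j \min(\lambda_j, \delta^2)} \le \delta^2/\sigma$ (the constants are not those of (14)), together with the decay $\lambda_j \propto j^{-2}$ typical of first-order Sobolev kernels; both choices are our assumptions.

```python
import numpy as np

def critical_radius(n, eigvals, sigma=1.0):
    """Smallest delta on a grid satisfying the (illustrative) critical inequality
    sqrt((2/n) * sum_j min(lambda_j, delta^2)) <= delta^2 / sigma."""
    for delta in np.linspace(1e-3, 1.0, 2000):
        lhs = np.sqrt(2.0 / n * np.minimum(eigvals, delta ** 2).sum())
        if lhs <= delta ** 2 / sigma:
            return delta
    return 1.0

eigvals = 1.0 / np.arange(1, 10_000) ** 2   # lambda_j ~ j^{-2} (Sobolev-1 decay)
for n in (100, 1000, 10000):
    print(n, round(critical_radius(n, eigvals), 3))
# With lambda_j ~ j^{-2}, the solution scales as delta ~ n^{-1/3}, giving the
# familiar n^{-2/3} rate for the squared error delta^2.
```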
6 Related work
The discrete graph $p$-Laplacian was introduced by [ZS05] as a form of regularization generalizing classical Laplacian regularization in semi-supervised learning [ZGL03]. We mention, however, that the continuous $p$-Laplacian has been studied extensively, and much earlier, in PDE theory [HKM12, ACJ04]. It was also used and analyzed for spectral clustering, where it provides a family of relaxations to the normalized cut problem [Amg03, BH09, LHDN10]. A dual version of the problem (the resistances, or voltages, problem) was investigated and shown to yield improved classification performance in [BZ13]. [AL11] prove the existence of a phase transition under the geometric graph model roughly similar to the one exhibited in this paper, although the thresholds are slightly different; the exact nature of the connection remains unclear. On the other hand, a game-theoretic interpretation of the $\infty$-Laplacian solution is studied in [PS08, PSSW09], and a similar transition in the behavior of the game is found. The assumption that the graph entails a geometric structure is popular in the analysis of semi-supervised learning algorithms [BN01, BCH03, BN04, Hei06a, Hei06b, NSZ09]. This line of work has mostly focused on the $\ell_2$-Laplacian formulation and its convergence properties to a differential operator on the limiting manifold. Among other approaches that circumvent the degeneracy issue discussed in this paper, we mention higher-order regularization [ZB11], where instead of penalizing only the first derivative of the function, one can penalize higher-order derivatives. This approach considers solutions in a higher-order Sobolev space which, via the Sobolev embedding theorem, only contains smooth functions if the order of differentiation is large enough relative to the dimension (see [AF03, Leo09]). This approach can be implemented algorithmically using the discrete iterated Laplacian [ZB11, WSST15].
Results on statistical rates for semi-supervised learning problems are relatively sparse. The first results appear in [CC96] in the context of mixture models, [Rig07] in the context of classification, and [LW07] for regression. A recent line of work considers the setting where the graph is fixed while the set of vertices where labels are available is random [AZ07, JZ07, JZ08, SB14, SCS15]. The methods studied in this setting generalize the 2-Laplacian method by penalizing with the quadratic form given by a general positive semidefinite kernel. The derived rates depend on structural properties of the graph and/or the kernel used. It is shown in particular that using the normalized Laplacian instead of the regular one leads to a better statistical bound, and, on the other hand, that one can obtain rates depending on the Lovász theta function of the graph by choosing the regularization kernel optimally.
7 Conclusion and open problems
In this paper, we used techniques and ideas from PDE theory to yield insight into the behavior of various methods for semi-supervised learning on graphs. The $d$-dimensional geometric random graph model analyzed in this paper is a common one in the literature, and the most common Laplacian penalization technique corresponds to $p = 2$, though the choice $p = \infty$ has also attracted some recent attention. Our paper sheds light on both of these options, as well as the full range of $p$ in between. From our asymptotic analysis, we see that for a $d$-dimensional problem, degenerate solutions occur whenever $p \le d$, whereas at the other extreme, the choice $p = \infty$ leads to solutions that are totally insensitive to the input data distribution. Hence, the choice $p = d + 1$ seems like a prudent one, trading off degeneracy of the solution against sensitivity to the unlabeled data. An important companion problem is the unconstrained version of the problem, in which we penalize a (weighted) sum of two terms: the $\ell_p$-Laplacian term and a sum of squared losses on the labeled data. One can see that under our asymptotics, we can draw the same conclusions about the unconstrained solution. Hence, the conclusions of this paper do not hinge on the fact that we modeled the problem with equality constraints, and they do apply more generally. For completeness, we outline this argument in Appendix D.
There are perhaps two important assumptions which must be relaxed in future work. The first concerns the asymptotics we consider, in which the number of labeled points is fixed as the number of unlabeled points becomes infinite. An interesting situation is when both labeled and unlabeled points grow at a relative rate. We showed that in the first situation, a certain class of methods, namely all $\ell_p$-Laplacian methods with $p \le d$, behaves poorly.
An interesting direction is to understand which methods are appropriate in different regimes of relative growth rate. In particular, we suspect that most of our results should continue to hold as long as the number of unlabeled points grows at a much faster rate than the number of labeled points. The second assumption concerns the geometric random graph model, and how much our results are tied to this model. Finite-sample rates and nonasymptotic arguments are necessary to understand how soon we can expect to see these effects on general graphs in practice. Finally, the model selection problem of which $p$ to use in practice is very important, since we may not know the underlying dimensionality of the data from which our graph was formed.
Acknowledgments:
We thank Jason Lee, Kevin Jamieson, Anup Rao, Sushant Sachdeva and Ryan Tibshirani for discussions in the early phases of this work. This work was partially supported by Air Force Office of Scientific Research grant FA95501410016 and Office of Naval Research grant DODONRN00014 to MJW. This material is based upon work supported in part by the Office of Naval Research under grant number N000141110688 and by the Army Research Laboratory and the U. S. Army Research Office under grant number W911NF1110391 to MIJ.
References
 [ACJ04] Gunnar Aronsson, Michael Crandall, and Petri Juutinen. A tour of the theory of absolutely minimizing functions. Bulletin of the American mathematical society, 41(4):439–505, 2004.
 [AF03] Robert A Adams and John JF Fournier. Sobolev spaces, volume 140. Academic press, 2003.
 [AL11] Morteza Alamgir and Ulrike von Luxburg. Phase transition in the family of p-resistances. In Advances in Neural Information Processing Systems, pages 379–387, 2011.
 [Amg03] S Amghibech. Eigenvalues of the discrete Laplacian for graphs. Ars Combinatoria, 67:283–302, 2003.
 [AS10] Scott N Armstrong and Charles K Smart. An easy proof of Jensen’s theorem on the uniqueness of infinity harmonic functions. Calculus of Variations and Partial Differential Equations, 37(3-4):381–384, 2010.
 [AZ07] Rie Kubota Ando and Tong Zhang. Learning on graph with Laplacian regularization. Advances in neural information processing systems, 19:25, 2007.
 [BBM02] Peter L Bartlett, Olivier Bousquet, and Shahar Mendelson. Localized Rademacher complexities. In Computational Learning Theory, pages 44–58. Springer, 2002.
 [BCH03] Olivier Bousquet, Olivier Chapelle, and Matthias Hein. Measure based regularization. In Advances in Neural Information Processing Systems, pages 1221–1228, 2003.
 [BH09] Thomas Bühler and Matthias Hein. Spectral clustering based on the graph Laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 81–88. ACM, 2009.
 [BN01] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, volume 14, pages 585–591, 2001.
 [BN04] Mikhail Belkin and Partha Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1-3):209–239, 2004.
 [BO92] RC Brown and B Opic. Embeddings of weighted Sobolev spaces into spaces of continuous functions. In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 439, pages 279–296. The Royal Society, 1992.

 [BZ13] Nick Bridle and Xiaojin Zhu. p-voltages: Laplacian regularization for semi-supervised learning on high-dimensional data. In Eleventh Workshop on Mining and Learning with Graphs (MLG2013), 2013.
 [CC96] Vittorio Castelli and Thomas M Cover. The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Transactions on Information Theory, 42(6):2102–2117, 1996.
 [CEG01] Michael G Crandall, Lawrence C Evans, and Ronald F Gariepy. Optimal Lipschitz extensions and the infinity Laplacian. Calculus of Variations and Partial Differential Equations, 13(2):123–139, 2001.
 [DLT83] RB Darst, DA Legg, and DW Townsend. The Pólya algorithm in approximation. Journal of Approximation Theory, 38(3):209–220, 1983.

 [Dud99] R. M. Dudley. Uniform central limit theorems. Cambridge University Press, 1999.
 [EH90] Alan Egger and Robert Huotari. Rate of convergence of the discrete Pólya algorithm. Journal of Approximation Theory, 60(1):24–30, 1990.
 [ET87] AG Egger and GD Taylor. Dependence on p of the best approximation operator. Journal of Approximation Theory, 49(3):274–282, 1987.
 [GF63] IM Gelfand and SV Fomin. Calculus of Variations. Revised English edition, translated and edited by Richard A. Silverman, 1963.

 [Hei06a] Matthias Hein. Geometrical aspects of statistical learning theory. PhD thesis, TU Darmstadt, 2006.
 [Hei06b] Matthias Hein. Uniform convergence of adaptive graph-based regularization. In Learning Theory, pages 50–64. Springer, 2006.
 [HKM12] Juha Heinonen, Tero Kilpeläinen, and Olli Martio. Nonlinear potential theory of degenerate elliptic equations. Courier Corporation, 2012.
 [JZ07] Rie Johnson and Tong Zhang. On the effectiveness of Laplacian normalization for graph semi-supervised learning. Journal of Machine Learning Research, 8(4), 2007.
 [JZ08] Rie Johnson and Tong Zhang. Graph-based semi-supervised learning and spectral kernel design. IEEE Transactions on Information Theory, 54(1):275–288, 2008.
 [Kol06] Vladimir Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
 [KRSS15] Rasmus Kyng, Anup Rao, Sushant Sachdeva, and Daniel A Spielman. Algorithms for Lipschitz learning on graphs. Proceedings of The 28th Conference on Learning Theory, pages 1190–1223, 2015.
 [Leo09] Giovanni Leoni. A first course in Sobolev spaces, volume 105. American Mathematical Society Providence, RI, 2009.

 [LHDN10] Dijun Luo, Heng Huang, Chris Ding, and Feiping Nie. On the eigenvectors of p-Laplacian. Machine Learning, 81(1):37–51, 2010.
 [LT89] David A Legg and Douglas W Townsend. The Pólya algorithm for convex approximation. Journal of Mathematical Analysis and Applications, 141(2):431–441, 1989.
 [LW07] John Lafferty and Larry Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems 20, pages 801–808. Curran Associates, Inc., 2007.
 [Men02] Shahar Mendelson. Geometric parameters of kernel machines. In Computational Learning Theory, pages 29–43. Springer, 2002.
 [NSZ09] Boaz Nadler, Nathan Srebro, and Xueyuan Zhou. Semi-supervised learning with the graph Laplacian: The limit of infinite unlabelled data. In Neural Information Processing Systems, pages 1330–1338, 2009.
 [Obe13] Adam M Oberman. Finite difference methods for the infinity Laplace and p-Laplace equations. Journal of Computational and Applied Mathematics, 254:65–80, 2013.
 [PS08] Yuval Peres and Scott Sheffield. Tug-of-war with noise: A game-theoretic view of the p-Laplacian. Duke Mathematical Journal, 145(1):91–120, 2008.
 [PSSW09] Yuval Peres, Oded Schramm, Scott Sheffield, and David Wilson. Tug-of-war and the infinity Laplacian. Journal of the American Mathematical Society, 22(1):167–210, 2009.
 [Rig07] Philippe Rigollet. Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research, 8:2183–2206, 2007.
 [SB14] Rakesh Shivanna and Chiranjib Bhattacharyya. Learning on graphs using orthonormal representation is statistically consistent. In Advances in Neural Information Processing Systems, pages 3635–3643, 2014.
 [SCS15] Rakesh Shivanna, Bibaswan K Chatterjee, Raman Sankaran, Chiranjib Bhattacharyya, and Francis Bach. Spectral norm regularization of orthonormal representations for graph transduction. In Advances in Neural Information Processing Systems, pages 2206–2214, 2015.
 [Ser09] Robert J Serfling. Approximation theorems of mathematical statistics, volume 162. John Wiley & Sons, 2009.
 [vdG00] Sara van de Geer. Empirical processes in M-estimation. Cambridge University Press, 2000.

 [WSST15] Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan J Tibshirani. Trend filtering on graphs. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 1042–1050, 2015.
 [ZB11] Xueyuan Zhou and Mikhail Belkin. Semi-supervised learning by higher order regularization. In International Conference on Artificial Intelligence and Statistics, pages 892–900, 2011.
 [ZGL03] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, volume 3, pages 912–919, 2003.
 [ZS05] Dengyong Zhou and Bernhard Schölkopf. Regularization on discrete spaces. In Pattern Recognition, pages 361–368. Springer, 2005.
Appendix A Proof of Theorem 3.1
Let and be i.i.d. draws from . Since is a bounded function, the first moment is finite. Therefore, by the strong law of large numbers for U-statistics ([Ser09]), we have
Writing for some scalar and vector , the second integral simplifies to
We now divide by and consider the behavior as the bandwidth parameter tends to zero. Since the functions , and are all bounded on a compact domain, the dominated convergence theorem implies that
Note that the above inner product involves tensors of order , and recall that we have assumed that is even. Since the function depends only on the norm of in the above integral, the latter should also be isotropic. The precise statement is as follows:
Lemma A.1.
For any function and vector , we have
(15) 
Applying Lemma A.1 with the function and then simplifying yields
which concludes the proof of the theorem.
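The strong law of large numbers for U-statistics invoked at the start of the proof can be illustrated numerically. The sketch below is not taken from the paper: the kernel h(x, y) = (x − y)² and the uniform samples are illustrative assumptions, chosen so that the population value E[h(X, Y)] = 2 Var(X) = 1/6 is known in closed form. The pairwise average over i ≠ j then approaches this value as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, size=n)

# Order-2 U-statistic: average of h(x_i, x_j) = (x_i - x_j)^2 over all pairs i != j.
# The diagonal terms are zero, so subtracting the trace is a no-op kept for clarity.
diffs = (x[:, None] - x[None, :]) ** 2
u_stat = (diffs.sum() - np.trace(diffs)) / (n * (n - 1))

# Population value: E[(X - Y)^2] = 2 * Var(X) = 2 * (1/12) = 1/6 for X, Y ~ U(0, 1).
print(abs(u_stat - 1.0 / 6.0))
```

The deviation shrinks at the usual O(n^{-1/2}) rate, consistent with the almost-sure convergence guaranteed by the U-statistic law of large numbers.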
The only remaining detail is to prove Lemma A.1.
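The base case of the isotropy identity (15) can also be sanity-checked by Monte Carlo: for a function g depending only on the norm of its argument, the matrix E[g(‖Z‖) Z Zᵀ] under an isotropic distribution is proportional to the identity, with proportionality constant (1/d) E[g(‖Z‖) ‖Z‖²] obtained by taking a trace. The particular choices g(r) = exp(−r²) and standard Gaussian Z in the sketch below are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200_000, 3
z = rng.standard_normal((n, d))         # isotropic samples Z ~ N(0, I_d)
g = np.exp(-np.sum(z**2, axis=1))       # g depends only on ||z||

# Empirical estimate of M = E[g(||Z||) Z Z^T] and the trace-based constant c = tr(M)/d.
M = (z * g[:, None]).T @ z / n
c = np.trace(M) / d

# Off-diagonal entries vanish and diagonal entries all equal c, up to Monte Carlo error.
print(np.max(np.abs(M - c * np.eye(d))))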
Proof of Lemma A.1
We proceed by induction on the integer . In the base case (), we have
Since the function depends only on , the matrix between the parentheses above is proportional to the identity; the proportionality constant can be determined by taking a trace. We end up with . This establishes the base case. (This case is also proven in Proposition 4.1 of [BCH03].)
Now assume that for a given even , and for all nonnegative maps , identity (15) holds. We prove that the same is true for . Define , and for any vector , let be the partial contraction of by , namely
The tensor is of order , and the map is nonnegative, so by the induction hypothesis, for every , we have
By recourse to the base case, the quadratic form between the parentheses is equal to