When is there a Representer Theorem? Nondifferentiable Regularisers and Banach spaces

by Kevin Schlegel, et al.
University of Oxford

We consider a general regularised interpolation problem for learning a parameter vector from data. The well-known representer theorem says that, under certain conditions on the regulariser, there exists a solution in the linear span of the data points. This is at the core of kernel methods in machine learning as it makes the problem computationally tractable. Necessary and sufficient conditions for differentiable regularisers on Hilbert spaces to admit a representer theorem have been proved. We extend those results to nondifferentiable regularisers on uniformly convex and uniformly smooth Banach spaces. This gives a (more) complete answer to the question of when there is a representer theorem. We then note that for regularised interpolation the solution is in fact determined by the function space alone, independently of the regulariser, making the extension to Banach spaces even more valuable.






1 Introduction

Regularisation is often described as a process of adding additional information or using previous knowledge about the solution to solve an ill-posed problem or to prevent an algorithm from overfitting to the given data. This makes it a very important method for learning a function from empirical data out of very large classes of functions. Intuitively, its purpose is to pick, from all the functions that may explain the data, the function which is the simplest in some suitable sense. Hence regularisation appears in various disciplines wherever empirical data is produced and has to be explained by a function. This has motivated the study of regularisation problems in mathematics, statistics and computer science, and in particular in machine learning theory (Cucker and Smale [4], Shawe-Taylor and Cristianini [16], Micchelli and Pontil [13]).
In particular, regularisation in Hilbert spaces has been studied in the literature for various reasons. First of all, the existence of an inner product allows for the design of algorithms with very clear geometric intuitions, often based on orthogonal projections or the fact that the inner product can be seen as a kind of similarity measure.
But crucial to the success of regularisation methods in Hilbert spaces is, in fact, the well-known representer theorem, which states that for certain regularisers there is always a solution in the linear span of the data points (Kimeldorf and Wahba [8], Cox and O'Sullivan [3], Schölkopf and Smola [17, 14]). This means that the problem reduces to finding a function in a finite-dimensional subspace of the original function space, which is often infinite-dimensional. It is this dimension reduction that makes the problem computationally tractable.
Another reason for Hilbert space regularisation finding a variety of applications is the kernel trick, which allows any algorithm formulated in terms of inner products to be modified to yield a new algorithm based on a different symmetric, positive semidefinite kernel, leading to learning in reproducing kernel Hilbert spaces (Schölkopf and Smola [15], Shawe-Taylor and Cristianini [16]). This way nonlinearities can be introduced in the otherwise linear setup. Furthermore, kernels can be defined on input sets which a priori lack mathematical structure, by embedding them into a Hilbert space.
When we are speaking of regularisation we are referring to Tikhonov regularisation, i.e. an optimisation problem of the form

$$\min_{w \in \mathcal{H}} \ \mathcal{E}\big((\langle w, x_1 \rangle, \ldots, \langle w, x_m \rangle), (y_1, \ldots, y_m)\big) + \lambda f(w) \tag{1}$$

where $\mathcal{H}$ is a Hilbert space, $\{(x_i, y_i) : i \in \mathbb{N}_m\}$ is a set of given input/output data with $x_i \in \mathcal{H}$ and $y_i \in \mathcal{Y} \subseteq \mathbb{R}$, $\mathcal{E}$ is an error function, $f$ a regulariser and $\lambda > 0$ a regularisation parameter. Argyriou, Micchelli and Pontil [1] show that under very mild conditions this regularisation problem admits a linear representer theorem if and only if the regularised interpolation problem

$$\min\{ f(w) : w \in \mathcal{H}, \ \langle w, x_i \rangle = y_i \ \text{for all } i \in \mathbb{N}_m \}$$

admits a linear representer theorem. They argue that we can thus focus on the regularised interpolation problem, which is more convenient to study. It is easy to see that their argument carries over to the more general setting of the problem we are going to introduce in this paper, so we take the same viewpoint and consider regularised interpolation throughout.
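To make the dimension reduction behind the representer theorem concrete, here is a minimal numerical sketch (our own illustration, not from the paper) for the Hilbert space $\mathcal{H} = \mathbb{R}^d$ with the admissible regulariser $f(w) = \|w\|^2$: the minimal-norm interpolant lies in the span of the data, so it is found by solving an $m \times m$ Gram system instead of searching all of $\mathcal{H}$.

```python
import numpy as np

# data points x_i (rows of X) in H = R^d and targets y_i
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])
y = np.array([1.0, 2.0])

# representer theorem: w = sum_i c_i x_i, so <w, x_j> = y_j becomes G c = y
G = X @ X.T                   # Gram matrix of inner products <x_i, x_j>
c = np.linalg.solve(G, y)     # m-dimensional problem, m = 2 here
w = X.T @ c                   # minimal-norm interpolant in the span of the data

assert np.allclose(X @ w, y)  # interpolation constraints hold
print(w, np.linalg.norm(w))
```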
We will be interested in regularisation not only in Hilbert spaces as stated above but extend the theory to uniformly convex, uniformly smooth Banach spaces, allowing for learning in a much larger variety of spaces. While any two Hilbert spaces of the same dimension are linearly isometrically isomorphic, this is far from true for Banach spaces, so they exhibit a much richer geometric variety which may be exploited in learning algorithms. Furthermore we may encounter applications where the data has some intrinsic structure which prevents it from being embedded into a Hilbert space. Having a large variety of Banach spaces for potential embeddings may help to overcome this problem. Analogously to learning in reproducing kernel Hilbert spaces, the generalisation to Banach spaces allows for learning in reproducing kernel Banach spaces, which were introduced by Zhang, Xu and Zhang [18]. Our results regarding the existence of representer theorems are in line with Zhang and Zhang's work on representer theorems for reproducing kernel Banach spaces [19].
But as we will show at the end of this paper the variety of spaces to pose the problem in is of even greater importance. It is often said that the regulariser favours solutions with a certain desirable property. We will show that, in fact, for regularised interpolation, when we rely on the linear representer theorem, it is essentially the choice of the space, and only the choice of the space, not the choice of the regulariser, which determines the solution.
It is well known that non-decreasing functions of the Hilbert space norm admit a linear representer theorem. Argyriou, Micchelli and Pontil [1] showed that this condition is not just sufficient but, for differentiable regularisers, also necessary. In this paper we remove the differentiability condition and show that any regulariser on a uniformly convex and uniformly smooth Banach space that admits a linear representer theorem is in fact very close to being radially symmetric, thus giving a (more) complete answer to the question of when there is a representer theorem. Before presenting those results we develop the necessary theory of semi-inner products to generalise the Hilbert space setting considered by Argyriou, Micchelli and Pontil to Banach spaces.
In section 2 we will introduce the notion of semi-inner products as defined by Lumer [11] and later extended by Giles [6]. We will state the results without proofs as they mostly are not difficult and can be found in the original papers. Another extensive reference about semi-inner products and their properties is the work by Dragomir [5].
After introducing the relevant theory we will present the generalised regularised interpolation problem in section 3, replacing the inner product in eq. 1 by a semi-inner product. We then state one of the main results of the paper that regularisers that admit a representer theorem are almost radially symmetric in a way that will be made precise in the statement. Before giving the proof of the theorem we state and prove two essential lemmas capturing most of the important structure of the problem to prove the theorem. We finish the section by giving the proof of the main result.
Finally in section 4 we prove that in fact for admissible regularisers there is a unique solution of the regularised interpolation problem in the linear span of the data and it is independent of the regulariser. This in particular means that we may choose the regulariser which is most suitable for our task at hand without changing the solution.

1.1 Notation

Before the main sections we briefly introduce some notation used throughout the paper. We use $\mathbb{N}_m$ as a shorthand notation for the set $\{1, \ldots, m\}$. We will assume we have data points $\{(x_i, y_i) : i \in \mathbb{N}_m\} \subset B \times \mathcal{Y}$, where $B$ will always denote a uniformly convex, uniformly smooth real Banach space and $\mathcal{Y} \subseteq \mathbb{R}$. Typical examples of $\mathcal{Y}$ are finite sets of integers for classification problems, e.g. $\{-1, 1\}$ for binary classification, or the whole of $\mathbb{R}$ for regression.
We briefly recall the definitions of a Banach space being uniformly convex and uniformly smooth, further details can be found in [2, 10, 9].

Definition (Uniformly convex Banach space)

A normed vector space $(V, \|\cdot\|)$ is said to be uniformly convex if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that if $x, y \in V$ with $\|x\| = \|y\| = 1$ and $\|x - y\| \geq \varepsilon$ then $\left\| \frac{x + y}{2} \right\| \leq 1 - \delta$.

Definition (Uniformly smooth Banach space)

A normed vector space $(V, \|\cdot\|)$ is said to be uniformly smooth if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that if $x, y \in V$ with $\|x\| = 1$ and $\|y\| \leq \delta$ then $\|x + y\| + \|x - y\| \leq 2 + \varepsilon \|y\|$.

Remark

There are two equivalent conditions for uniform smoothness which we will make use of in this paper.

  (i) The modulus of smoothness of the space is defined as

$$\rho(\tau) = \sup\left\{ \frac{\|x + \tau y\| + \|x - \tau y\|}{2} - 1 : x, y \in V, \ \|x\| = \|y\| = 1 \right\} \tag{2}$$

    Now $V$ is uniformly smooth if and only if $\lim_{\tau \to 0} \rho(\tau)/\tau = 0$.

  (ii) The norm on $V$ is said to be uniformly Fréchet differentiable if the limit

$$\lim_{t \to 0} \frac{\|x + t y\| - \|x\|}{t}$$

    exists uniformly for all real $t$ and all $x, y \in V$ with $\|x\| = \|y\| = 1$. The space $V$ is uniformly smooth if and only if its norm is uniformly Fréchet differentiable.
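As a quick numerical sanity check of condition (i), the modulus of smoothness of a finite-dimensional $\ell^p$ space can be estimated by random sampling. This is a rough Monte Carlo sketch (our own illustration, with $p = 1.5$ hard-coded); the ratio $\rho(\tau)/\tau$ should shrink as $\tau \to 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1.5

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def rho_estimate(tau, trials=20000, d=3):
    # crude lower bound on the supremum in the definition of rho(tau)
    best = 0.0
    for _ in range(trials):
        x, y = rng.standard_normal(d), rng.standard_normal(d)
        x, y = x / pnorm(x), y / pnorm(y)
        best = max(best, (pnorm(x + tau * y) + pnorm(x - tau * y)) / 2 - 1)
    return best

for tau in [0.1, 0.01, 0.001]:
    print(tau, rho_estimate(tau) / tau)   # decreasing: uniform smoothness
```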

We always write $\mathcal{H}$ to denote a Hilbert space, and for the first part of section 2 we will be speaking of general normed linear spaces, denoted by $V$. Once we have seen the reasons to require the space to be a uniformly convex and uniformly smooth Banach space, the remainder of section 2 and the paper will consider such spaces, denoted by $B$. When only the norm of a single space is considered the subscript will often be omitted for simplicity. Throughout we will denote the inner product on a Hilbert space by $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and a semi-inner product on a normed linear space by $[\cdot, \cdot]_V$.

2 Semi-inner product spaces

There are various definitions of semi-inner products aiming to generalise Hilbert space methods to more general spaces. The notion of semi-inner products we are going to use was first introduced by Lumer [11] and further developed by Giles [6]. In comparison to inner products, the assumption of (conjugate) symmetry, or equivalently additivity in the second argument, is dropped. This means that we need to assume the Cauchy-Schwarz inequality to make sure that it holds, as it is crucial for semi-inner products to exhibit inner-product-like behaviour. In the original definition Lumer did not assume homogeneity in the second argument, but Giles argued that one can assume it without any significant restriction. We will hence be including homogeneity in our assumptions.
An extensive overview of the theory of this and other notions of semi-inner products can be found in Dragomir [5].
In this section only we state all results for real or complex vector spaces, as all of them are valid for the complex case. Throughout this section we will thus denote the field by $\mathbb{K}$, where $\mathbb{K}$ is $\mathbb{R}$ or $\mathbb{C}$. In the subsequent sections, where we present the main contributions of this paper, we will return to real vector spaces, as it is at this point not clear whether the results remain valid for complex vector spaces.

Definition (Semi-inner product)

A semi-inner product (s.i.p.) on a real or complex vector space $V$ is a map $[\cdot, \cdot] : V \times V \to \mathbb{K}$ with the following properties:

  (i) Linearity in the first argument:
    $[\lambda x + y, z] = \lambda [x, z] + [y, z]$ for all $x, y, z \in V$ and $\lambda \in \mathbb{K}$

  (ii) Positive definiteness:
    $[x, x] > 0$ for all $x \in V \setminus \{0\}$

  (iii) Cauchy-Schwarz inequality:
    $|[x, y]|^2 \leq [x, x] \, [y, y]$ for all $x, y \in V$

  (iv) (Conjugate) homogeneity in the second argument:
    $[x, \lambda y] = \bar{\lambda} \, [x, y]$ for all $x, y \in V$ and $\lambda \in \mathbb{K}$

With these properties a semi-inner product induces a norm $\|x\| = \sqrt{[x, x]}$ on $V$. Conversely, every norm on a linear space is induced by at least one semi-inner product, i.e. there exists at least one semi-inner product $[\cdot, \cdot]$ such that $[x, x] = \|x\|^2$ for all $x \in V$. This means that every normed linear space is a s.i.p. space. Consequently we say that an s.i.p. space is uniformly convex if the norm induced by $[\cdot, \cdot]$ is uniformly convex, and that it is uniformly smooth if the induced norm is uniformly smooth.
The semi-inner product inducing the norm is not unique in general, though. It turns out that we have uniqueness if the norm is differentiable, which is closely linked to a weak continuity property in the second argument of the inducing semi-inner product.
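A concrete example: on $\ell^p$ for $1 < p < \infty$, which is uniformly convex and uniformly smooth, the unique norm-inducing semi-inner product has a known closed form, $[x, y] = \sum_j x_j |y_j|^{p-1} \operatorname{sgn}(y_j) / \|y\|_p^{p-2}$ (see Giles [6]). A small numerical sketch of its defining properties, with $p = 1.5$ in finite dimension:

```python
import numpy as np

p = 1.5

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def sip(x, y):
    # Giles' semi-inner product on l^p (real case)
    return float(np.sum(x * np.abs(y) ** (p - 1) * np.sign(y)) / pnorm(y) ** (p - 2))

x = np.array([1.0, 2.0, 0.0])
y = np.array([0.0, 1.0, -3.0])
z = np.array([2.0, 0.0, 1.0])

print(np.isclose(sip(x + z, y), sip(x, y) + sip(z, y)))  # linear in first argument
print(np.isclose(sip(x, x), pnorm(x) ** 2))              # induces the p-norm
print(abs(sip(x, y)) <= pnorm(x) * pnorm(y) + 1e-12)     # Cauchy-Schwarz
print(np.isclose(sip(x, y), sip(y, x)))                  # False: no symmetry
```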

Proposition

If the norm on $V$ is uniformly Fréchet differentiable as defined in section 1.1, then

$$\operatorname{Re} [x, y + \lambda x] \to \operatorname{Re} [x, y] \tag{3}$$

uniformly for every $x, y \in V$ with $\|x\| = \|y\| = 1$ as $\lambda \to 0$. Furthermore the differential of the norm for $y \neq 0$ is given by

$$\lim_{t \to 0} \frac{\|y + t x\| - \|y\|}{t} = \frac{\operatorname{Re} [x, y]}{\|y\|}.$$

This in particular means that the semi-inner product inducing a uniformly Fréchet differentiable norm is unique.

The existence of a semi-inner product allows us to define a notion of orthogonality analogous to orthogonality in Hilbert spaces, by requiring the semi-inner product to be zero. The lack of symmetry of the semi-inner product means that our notion of orthogonality is not symmetric in general: $x$ being normal to $y$ does not imply that $y$ is normal to $x$.

Definition (Orthogonality)

Let $V$ be a s.i.p. space. For $x, y \in V$ we say $x$ is normal to $y$ if $[y, x] = 0$.
A vector $x$ is normal to a subspace $U \subseteq V$ if $x$ is normal to all $y \in U$.

Various generalisations of orthogonality have been developed which are equivalent to the inner product being zero in a Hilbert space but generalise to normed linear spaces. One of these notions of orthogonality is James orthogonality [7]: $x$ is said to be James orthogonal to $y$ if

$$\|x + \lambda y\| \geq \|x\| \quad \text{for all scalars } \lambda.$$

The equivalence of James orthogonality with the inner product being zero in a Hilbert space generalises to smooth Banach spaces, in which James orthogonality is equivalent to the unique semi-inner product being zero. James states that his definition is closely related to linear functionals and hyperplanes, which is essential for our applications as we will see in the main part of the paper.

Proposition (James orthogonality)

In a uniformly smooth s.i.p. space semi-inner product orthogonality is equivalent to James orthogonality, namely for $x, y \in V$

$$[y, x] = 0 \iff \|x + \lambda y\| \geq \|x\| \ \text{for all scalars } \lambda.$$

This relation to James orthogonality also helps to get a geometric understanding of what orthogonality means in a s.i.p. space. From this equivalence it is immediately clear that $x$ being normal to $y$ means that the vector $y$ is tangent to the ball $B_{\|x\|}$ at the point $x$, where $B_r$ denotes the ball of radius $r$ centred at the origin.
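The equivalence is easy to probe numerically in $\ell^p$: project $y$ so that $[y, x] = 0$ and check that every perturbation $x + \lambda y$ is at least as long as $x$. A sketch (our own illustration, reusing the hard-coded $\ell^p$ semi-inner product from above):

```python
import numpy as np

p = 1.5

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def sip(x, y):
    # Giles' semi-inner product on l^p (real case)
    return float(np.sum(x * np.abs(y) ** (p - 1) * np.sign(y)) / pnorm(y) ** (p - 2))

x = np.array([1.0, -2.0, 0.5])
y = np.array([1.0, 0.0, 1.0])
y = y - sip(y, x) / pnorm(x) ** 2 * x    # enforce [y, x] = 0 by linearity

print(np.isclose(sip(y, x), 0.0))         # x is normal to y
lams = np.linspace(-5, 5, 1001)
print(min(pnorm(x + lam * y) for lam in lams) >= pnorm(x) - 1e-12)  # James
```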
Having defined what it means to be orthogonal to a linear subspace we can also define the orthogonal complement of a subspace. It will become clear later that this definition coincides with the usual definition of orthogonal complements in Banach spaces via the dual space.

Definition (Orthogonal Complement)

Let $V$ be a s.i.p. space and $U \subseteq V$ a closed linear subspace. Then the orthogonal complement of $U$ is defined to be

$$U^\perp = \{ x \in V : [y, x] = 0 \ \text{for all } y \in U \}.$$
If the space is a uniformly convex Banach space it is not difficult to see that there is a unique orthogonal decomposition for every $x \in V$. This is because it is known that in a uniformly convex space there is a unique closest point in a closed linear subspace, and one easily checks that this immediately leads to a unique orthogonal decomposition.

Proposition (Orthogonal Decomposition)

Let $V$ be a uniformly convex s.i.p. space. Then for any closed linear subspace $U \subseteq V$ there exists a unique orthogonal decomposition; more precisely, for any $x \in V$ there exist a unique $u \in U$ and a unique $v \in U^\perp$ such that $x = u + v$.
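Numerically the decomposition can be computed as a metric projection: the closest point $u \in U$ under the $\ell^p$ norm leaves a residual $v = x - u$ that is normal to $U$. A sketch (our own illustration; scipy is used for the one small minimisation):

```python
import numpy as np
from scipy.optimize import minimize

p = 1.5

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def sip(x, y):
    # Giles' semi-inner product on l^p (real case)
    return float(np.sum(x * np.abs(y) ** (p - 1) * np.sign(y)) / pnorm(y) ** (p - 2))

x = np.array([1.0, 2.0, -1.0])
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                            # rows span U

res = minimize(lambda c: pnorm(x - U.T @ c), np.zeros(2))  # closest point in U
u = U.T @ res.x
v = x - u                                                  # normal component

print(u, v)
print([sip(row, v) for row in U])   # approximately 0: v is normal to U
```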

Under these assumptions we are also able to establish a Riesz representation theorem using the semi-inner product.

Theorem 2.1 (Riesz representation theorem)

Let $V$ be a uniformly convex, uniformly smooth s.i.p. space. Then for every $\xi \in V^*$, the continuous dual space of $V$, there exists a unique vector $v \in V$ such that

$$\xi(x) = [x, v] \quad \text{for all } x \in V.$$
This theorem is crucial for the development of the theory in this paper as it means that the duality map $\iota : V \to V^*$ given by

$$\iota(v) = [\,\cdot\,, v]$$

is an isometric isomorphism from $V$ onto $V^*$. We write $v^* = \iota(v)$ for the dual element of $v$. It is essential to note that this map is linear if and only if $V$ is a Hilbert space.
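In $\ell^p$ the duality map has the explicit form $\iota(x) = |x|^{p-1} \operatorname{sgn}(x) / \|x\|_p^{p-2}$, read coordinate-wise as an element of $\ell^q$ with $q = p/(p-1)$. A sketch (our own illustration) of it being isometric and homogeneous but not additive, hence nonlinear for $p \neq 2$:

```python
import numpy as np

p = 1.5
q = p / (p - 1)                  # dual exponent, q = 3 here

def norm(v, r):
    return np.sum(np.abs(v) ** r) ** (1 / r)

def dual(x):
    # duality map iota: l^p -> l^q, coordinate-wise
    return np.abs(x) ** (p - 1) * np.sign(x) / norm(x, p) ** (p - 2)

x = np.array([1.0, 2.0, 0.5])
y = np.array([0.0, -1.0, 1.0])

print(np.isclose(norm(dual(x), q), norm(x, p)))     # isometric
print(np.allclose(dual(2 * x), 2 * dual(x)))        # homogeneous
print(np.allclose(dual(x + y), dual(x) + dual(y)))  # False: not additive
```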
Summarising the above results, we see that the necessary structure to have a unique semi-inner product inducing the norm and allowing for a Riesz representation theorem is that the space is a uniformly convex and uniformly Fréchet differentiable Banach space. For simplicity we will be calling such spaces uniform.

Definition (Uniform Banach space)

We say a space is uniform if it is a uniformly convex and uniformly Fréchet differentiable Banach space.

For the remainder of the paper we will only be working with uniform Banach spaces and throughout denote them by $B$.
Note that any Banach space that is uniformly convex or uniformly Fréchet differentiable is reflexive. Further, a Banach space is uniformly Fréchet differentiable if and only if its dual space is uniformly convex. Thus for a uniform Banach space $B$ its dual space $B^*$ is also uniform, and its norm-inducing semi-inner product is given by

$$[u^*, v^*]_{B^*} = [v, u]_B \quad \text{for } u, v \in B.$$
We already know that the duality map is a homogeneous isometric isomorphism. Lastly we note that in fact it is also norm-to-norm continuous. The proof of this is standard and can be found in the appendix.

Proposition

The duality map is norm-to-norm continuous.
In particular this shows that eq. 3 can be strengthened to

$$\lim_{n \to \infty} [x, y_n] = [x, y]$$

for all $x \in B$ and all sequences $y_n \to y$ in norm.

Thus the dual map is a homeomorphism from $B$ onto $B^*$ with respect to the norm topologies.

3 Existence of Representer Theorems

The definitions and results of the previous section allow us to consider the regularised interpolation problem

$$\min\{ f(w) : w \in B, \ [w, x_i] = y_i \ \text{for all } i \in \mathbb{N}_m \} \tag{4}$$

where the domain of the interpolation problem is a real uniform Banach space $B$. This generalises the setting considered by Argyriou, Micchelli and Pontil in [1], where the case of a Hilbert space domain is considered. In that setting the linear representer theorem states that there exists a solution to the interpolation problem which is in the linear span of the data points. Our work, similarly to [12], hints that in its essence the representer theorem is a result about the dual space rather than the space itself. Since in a Hilbert space the dual element is the element itself, this does not become apparent in that setting and we obtain a result in the space itself. As the duality map is nonlinear for any Banach space which is not a Hilbert space, we need to adjust the formulation of the representer theorem. Namely, the linear representer theorem in a uniform Banach space states that there exists a solution such that its dual element is in the linear span of the dual elements of the data points. This is made precise in the following definition, which is the analogue of Argyriou, Micchelli and Pontil calling regularisers which always admit a linear representer theorem admissible.

Definition (Admissible Regulariser)

We say a function $f : B \to \mathbb{R}$ is admissible if for any $m \in \mathbb{N}$ and any given data $\{(x_i, y_i) : i \in \mathbb{N}_m\}$ such that the interpolation constraints can be satisfied, the regularised interpolation problem eq. 4 admits a solution $\hat{w}$ such that its dual element is of the form

$$\hat{w}^* = \sum_{i=1}^m c_i x_i^* \quad \text{for some } c_i \in \mathbb{R}.$$
With this definition at hand it is now our goal to classify all admissible regularisers. It is well known that being a non-decreasing function of the norm on a Hilbert space is a sufficient condition for the regulariser to be admissible. By a Hahn-Banach argument similar to e.g. the one of Zhang and Zhang [19], this generalises to our case of uniform Banach spaces. Below we show that this condition is already almost necessary, in the sense that admissible regularisers cannot be very far from being radially symmetric.

Theorem 3.1

A function $f : B \to \mathbb{R}$ is admissible if and only if it is of the form

$$f(w) = h(\|w\|)$$

for some non-decreasing $h : [0, \infty) \to \mathbb{R}$ whenever $\|w\| \notin A$. Here $A$ is an at most countable set of radii where $h$ has a jump discontinuity. For any $w$ with $\|w\| = r \in A$ the value $f(w)$ is only constrained by the monotonicity property, i.e. it has to lie in between $\lim_{s \nearrow r} h(s)$ and $\lim_{s \searrow r} h(s)$.
In other words, $f$ is radially non-decreasing and radially symmetric except for at most countably many circular jump discontinuities. At those discontinuities the function value is only limited by the monotonicity property.
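For instance, the following regulariser (our own illustrative example, not from the paper) is of the admissible form, with $A = \{1\}$, i.e. a single circular jump discontinuity at radius $1$:

$$f(w) = h(\|w\|), \qquad h(r) = \begin{cases} r^2 & \text{if } r < 1, \\ c & \text{if } r = 1, \\ r^2 + 1 & \text{if } r > 1, \end{cases}$$

where any choice of value $c \in [1, 2]$ on the circle $\|w\| = 1$ keeps $f$ admissible, since the monotonicity constraint $\lim_{s \nearrow 1} h(s) = 1 \leq c \leq 2 = \lim_{s \searrow 1} h(s)$ is respected.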

In [1] Argyriou, Micchelli and Pontil show that any admissible regulariser on a Hilbert space is non-decreasing in orthogonal directions. An analogous result is true for uniform Banach spaces, but since orthogonality is no longer symmetric, and guided by the intuition gained from the equivalence with James orthogonality, we see that it is in fact tangential directions in which the regulariser is non-decreasing. This also becomes clear from the proofs in [1], in particular when proving radial symmetry.
Before we can prove the analogous result for uniform Banach spaces we need to show that we can extend this tangential bound considerably: a function that is non-decreasing in tangential directions is in fact non-decreasing in norm, as is made precise in the following lemma.

Lemma

If $f(x + y) \geq f(x)$ for all $x, y \in B$ such that $[y, x] = 0$, then for any fixed $x \in B$ we have that $f(z) \geq f(x)$ for all $z \in B$ such that $\|z\| > \|x\|$.


Proof:

Part 1: (Bound on the half space given by the tangent through $x$)

We start by showing that $f$ is radially non-decreasing. Since it is non-decreasing along tangential directions this immediately gives the claimed bound for the entire half space given by the tangent through $x$. The idea of the proof is to move out along a tangent until we can move back along another tangent to hit a given point along the ray $\{\lambda x : \lambda > 1\}$, as shown in fig. 1.

Figure 1: We can extend the tangential bound to the ray through $x$ by finding the point along the tangent at $x$ from where another tangent hits the desired point on the ray. Via the tangents to points along the ray the bound then extends to the shaded half space.

Fix some $x \in B$ and $\lambda > 1$ and set $z = \lambda x$. We need to show that $f(z) \geq f(x)$. Let $y \in B$ be such that $[y, x] = 0$, or equivalently $\|x + t y\| \geq \|x\|$ for all $t \in \mathbb{R}$. Now let

$$\nu(t) = \|x + t y\|$$

so that $\nu(0) = \|x\|$. Note that by strict convexity and continuity of the norm, $\nu$ is continuous and strictly increasing in $t \geq 0$.
Now since $x + t y$ lies on the tangent through $x$ and $z - (x + t y)$ points from $x + t y$ to $z$, for small $t > 0$, for which $\nu(t) < \|z\|$, we must have that

$$[z - (x + t y), x + t y] > 0.$$

On the other hand for $t$ big enough so that $\nu(t) > \|z\|$ we thus must have

$$[z - (x + t y), x + t y] < 0.$$

But we know that

$$[z - (x + t y), x + t y] = (x + t y)^*(z) - \|x + t y\|^2$$

and since the dual map is norm-to-norm continuous this expression is clearly continuous in $t$. By the above discussion the expression is positive for small $t$ and negative for large $t$, so by the intermediate value theorem there exists $t_0 > 0$ such that

$$[z - (x + t_0 y), x + t_0 y] = 0,$$

so that indeed $x + t_0 y$ is normal to $z - (x + t_0 y)$ and thus $z - (x + t_0 y)$ is tangential at $x + t_0 y$. But this means that $f(z) \geq f(x + t_0 y) \geq f(x)$ as claimed.
Hence we have the bound along the entire ray $\{\lambda x : \lambda > 1\}$, which extends along all tangents through those points to the half space given by the tangent through $x$, i.e. the shaded region in fig. 1.

Part 2: (Extend the bound around the circle)

Next we note that we can actually extend the bound further to apply all the way around the circle, namely to all $z$ such that $\|z\| > \|x\|$. This is done by considering a tangent at $x$ as before, but then instead of following it into the half space just considered we follow the tangent in the opposite direction around the circle, as shown in fig. 1(a). We fix another point along that tangent and repeat the process, moving around the circle. We claim that by making the step size along each tangent small enough we can this way move around the circle while staying arbitrarily close to it.
More precisely, we need to show that the distance a step along a tangent takes us away from the circle decreases faster than the step along the tangent, so that we move considerably further around the circle than away from it with each step, as shown in fig. 1(b).

(a) By repeatedly taking steps along tangents we can move all the way around the circle.
(b) When decreasing the step size along a tangent the step size away from the circle decreases significantly faster so that by making the steps along tangents small enough we can reach any point arbitrarily close to the circle.

As stated in eq. 2, let

$$\rho(\tau) = \sup\left\{ \frac{\|u + \tau v\| + \|u - \tau v\|}{2} - 1 : u, v \in B, \ \|u\| = \|v\| = 1 \right\}$$

be the modulus of smoothness of the space $B$. For $x, y \in B$ such that $[y, x] = 0$ and $\|x\| = \|y\| = 1$, we have that $\|x + \tau y\| \geq \|x\| = 1$ for all $\tau \in \mathbb{R}$, so in particular $\|x - \tau y\| \geq 1$. We thus easily see that

$$\|x + \tau y\| \leq 2 + 2\rho(\tau) - \|x - \tau y\| \leq 1 + 2\rho(\tau).$$

This means that for a step of order $\tau$ along a tangent, i.e. a step $\tau y$ of length $\tau$, we take a step of order $\rho(\tau)$ away from the circle. But since $B$ is uniformly smooth we have that $\rho(\tau)/\tau \to 0$ as $\tau \to 0$, proving that for small enough $\tau$ indeed the step away from the circle is significantly smaller than the step along the tangent, as shown in fig. 1(b).
Combining both arguments this proves that we can reach any point with norm greater than $\|x\|$ from $x$ only by moving along tangents, giving the claimed bound. ❑

Having proved this lemma we are now in a position to prove that indeed any admissible regulariser on a uniform Banach space is non-decreasing in tangential directions. Note that the previous lemma will also play a crucial role in removing the differentiability assumption when establishing the closed form representation of the regulariser in theorem 3.1.

Lemma

A function $f : B \to \mathbb{R}$ is admissible if and only if for every $x, y \in B$ such that $[y, x] = 0$ we have

$$f(x + y) \geq f(x),$$

if and only if for any fixed $x \in B$ and all $z \in B$ such that $\|z\| > \|x\|$ we have

$$f(z) \geq f(x).$$

Proof:

Part 1: ($f$ admissible $\Rightarrow$ non-decreasing along tangential directions)

Fix any $x \in B \setminus \{0\}$ and consider the regularised interpolation problem

$$\min\{ f(w) : w \in B, \ [w, x] = \|x\|^2 \}.$$

As $f$ is assumed to be admissible there exists a solution $\hat{w}$ with dual element in $\operatorname{span}\{x^*\}$, which by homogeneity of the dual map means $\hat{w} = c x$ for some $c \in \mathbb{R}$; the constraint forces $c = 1$, so the solution clearly is $x$ itself. But if $y$ is such that $[y, x] = 0$ then $[x + y, x] = \|x\|^2$, so $x + y$ also satisfies the constraints and hence necessarily $f(x + y) \geq f(x)$ as claimed. The second claim follows immediately from the previous lemma.

Part 2: (Non-decreasing along tangential directions $\Rightarrow$ admissible)

Conversely, fix any data $\{(x_i, y_i) : i \in \mathbb{N}_m\}$ such that the interpolation constraints can be satisfied. Let $\hat{w}$ be a solution to the regularised interpolation problem. If $\hat{w}^* \in \operatorname{span}\{x_i^* : i \in \mathbb{N}_m\}$ we are done, so assume it is not. We let

$$L = \operatorname{span}\{x_i^* : i \in \mathbb{N}_m\} \subseteq B^*.$$

Further denote by $L_\perp$ the space corresponding to the orthogonal complement of $L$, i.e.

$$L_\perp = \{ z \in B : [z, x_i] = 0 \ \text{for all } i \in \mathbb{N}_m \}.$$

Thus $z \in L_\perp$ if and only if $\xi(z) = 0$ for all $\xi \in L$, and by assumption $\hat{w}^* \notin L$ and so also $\hat{w} \notin \iota^{-1}(L)$.
Now by definition we have that

$$L_\perp = \bigcap_{i \in \mathbb{N}_m} \ker x_i^*,$$

so the codimension of $L_\perp$ is $n = \dim L \leq m$. Without loss of generality we can assume that not all $y_i$ are zero, as otherwise $w = 0$ is a trivial solution in the span of the data points. Since not all $y_i$ are zero, $L \neq \{0\}$ and thus $n \geq 1$. But since $L$ is $n$-dimensional and the dual map is a homeomorphism, $\iota^{-1}(L)$ is homeomorphic to a linear space of dimension $n$. This means that $\iota^{-1}(L) \cap (\operatorname{span}\{\hat{w}\} + L_\perp)$ is homeomorphic to a one-dimensional space and hence in particular contains a nonzero element.
Now fix such a nonzero element $z = \lambda \hat{w} + u$ with $u \in L_\perp$ and $z^* \in L$. Note that $z$ being nonzero means that $\lambda \neq 0$: otherwise $z \in L_\perp$ with $z^* \in L$, giving $\|z\|^2 = z^*(z) = 0$. Thus $\tilde{w} := z / \lambda$ is well defined. By homogeneity of the dual map $\tilde{w}^* = z^* / \lambda \in L$ and so

$$\tilde{w}^* = \sum_{i=1}^m c_i x_i^*$$

and thus

$$\tilde{w} = \hat{w} + \frac{u}{\lambda}$$

with $u / \lambda \in L_\perp$.
This means we have constructed a $\tilde{w}$ with dual element in the span of the dual elements of the data points and $\tilde{w} - \hat{w} \in L_\perp$, which by definition of $L_\perp$ and linearity of the semi-inner product in its first argument means that $\tilde{w}$ satisfies the interpolation constraints. It remains to show that in fact $\tilde{w}$ is in norm at most as large as $\hat{w}$.
To this end note that for all $z' \in L_\perp$ by definition $[z', x_i] = 0$ for all $i \in \mathbb{N}_m$, and hence we see that for $\tilde{w}^* = \sum_i c_i x_i^*$ we get that

$$[z', \tilde{w}] = \tilde{w}^*(z') = \sum_{i=1}^m c_i [z', x_i] = 0.$$

But by the equivalence with James orthogonality this means that
$\|\tilde{w}\| \leq \|\tilde{w} + \mu z'\|$ for all $\mu \in \mathbb{R}$ and $z' \in L_\perp$, or equivalently

$$\|\tilde{w}\| = \min\{ \|\tilde{w} + z'\| : z' \in L_\perp \}.$$

In particular $\|\tilde{w}\| \leq \|\tilde{w} + (\hat{w} - \tilde{w})\| = \|\hat{w}\|$.
But by the previous lemma we know that a function which is non-decreasing along tangential directions is non-decreasing in norm, so $\|\tilde{w}\| \leq \|\hat{w}\|$ implies that $f(\tilde{w}) \leq f(\hat{w})$, and so we have found a solution with dual element in the span of the dual elements of the data points as claimed. ❑

Using those two results we can now give the proof that admissible regularisers are almost radially symmetric in the sense of theorem 3.1.

Proof (Of theorem 3.1):

Part 1: ($f$ continuous in radial direction implies $f$ radially symmetric)

We now show that, instead of differentiability, the assumption that $f$ is continuous in the radial direction is sufficient to conclude that it has to be radially symmetric. We prove this by contradiction. Assume $f$ is admissible but not radially symmetric. Then there exists a radius $r > 0$ so that $f$ is not constant on the circle of radius $r$, and hence there are two points $x, y \in B$ with $\|x\| = \|y\| = r$ so that, without loss of generality, $f(x) < f(y)$.
But then by the lemma of section 3, for all $z$ with $\|z\| > r$ we have $f(z) \geq f(y)$, and thus $\lim_{\lambda \searrow 1} f(\lambda x) \geq f(y) > f(x)$, contradicting radial continuity of $f$ at $x$. Hence $f$ has to be constant along every circle as claimed.

Part 2: (Radial mollification preserves being nondecreasing in tangential directions)

The observation in part 1 is useful as we can easily radially mollify a given admissible $f$ so that the property of being non-decreasing along tangential directions is preserved.
Indeed, let $\varphi : \mathbb{R} \to [0, \infty)$ be a mollifier such that $\int_{\mathbb{R}} \varphi = 1$ with support in $[-\varepsilon, 0]$, and for each ray given by some $u \in B$ of unit norm define the mollified regulariser along this ray by

$$h_\varepsilon(r) = \int_{\mathbb{R}} f\big( (r - t) u \big) \varphi(t) \, dt.$$

We thus obtain a radially mollified regulariser on $B$ given by

$$f_\varepsilon(x) = \int_{\mathbb{R}} f\left( x - t \frac{x}{\|x\|} \right) \varphi(t) \, dt.$$

We check that this function is still non-decreasing along tangential directions, i.e. we need to show that for $x, y \in B$ such that $[y, x] = 0$ we still have

$$f_\varepsilon(x + y) \geq f_\varepsilon(x). \tag{8}$$

Note that by the lemma of section 3 we have that $f\big( (\|x + y\| - t) \frac{x + y}{\|x + y\|} \big) \geq f\big( (\|x\| - t) \frac{x}{\|x\|} \big)$ for all $t$ in the support of $\varphi$ if $\|x + y\| - t \geq \|x\| - t$ for all such $t$. But this is clear as it is equivalent to $\|x + y\| \geq \|x\|$. As $t$ is non-positive on the support of $\varphi$, both radii are positive, and the required inequality is just James orthogonality and thus follows from the fact that

$$[y, x] = 0.$$

This proves that the integral estimate eq. 8 indeed holds and hence the radially mollified $f_\varepsilon$ is indeed non-decreasing in tangential directions.

Part 3: ($f$ is of the claimed form)

Putting these two observations together we obtain the result. By parts 1 and 2, $f_\varepsilon$ is of the form $f_\varepsilon(w) = h_\varepsilon(\|w\|)$ for some continuous, non-decreasing $h_\varepsilon$. But if we consider $f$ along any two distinct, fixed directions given by $u, v \in B$, $\|u\| = \|v\| = 1$, as $\varepsilon \to 0$, then the mollifications of both $h_u(r) := f(r u)$ and $h_v(r) := f(r v)$ must equal $h_\varepsilon$, so $h_u = h_v$ almost everywhere. Further, by continuity of the mollifications, $h_u$ and $h_v$ can only differ in points of discontinuity. As each $h_u$ is a monotone function on the positive real line it can only have countably many points of discontinuity. Clearly, as the above bounds are only making statements about values outside a given circle and each $h_u$ is itself monotone, each $h_u$ is free to attain any value within the monotonicity constraint at those points of discontinuity. This shows that $f$ is of the claimed form. ❑

Remark

We see that everything we say about $f$ in this section relies crucially on the observation that it being admissible is a statement about its behaviour along tangents, as stated in the lemmas of section 3. But there is in fact no tangent into the complex plane, i.e. for fixed $x$ there is no tangent that intersects the ray $\{\lambda e^{i\theta} x : \lambda \geq 1\}$ for any $\theta \neq 0$. Likewise it is not possible to reach any point along said ray via an "out and back" argument as in part 1 of the proof of the first lemma of section 3. For this reason it is currently not clear whether one can say anything about the situation in complex vector spaces.

4 The solution is determined by the space

First of all, while it has been known that for regularisers which are a strictly increasing function of the norm every solution is within the linear span of the data, the proofs in section 3 show immediately that something stronger can be said: for a regularised interpolation problem with an admissible regulariser to have a solution which is not in the linear span of the data, the regulariser must have a flat region, and the solution then has to lie within this flat region.
But there is more to be said: in fact it turns out that for admissible regularisers the set of solutions in the linear span is independent of the regulariser.
In [12] Micchelli and Pontil consider the minimal norm interpolation problem

$$\min\{ \|w\| : w \in B, \ \xi_i(w) = y_i \ \text{for all } i \in \mathbb{N}_m \}$$

where $B$ is a Banach space and the $\xi_i$ are continuous linear functionals on $B$. Hence this agrees with eq. 4 for $f = \|\cdot\|$ and $\xi_i = x_i^*$, i.e. $\xi_i(w) = [w, x_i]$, with $B$ a uniformly convex, uniformly smooth Banach space, giving the minimal norm interpolation problem

$$\min\{ \|w\| : w \in B, \ [w, x_i] = y_i \ \text{for all } i \in \mathbb{N}_m \}. \tag{9}$$

This leads to the following result.

Theorem 4.1

Let $f$ be admissible. Then any $w \in B$ with $w^* \in \operatorname{span}\{x_i^* : i \in \mathbb{N}_m\}$ is a solution of eq. 4 if and only if it is a solution of eq. 9.

The proof of this result relies on the following proposition, which was proved by Micchelli and Pontil in [12].

Proposition (Theorem 1 in [12])

$\hat{w}$ is a solution of eq. 9 if and only if it satisfies the constraints
$[\hat{w}, x_i] = y_i$ for all $i \in \mathbb{N}_m$, and there is a linear combination of the continuous linear functionals defining the problem which peaks at $\hat{w}$, i.e. there exists $c \in \mathbb{R}^m$ such that

$$\left( \sum_{i=1}^m c_i x_i^* \right)(\hat{w}) = \left\| \sum_{i=1}^m c_i x_i^* \right\| \|\hat{w}\|.$$
Using this result it is easy to prove theorem 4.1.

Proof (Of theorem 4.1):

Part 1: (A solution of eq. 4 is a solution of eq. 9)

Assume that $\hat{w}$ is a solution of eq. 4 such that $\hat{w}^* = \sum_{i=1}^m c_i x_i^*$. Then $\hat{w}$ trivially satisfies the interpolation constraints, and by definition of the duality map

$$\hat{w}^*(\hat{w}) = [\hat{w}, \hat{w}] = \|\hat{w}^*\| \|\hat{w}\|,$$

so $\hat{w}^*$, which is a linear combination of the continuous linear functionals defining the problem, peaks at $\hat{w}$. Thus by the proposition above $\hat{w}$ is a solution of eq. 9.

Part 2: (A solution of eq. 9 is a solution of eq. 4)

Assume $\hat{w}$ is a solution of eq. 9. Then by the proposition above there exists $c \in \mathbb{R}^m$ such that the functional $\xi = \sum_{i=1}^m c_i x_i^*$ peaks at $\hat{w}$, i.e.

$$\xi(\hat{w}) = \|\xi\| \|\hat{w}\|.$$

But then for any $w \neq \hat{w}$ satisfying the interpolation constraints we have that

$$\|\xi\| \|\hat{w}\| = \xi(\hat{w}) = \xi(w) < \|\xi\| \|w\|,$$

where the middle equality holds because $\xi$ is a linear combination of the constraint functionals, and the last inequality is strict because $\xi$ peaks at $\hat{w}$ and by strict convexity it peaks at a unique point. But this inequality shows that

$$\|\hat{w}\| < \|w\|$$

for all $w \neq \hat{w}$ satisfying the constraints, and thus, as $f$ is admissible, also

$$f(\hat{w}) \leq f(w)$$

and $\hat{w}$ is a solution of eq. 4. ❑

This result shows that any admissible regulariser on a uniformly convex and uniformly smooth Banach space has a unique solution in the linear span of the data, and the solution is the same for every admissible regulariser. This in particular means that it is the choice of the function space, and only the choice of the space, which determines the solution of the problem. We are thus free to work with whichever regulariser is most convenient for the application at hand. Computationally, in many cases this is likely going to be the norm itself; for theoretical results other regularisers may be more suitable, such as in the aforementioned paper [12], which heavily relies on a duality between the norm of the space and its continuous linear functionals.
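The independence of the solution from the regulariser can be observed numerically. Since the interpolation constraints $[w, x_i] = y_i$ are linear in $w$, one can minimise different admissible regularisers over the same constraint set and compare. A sketch (our own illustration, in finite-dimensional $\ell^p$ with $p = 1.5$, using scipy for the constrained minimisations):

```python
import numpy as np
from scipy.optimize import minimize

p = 1.5

def pnorm(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

def dual(x):
    # duality map on l^p, coordinate-wise
    return np.abs(x) ** (p - 1) * np.sign(x) / pnorm(x) ** (p - 2)

xs = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, -1.0]])
y = np.array([1.0, 2.0])
A = np.array([dual(x) for x in xs])   # [w, x_i] = (A w)_i is linear in w
cons = [{"type": "eq", "fun": lambda w: A @ w - y}]

# two different admissible regularisers: the norm and the squared norm
for f in (pnorm, lambda w: pnorm(w) ** 2):
    res = minimize(f, np.ones(3), constraints=cons)
    print(res.x)                       # numerically the same minimiser
```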


  • [1] Argyriou, A., Micchelli, C. A., and Pontil, M. When is there a representer theorem? vector versus matrix regularizers. Journal of Machine Learning Research 10 (2009), 2507–2529.
  • [2] Brezis, H. Functional Analysis, Sobolev Spaces and Partial Differential Equations, 1 ed. Universitext. Springer-Verlag New York, 2011.
  • [3] Cox, D. D., and O’Sullivan, F. Asymptotic analysis of penalized likelihood and related estimators. Ann. Statist. 18, 4 (12 1990), 1676–1695.
  • [4] Cucker, F., and Smale, S. On the mathematical foundations of learning. Bulletin of the American Mathematical Society 39, 1 (2001), 1–49.
  • [5] Dragomir, S. S. Semi-inner Products and Applications. Nova Science Publishers, 2004.
  • [6] Giles, J. R. Classes of semi-inner-product spaces. Transactions of the American Mathematical Society 129, 3 (1967), 436–446.
  • [7] James, R. C. Orthogonality and linear functionals in normed linear spaces. Transactions of the American Mathematical Society 61, 2 (1947), 265–292.
  • [8] Kimeldorf, G., and Wahba, G. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications 33, 1 (1971), 82–95.
  • [9] Köthe, G. Topological Vectorspaces I, vol. 159 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg, 1983.
  • [10] Lindenstrauss, J., and Tzafriri, L. Classical Banach Spaces II: Function Spaces, vol. 97 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag Berlin Heidelberg, 1979.
  • [11] Lumer, G. Semi-inner-product spaces. Transactions of the American Mathematical Society 100, 1 (1961), 29–43.
  • [12] Micchelli, C. A., and Pontil, M. A function representation for learning in Banach spaces. In Learning Theory (Berlin, Heidelberg, 2004), J. Shawe-Taylor and Y. Singer, Eds., Springer Berlin Heidelberg, pp. 255–269.
  • [13] Micchelli, C. A., and Pontil, M. Learning the kernel function via regularization. Journal of Machine Learning Research 6 (Jul 2005), 1099–1125.
  • [14] Schölkopf, B., Herbrich, R., and Smola, A. J. A generalized representer theorem. In Computational Learning Theory (Berlin, Heidelberg, 2001), D. Helmbold and B. Williamson, Eds., Springer Berlin Heidelberg, pp. 416–426.
  • [15] Schölkopf, B., and Smola, A. J. Learning with Kernels. MIT Press, 2002.
  • [16] Shawe-Taylor, J., and Cristianini, N. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
  • [17] Smola, A. J., and Schölkopf, B. On a kernel-based method for pattern recognition, regression, approximation, and operator inversion. Algorithmica 22, 1 (1998), 211–231.
  • [18] Zhang, H., Xu, Y., and Zhang, J. Reproducing kernel Banach spaces for machine learning. Journal of Machine Learning Research 10 (Dec 2009), 2741–2775.
  • [19] Zhang, H., and Zhang, J. Regularized learning in Banach spaces as an optimization problem: representer theorems. Journal of Global Optimization 54, 2 (2012), 235–250.

Appendix A

Proof (of the proposition in section 2):
We begin by showing norm-to-weak continuity and subsequently extend it to norm-to-norm continuity.
Since $B$ is reflexive the weak and weak* topologies on $B^*$ coincide, so we need to show that if $v_n \to v$ in norm then $\iota(v_n)(x) \to \iota(v)(x)$ for all $x \in B$.
Now as $(v_n)$ converges, the sequence $(\iota(v_n))$ is bounded, so it has a weakly convergent subsequence $\iota(v_{n_k}) \rightharpoonup \xi$. By [2] proposition 3.13 (iv) we then have

$$\iota(v_{n_k})(v_{n_k}) \to \xi(v).$$

But $\iota(v_{n_k})(v_{n_k}) = \|v_{n_k}\|^2 \to \|v\|^2$ and so $\xi(v) = \|v\|^2$. By [2] proposition 3.13 (iii) we further know that $\|\xi\| \leq \liminf_k \|\iota(v_{n_k})\| = \|v\|$. By strict convexity there is a unique element of $B^*$ with those two properties and hence $\xi = \iota(v)$.
Note that this means that for any subsequence of $(\iota(v_n))$ there exists a further subsequence converging weakly to this unique limit. This means that in fact the entire sequence converges weakly to this unique limit. Hence indeed $\iota(v_n) \rightharpoonup \iota(v)$ as claimed.
Having established norm-to-weak continuity one can easily extend it to norm-to-norm continuity using [2] proposition 3.32: since $B$ is uniformly smooth, $B^*$ is uniformly convex, and $\|\iota(v_n)\| = \|v_n\| \to \|v\| = \|\iota(v)\|$, so all the assumptions of proposition 3.32 in [2] are satisfied and indeed $\iota(v_n) \to \iota(v)$ in norm. ❑