# Piecewise Convex Function Estimation: Representations, Duality and Model Selection

We consider spline estimates which preserve prescribed piecewise convex properties of the unknown function. A robust version of the penalized likelihood is given and shown to correspond to a variable halfwidth kernel smoother, where the halfwidth adaptively decreases in regions of rapid change of the unknown function. When the convexity change points are prescribed, we derive representation results and smoothness properties of the estimates. A dual formulation is given which reduces the estimate to a finite dimensional convex optimization in the dual space.


## 1 Introduction

A common problem in nonparametric function estimation is that the estimate often has artificial wiggles that the original function does not possess. In practice, these extra inflection points have a very negative impact on the utility and credibility of the estimate. In this article, we examine function estimation which preserves the geometric shape of the unknown function, f. In other words, the number and location of the change points of convexity of the estimate, ^f, should approximate those of f.

We say that f has K change points of ℓ-convexity, x_1 ≤ x_2 … ≤ x_K, if (−1)^{k−1} f^{(ℓ)}(t) ≥ 0 for x_{k−1} ≤ t ≤ x_k. For ℓ = 0, f is nonnegative, and for ℓ = 1, the function is nondecreasing. In regions where the constraint of ℓ-convexity is active, f^{(ℓ)}(t) = 0 and f is a polynomial of degree ℓ − 1. For 1-convexity, f is constant in the active constraint regions, and for 2-convexity, the function is linear. Our subjective belief is that most people prefer smoothly varying functions such as quadratic or cubic polynomials even in the active constraint regions. Thus, piecewise 3-convexity or 4-convexity are also reasonable hypotheses.
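As a concrete illustration of counting ℓ-convexity change points from data, the following sketch estimates the sign pattern of the ℓ-th derivative by repeated finite differencing on a uniform grid. The helper name and the tolerance are our illustrative choices, not the paper's.

```python
def ell_convexity_sign_changes(values, ell, h, tol=1e-9):
    """Count sign changes of the ell-th finite difference of samples of f."""
    d = list(values)
    for _ in range(ell):  # repeated first differences approximate f^(ell)
        d = [(d[i + 1] - d[i]) / h for i in range(len(d) - 1)]
    signs = [1 if v > 0 else -1 for v in d if abs(v) > tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# f(t) = t**3 - t has f''(t) = 6t: exactly one 2-convexity change point.
h = 0.01
grid = [-1 + i * h for i in range(201)]
vals = [t**3 - t for t in grid]
print(ell_convexity_sign_changes(vals, 2, h))  # -> 1
```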

When the change points are prescribed, convex analysis can be employed to derive representation theorems, smoothness properties and duality results. Our work extends that of Refs. [2, 5, 7, 8, 9, 11, 12] to an important class of robust functionals. To motivate robust nonparametric estimation, we show that robust functionals correspond to variable halfwidth data-adaptive kernels, i.e. the effective halfwidth decreases in regions where the unknown function changes rapidly. When the number of change points is known, we prove the existence of a minimum of the penalized likelihood.

Sections 2 and 3 contain functional analysis preliminaries. Lemma 3.1 gives a characterization of negative polar cones in the spirit of [2]. Representation and duality theorems for constrained smoothing splines have been developed in [8, 9, 11] for the case of prescribed convexity change points. In Sections 4 and 6, we generalize these results to the case of nonquadratic penalty functions. In Section 5, we show how robust penalty functions correspond to data-adaptive variable halfwidth kernel smoothers. In Section 7, we consider estimating the change point locations by minimizing the penalized least squares fit.

## 2 Functional Analysis Preliminaries

We assume that the unknown function, f, is in the Sobolev space W^{m,p} with m ≥ 1 and 1 < p < ∞, where

 W^{m,p} ≡ { f | f^{(m)} ∈ L^p[0,1] and f, f′, …, f^{(m−1)} absolutely continuous } . (2.1)

For f ∈ W^{m,p}, we have the representation

 f(t) = ∑_{j=0}^{m−1} a_j P_j(t) + ∫_0^t (t−s)^{m−1}/(m−1)! f^{(m)}(s) ds , (2.2)

where a_j = f^{(j)}(0) and P_j(t) = t^j/j!. Equation (2.2) decomposes W^{m,p} into a direct sum of the space of polynomials of degree m−1 plus the set of functions whose first m−1 derivatives vanish at t = 0, which we denote by W_0^{m,p} [13]. We define the seminorm ∥f∥_{m,p} ≡ ∥f^{(m)}∥_{L^p}. We endow W^{m,p} with the norm:

 |∥f∥|^p_{m,p} = ∑_{j=0}^{m−1} |f^{(j)}(0)|^p + ∥f∥^p_{m,p} . (2.3)
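The representation (2.2) can be sanity-checked numerically. The sketch below takes m = 2 and f = sin (our illustrative choices) and approximates the integral remainder with a trapezoidal rule:

```python
import math

# Check f(t) = f(0) + f'(0) t + integral_0^t (t - s) f''(s) ds for f = sin.
def taylor_rep(t, n=2000):
    h = t / n
    s = [i * h for i in range(n + 1)]
    g = [(t - si) * (-math.sin(si)) for si in s]   # f'' = -sin
    integral = h * (sum(g) - 0.5 * (g[0] + g[-1])) # trapezoidal rule
    return math.sin(0.0) + math.cos(0.0) * t + integral

print(abs(taylor_rep(0.8) - math.sin(0.8)) < 1e-6)  # -> True
```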

The dual space of W^{m,p} is isomorphic to the direct sum of R^m and W_0^{m,q} with 1/p + 1/q = 1, and the duality pairing of g ∈ W^{m,q} and f ∈ W^{m,p} is

 ⟨⟨g,f⟩⟩ = ∑_{j=0}^{m−1} b_j a_j + ∫_0^1 f^{(m)}(t) g^{(m)}(t) dt , (2.4)

where a_j = f^{(j)}(0) and b_j = g^{(j)}(0). We denote the duality pairing by ⟨⟨·,·⟩⟩ and the W^{m,2} inner product by ⟨·,·⟩. In [13], W^{m,2} is given a reproducing kernel structure, where for each t ∈ [0,1], f(t) = ⟨R_t, f⟩. The same reproducing kernel structure carries over to W^{m,p} with

 R_t(s) = ∑_{j=0}^{m−1} P_j(t) P_j(s) + ∫_0^{min{t,s}} (t−u)^{m−1} (s−u)^{m−1} du / [(m−1)!]^2 . (2.5)

A bounded linear operator L has the representations Lf = ⟨h, f⟩ and Lf = ⟨⟨b, f⟩⟩, where h(t) = L_s R_t(s) and b is the corresponding element of the dual space. L_s denotes L acting on the first entry of R_t(s). In the standard case of point evaluation, where Lf = f(t_i), h = R_{t_i}.
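For m = 1, formula (2.5) gives R_t(s) = 1 + min(t, s) with inner product ⟨g, f⟩ = g(0) f(0) + ∫_0^1 g′(s) f′(s) ds, so ∂R_t/∂s is 1 for s < t and 0 for s > t. A minimal numerical check of the reproducing property f(t) = ⟨R_t, f⟩, using a midpoint rule and f = exp (our illustrative choice):

```python
import math

def rk_eval(f, fprime, t, n=4000):
    """Approximate <R_t, f> = f(0) + integral_0^t f'(s) ds for m = 1."""
    h = 1.0 / n
    acc = 0.0
    for i in range(n):
        mid = (i + 0.5) * h
        if mid < t:                 # dR_t/ds = 1 on [0, t], 0 on (t, 1]
            acc += h * fprime(mid)  # midpoint rule
    return f(0.0) + acc

t = 0.37
print(abs(rk_eval(math.exp, math.exp, t) - math.exp(t)) < 1e-3)  # -> True
```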

## 3 Convex Cone Constraints

In this and the next section, we assume that the change points of ℓ-convexity are given and that the unknown function is in the Sobolev space W^{m,p}. Given K change points, x_1 ≤ … ≤ x_K, we define the closed convex cone

 V^{K,ℓ}_{m,p}[x_1,…,x_K] = { f ∈ W^{m,p} | (−1)^{k−1} f^{(ℓ)}(t) ≥ 0 for x_{k−1} ≤ t ≤ x_k } , (3.1)

where x_0 ≡ 0 and x_{K+1} ≡ 1. Let x denote the row vector (x_1, …, x_K). Throughout this article, we require 0 ≤ ℓ ≤ m. By the Sobolev embedding theorem, f^{(ℓ)} is continuous for ℓ < m. For ℓ = m, we require the convexity constraint in (3.1) to hold almost everywhere. We define the class of functions with at most K change points as

 V^{K,ℓ}_{m,p} ≡ ⋃_{x_1 ≤ x_2 … ≤ x_K} { V^{K,ℓ}_{m,p}[x_1,…,x_K] ∪ (−V^{K,ℓ}_{m,p}[x_1,…,x_K]) } . (3.2)

By allowing x_k = x_{k+1}, we have embedded V^{K−1,ℓ}_{m,p} into V^{K,ℓ}_{m,p}. V^{K,ℓ}_{m,p} is the union of convex cones, and is closed but not convex. Similar piecewise ℓ-convex classes are defined in [6] with a supremum norm on the Hölder constant.
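A membership test for the cone (3.1) can be sketched by sampling the ℓ-th derivative on each interval and checking the alternating sign condition. The function below takes a callable for f^{(ℓ)} and is purely illustrative; names and tolerance are ours:

```python
import math

def in_cone(f_ell, change_pts, n=1000, tol=1e-9):
    """Check (-1)**(k-1) * f^(ell)(t) >= 0 on [x_{k-1}, x_k], x_0 = 0, x_{K+1} = 1."""
    knots = [0.0] + list(change_pts) + [1.0]
    for k in range(1, len(knots)):
        sgn = (-1) ** (k - 1)
        a, b = knots[k - 1], knots[k]
        for i in range(n + 1):
            t = a + (b - a) * i / n
            if sgn * f_ell(t) < -tol:
                return False
    return True

# g''(t) = 4*pi**2*sin(2*pi*t) is >= 0 on [0, 1/2] and <= 0 on [1/2, 1]:
g2 = lambda t: 4 * math.pi**2 * math.sin(2 * math.pi * t)
print(in_cone(g2, [0.5]))                # -> True
print(in_cone(lambda t: -g2(t), [0.5]))  # -> False
```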

For Theorem 6.1, we need the following results from convex analysis.

Definition [1] Let V be a closed convex cone in W; the negative polar, V⁻, of V is V⁻ ≡ { g ∈ W* | ⟨⟨g, f⟩⟩ ≤ 0 for all f ∈ V }.

For V^{K,ℓ}_{m,p}[x], we are able to give a more explicit characterization of the negative polar. Our result is restricted to the single cone with prescribed change points, while Deutsch et al. [2] consider the more difficult problem of dual cones for n-convex L^p approximation.

###### Lemma 3.1

The negative polar of V^{K,ℓ}_{m,p}[x] is the closure in W^{m,q} of the set of functionals L_μ, where L_μ is defined by ⟨⟨L_μ, f⟩⟩ = −∫_0^1 f^{(ℓ)}(s) dμ(s) for f ∈ W^{m,p} and μ a measure with (−1)^{k−1} dμ ≥ 0 on [x_{k−1}, x_k].

Proof. Integration by parts yields an explicit expression for ⟨⟨g, f⟩⟩ in terms of f^{(ℓ)}. We then find test functions which require each term separately to be nonpositive. In the interior of each interval, we choose localized test functions with a small localization parameter, supported where the constraint is strict and zero otherwise. The boundary conditions at the change points are proved inductively with a sequence of increasingly localized test functions.

###### Corollary 3.2

The negative polar of is .

###### Corollary 3.3

The negative polar of is the closure in of the .

###### Corollary 3.4

The negative polar of is closure in of the

The negative polar is useful in evaluating the normal cone of V^{K,ℓ}_{m,p}[x]:

###### Lemma 3.5 ([1], p.171)

Let V be a closed convex cone in W; the normal cone of V at f ∈ V satisfies N_V(f) = { g ∈ V⁻ | ⟨⟨g, f⟩⟩ = 0 }, where V⁻ is the negative polar of V.

## 4 Robust splines: Representations and Smoothness

In this section, we generalize representation and smoothness results to a large class of robust functionals. These robust functionals are advantageous because they downweight outliers and adaptively adjust the effective smoothing width. We are given N measurements of the unknown function, f:

 y_i = L_i f + ε_i = ⟨h_i, f⟩ + ε_i = ⟨⟨b_i, f⟩⟩ + ε_i , (4.1)

where the L_i are bounded linear operators on W^{m,p}, and the ε_i are uncorrelated random variables with variance σ_i^2. A robustified estimate of f given the measurements is the minimizer, ^f, of:

 VP[f] ≡ λ/p ∫ |f^{(m)}(s)|^p ds + ∑_{i=1}^N ρ_i(⟨h_i, f⟩ − y_i) , (4.2)

where the ρ_i are strictly convex, continuous functions. The standard case is p = 2 and ρ_i(u) = u^2. For an excellent discussion of the advantages of robustness in function estimation, see Mächler [5].
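To make (4.2) concrete, here is a minimal discrete sketch with p = 2 and a Huber-type ρ_i, solved by iteratively reweighted least squares on a uniform grid. The grid discretization of the penalty, the threshold c, and all names are our assumptions, not the paper's algorithm.

```python
import numpy as np

def huber_weight(r, c=1.0):
    """psi(r)/r for the Huber loss: 1 inside [-c, c], c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def robust_spline(y, m=2, lam=1.0, c=1.0, iters=50):
    n = len(y)
    D = np.eye(n)
    for _ in range(m):                 # m-th order finite difference matrix
        D = np.diff(D, axis=0)
    f = y.copy()
    for _ in range(iters):             # reweight residuals, re-solve
        w = huber_weight(y - f, c)
        A = np.diag(w) + lam * (D.T @ D)
        f = np.linalg.solve(A, w * y)
    return f

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
y = t**2 + 0.01 * rng.standard_normal(100)
y[50] += 5.0                           # one gross outlier
f = robust_spline(y, m=2, lam=10.0)
```

Because the outlier's residual is downweighted by ρ′(r)/r, the fit near i = 50 stays close to the underlying t² rather than being dragged up, illustrating the downweighting advantage discussed above.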

Theorem 4.1 is proven in [11] and Theorem 6.1 is proven in [8] for the case p = 2 and quadratic ρ_i. For the unconstrained case of Theorem 4.1, see [5]. Equation (4.5) and the corresponding smoothness results appear in [11] for the case p = 2, ℓ = 1 and quadratic ρ_i. That the {h_i} separate polynomials of degree m−1 means that if q is a polynomial of degree m−1, then ⟨h_i, q⟩ = 0 for i = 1 … N implies q ≡ 0.

###### Theorem 4.1

Let {h_i} separate polynomials of degree m−1; then the minimization problem (4.2) has a unique solution, ^f, in V^{K,ℓ}_{m,p}[x], and the minimizing function satisfies the differential equation:

 (−1)^m λ d^m/dt^m [ |^f^{(m)}|^{p−2} ^f^{(m)}(t) ] + ∑_{i=1}^N ρ_i′(⟨h_i, ^f⟩ − y_i) h_i(t) = 0 , (4.3)

in those regions where the convexity constraint is inactive, i.e. where (−1)^{k−1} ^f^{(ℓ)}(t) > 0 for x_{k−1} < t < x_k.

Proof. The functional (4.2) is strictly convex, lower semicontinuous and coercive, so by Theorem 2.1.2 of Ekeland and Temam [3], it has a unique minimum, ^f, on any closed convex set. From the generalized calculus of convex analysis, the solution satisfies

 0 ∈ (−1)^m λ d^m/dt^m [ |^f^{(m)}|^{p−2} ^f^{(m)}(t) ] + ∑_{i=1}^N ρ_i′(⟨h_i, ^f⟩ − y_i) h_i(t) + N_V(^f) , (4.4)

where N_V(^f) is the normal cone of V^{K,ℓ}_{m,p}[x] at ^f [1, p. 189]. The normal cone is characterized by Lemmas 3.1 and 3.5. From [11], each element of the normal cone is the limit of a discrete sum of point measures whose support points lie in the active constraint region. Integrating (4.4) yields

 λ |^f^{(m)}|^{p−2} ^f^{(m)}(t) = ∑_{i=1}^N ρ_i′(⟨h_i, ^f⟩ − y_i) ⟨h_i(s), (s−t)_+^{m−1}⟩ / (m−1)! + ∫ (s−t)_+^{m−ℓ−1} dμ(s) / (m−ℓ−1)! , (4.5)

where μ corresponds to a particular element of the normal cone.

For p = 2 and quadratic ρ_i, Theorem 4.1 can be derived as a consequence of the corresponding result for constrained interpolation [7].

###### Corollary 4.2

If are in , then the minimizing function of (4.2) is in .

Proof. Since is times differentiable, the first term on the right hand side of (4.5) is times differentiable. By hypothesis, and thus is in . Integrating (4.5) yields .

## 5 Equivalent adaptive halfwidth of robust splines

Replacing the standard spline likelihood functional (p = 2 and quadratic ρ_i) in (4.2) with a more robust version has several well-known advantages. First, outliers are less influential when ρ_i downweights large values of the residual error. Second, for p < 2, the set of candidate functions is increased, and the solution may have sharper local variation than in the p = 2 case. We now describe a third important advantage: the effective halfwidth adapts to the unknown function.

In [10], it is shown that as the number of measurements increases, the spline estimate converges to a local kernel smoother estimate (provided the measurement times, t_i, are nearly regularly spaced). For technical details, see [10]. Convergence proofs are available only for p = 2. The resulting effective kernel halfwidth, h(t), scales as [λ/F′(t)]^{1/2m}, where F is the limiting distribution of the measurement points.

For p ≠ 2, no theory exists on the effective halfwidth of a robust spline estimate. We assume that ^f converges to f under hypotheses similar to those used for the p = 2 case in [10]. These conditions relate to the discrepancy of the measurement times, t_i, the smoothness of f, and the scaling of the smoothing parameter λ with N. The appropriate modifications for p ≠ 2 are unknown.

We can make a heuristic two-scale analysis of (4.4) in the continuum. We assume that in the continuum limit, the estimate satisfies the following equation to zeroth order:

 (−1)^m λ d^m/dt^m [ |^f^{(m)}|^{p−2} ^f^{(m)}(t) ] + ^f(t) = y(t) , (5.1)

where y(t) ≡ f(t) + ε(t), with ε being a white noise process. Away from the ℓ-convexity change points, we linearize (5.1) about f. Let ~f ≡ ^f − f be the linearized variable for (5.1), where ~f satisfies

 (−1)^m (p−1) λ d^m/dt^m [ |f^{(m)}|^{p−2} ~f^{(m)}(t) ] + ~f(t) = Z(t) . (5.2)

When λ is small but nonzero, the homogeneous equation may be solved using the Wentzel-Kramers-Brillouin expansion. The resulting Green's function for (5.2) may be recast as a kernel smoother with an effective halfwidth:

 h_eff(t) ∼ [ λ |f^{(m)}(t)|^{p−2} / F′(t) ]^{1/2m} . (5.3)

For p < 2, the robustified functional automatically reduces the halfwidth in regions of large |f^{(m)}|, just like a variable halfwidth smoother. We caution that this result has not been rigorously derived.

For the equivalent kernel, the bias error scales as h_eff^m |f^{(m)}(t)| while the variance is proportional to 1/(N h_eff F′(t)). The halfwidth that minimizes the mean square error scales as |f^{(m)}(t)|^{−2/(2m+1)}. The two halfwidths agree at p = 2/(2m+1), but such a value of p < 1 makes the variational problem ill-conditioned.
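The algebra behind this comparison can be checked symbolically: setting the robust-spline exponent (p − 2)/(2m) in |f^{(m)}| equal to the MSE-optimal exponent −2/(2m + 1) and solving for p gives p = 2/(2m + 1), which is below 1 for every m ≥ 1.

```python
from fractions import Fraction

def agreement_p(m):
    # solve (p - 2)/(2m) = -2/(2m + 1) for p
    return 2 + Fraction(-2, 2 * m + 1) * (2 * m)

for m in (1, 2, 3):
    assert agreement_p(m) == Fraction(2, 2 * m + 1)
    print(m, agreement_p(m))  # -> 1 2/3, 2 2/5, 3 2/7
```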

We recognize that this derivation is formal, but we believe that a rigorous multiple scale analysis may prove (5.3). Our purpose is only to motivate the connection between robust splines and adaptive halfwidth kernel smoothers.

## 6 Constrained smoothing splines and duality

In (4.4), the intervals on which ^f^{(ℓ)} vanishes are unknown and need to be found as part of the optimization. Using the differential characterization (4.4) loses the convexity properties of the underlying functional. For this reason, extremizing the dual functional is now preferred.

###### Theorem 6.1 (Convex Duality)

The dual variational problem of Theorem 4.1 is: Minimize over α ∈ R^N

 VP*[α; x] ≡ λ^{1−q}/q ∫ |[P_x* Bα]^{(m)}(s)|^q ds + ∑_{i=1}^N [ ρ_i*(α_i) − α_i y_i ] , (6.1)

where ρ_i* is the Fenchel/Legendre transform of ρ_i, Bα ≡ ∑_{i=1}^N α_i b_i, and 1/p + 1/q = 1. The dual projection P_x* is defined as

 ∫ |[P_x* g]^{(m)}(s)|^q ds ≡ inf_{~g ∈ V⁻} ∫_0^1 |g^{(m)}(s) − ~g^{(m)}(s)|^q ds , (6.2)

where the infimum is subject to the cone constraints of Lemma 3.1 on ~g. The dual problem is strictly convex, and its minimum is the negative of the infimum of (4.2). When the h_i are linearly independent, the minimum satisfies the differential conditions:

 α_i = ρ_i′(⟨h_i, ^f⟩ − y_i)  and  ⟨h_i, ^f⟩ − y_i = ρ_i*′(α_i) ,   i = 1 … N . (6.3)

Proof. Let χ_V be the indicator function of V ≡ V^{K,ℓ}_{m,p}[x] and define

 U(f) = λ/p ∫_0^1 |f^{(m)}(s)|^p ds + χ_V(f) . (6.4)

We claim that the Legendre transform of U is (6.2). Note that χ_V* = χ_{V⁻}, the indicator function of the dual cone V⁻. The Legendre transform of the first term in (6.4) is

 V_1*(g) = λ^{1−q}/q ∫_0^1 |g^{(m)}(s)|^q ds for g ∈ W_0^{m,q}, and ∞ otherwise. (6.5)

Our claim follows from the fact that the Legendre transform of a sum is the infimal convolution of the individual Legendre transforms. The remainder of the theorem, including the differential conditions (6.3), follows from the general duality theorem of Aubin and Ekeland [1, p. 221].
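The conjugacy relations (6.3) can be checked numerically for a concrete strictly convex ρ. We use ρ(r) = cosh(r) − 1 (an illustrative choice, not from the paper), for which ρ′ = sinh and the Legendre transform satisfies ρ*′ = asinh, so ρ′ and ρ*′ are mutual inverses:

```python
import math

for r in (-2.0, -0.3, 0.0, 1.7):
    a = math.sinh(r)                       # a = rho'(r)
    assert abs(math.asinh(a) - r) < 1e-12  # r = rho*'(a)
print("conjugacy relations verified for rho = cosh - 1")
```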

An alternative formulation of the duality result for quadratic smoothing problems is given in [9]. For both theories, the case ℓ < m is difficult to evaluate in practice because the minimization in (6.2) can only rarely be reduced to an explicit finite dimensional problem. Only a few partial results are known when ℓ < m [2, 8, 9]. For the case ℓ = m, the minimization over the dual cone can be done explicitly and yields the following simplification:

###### Corollary 6.2

For ℓ = m, the dual projection, P_x*, is a local operator with [P_x* g]^{(m)}(s) = g^{(m)}(s) if (−1)^{k−1} g^{(m)}(s) > 0 for s ∈ [x_{k−1}, x_k], and zero otherwise. Thus the minimization of (6.2) is finite dimensional.
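Under our reading of Corollary 6.2, the dual projection for ℓ = m acts pointwise: it keeps g^{(m)}(s) where it has the sign (−1)^{k−1} on [x_{k−1}, x_k] and zeroes it elsewhere. A hypothetical sketch on a grid (names and sign conventions are ours, not the paper's):

```python
def dual_projection_mth(g_m, knots, s_grid):
    """Pointwise [P_x* g]^(m)(s) for ell = m: keep values with sign
    (-1)**(k-1) on the k-th interval, zero the rest."""
    out = []
    for s, v in zip(s_grid, g_m):
        k = 1 + sum(1 for x in knots if x < s)  # interval index of s
        sgn = (-1) ** (k - 1)
        out.append(v if sgn * v > 0 else 0.0)
    return out

s = [i / 10 for i in range(11)]
proj = dual_projection_mth([1.0] * 11, [0.5], s)
print(proj)  # constant 1 survives on [0, 0.5], is zeroed on (0.5, 1]
```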

## 7 Change point estimation

When the number of change points is fixed, but the locations are unknown, we can estimate them by minimizing the penalized likelihood (4.2) with respect to the change point locations. We now show that there exists a set of minimizing change points.

###### Theorem 7.1

For each K with ℓ = m, there exist change points x_1 ≤ … ≤ x_K which minimize the variational problem (4.2).

Proof. We use the dual variational problem (6.1) and maximize over the change points x after minimizing over the α_i. We need only consider x in the compact region 0 ≤ x_1 ≤ … ≤ x_K ≤ 1. For ℓ = m, explicit construction of the functional (6.1) shows that it is jointly continuous in α and x. Since (6.1) is convex in α, Theorem 7.1 follows from the min-max theorem [1, p. 296].

We conjecture that Theorem 7.1 is true for ℓ < m, but we lack a proof that Eq. (6.2) is continuous with respect to x for ℓ < m. The change point locations need not be unique. The proof requires x_k ≤ x_{k+1} instead of x_k < x_{k+1} in the ordering to make the change point space compact. When x_k = x_{k+1}, the number of effective change points is less than K.

In [6], Mammen considers the case where the number of change points is known but the locations are unknown. The function is estimated using simple least squares on a class of functions roughly analogous to V^{K,ℓ}_{m,p}. Mammen proves that this estimate achieves the optimal convergence rate for the mean integrated square error. Unfortunately, Mammen's estimator is not unique and often results in aesthetically unappealing fits.

For both formulations, finding the optimal change point locations is computationally intensive. For each candidate set of change points, the likelihood function needs to be minimized subject to piecewise convexity constraints. One advantage of our formulation is that for each candidate value of x, the programming problem is strictly convex in the dual. This strict convexity is lost if one uses a penalty functional with p = 1, as in [6], or one corresponding to a total variation norm. If the total variation norm is used and an absolute value penalty function is employed (ρ_i(u) = |u|), the programming problem reduces to constrained linear programming.
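For intuition, here is a toy version of the change point search for K = 1 and ℓ = 1 (unimodal least squares with no smoothness penalty, an illustrative simplification of ours): for each candidate change point we fit a nondecreasing piece and a nonincreasing piece by pool-adjacent-violators and keep the split with the smallest squared error.

```python
def pava(y):
    """Pool-adjacent-violators: least squares nondecreasing fit."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            mean = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:], wts[-2:] = [mean], [w]
    out = []
    for v, w in zip(vals, wts):
        out.extend([v] * w)
    return out

def unimodal_fit(y):
    """Scan the change point k: nondecreasing on y[:k], nonincreasing on y[k:]."""
    best = None
    for k in range(1, len(y)):
        up = pava(y[:k])
        down = [-v for v in pava([-v for v in y[k:]])]
        fit = up + down
        sse = sum((a - b) ** 2 for a, b in zip(fit, y))
        if best is None or sse < best[0]:
            best = (sse, k, fit)
    return best

y = [0.1, 0.9, 1.9, 3.1, 2.2, 1.1, 0.2]
sse, k, fit = unimodal_fit(y)
print(k, sse)  # -> 3 0.0
```

Here both k = 3 and k = 4 fit the data perfectly and the scan returns the first, which illustrates the remark above that minimizing change point locations need not be unique.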

## 8 Discussion

We have considered robust smoothing splines under piecewise convex constraints. We generalize the standard representation and smoothness results to nonlinear splines using convex analysis. When the same derivative is both constrained and penalized (ℓ = m), the dual problem is finite dimensional.

We have sketched a derivation of the effective halfwidth of a robust spline. By robustifying the functional, the effective halfwidth (5.3) for the equivalent kernel smoother scales as |f^{(m)}|^{(p−2)/2m}. The halfwidth that minimizes the mean square error scales as |f^{(m)}|^{−2/(2m+1)}. Thus robust splines adjust the halfwidth, but not as much as the asymptotically optimal local halfwidth would. When the number of convexity change points is known, their locations may be estimated by minimizing the penalized likelihood. For ℓ = m, we have existence, but not necessarily uniqueness.

Acknowledgments: Work funded by U.S. Dept. of Energy Grant DE-FG02-86ER53223.

## References

• [1] J.-P. Aubin, and I. Ekeland, “Applied Nonlinear Analysis”, John Wiley, New York 1984.
• [2] F. Deutsch, V. A. Ubhaya, and Y. Xu, Dual cones, constrained n-convex Lp approximation and perfect splines, J. Approx. Th. 80 (1995), 180-203.
• [3] I. Ekeland and R. Temam, “Convex Analysis and Variational Problems”, North Holland, Amsterdam 1976.
• [4] W. Li, D. Naik, and J. Swetits, A data smoothing technique for piecewise convex/concave curves, SIAM J. Sci. Comp 17 517-537 (1996).
• [5] M. Mächler, Variational Solution of Penalized Likelihood Problems and Smooth Curve Estimation, Ann. Stat., 23 (1996), 1496-1517.
• [6] E. Mammen, Nonparametric regression under qualitative smoothness assumptions, Ann. Stat., 19 (1991), 741-759.
• [7] C. A. Micchelli, P. W. Smith, J. Swetits, and J. Ward, Constrained Lp approximation, Constr. Approx. 1 (1985), 93-102.
• [8] C. A. Micchelli and F. Utreras, Smoothing and interpolation in a convex set of a Hilbert space, SIAM J. Sci. Stat. Comp. 9 (1988), 728-746.
• [9] C. A. Micchelli and F. Utreras, Smoothing and interpolation in a convex set of a Hilbert space: II, the semi-norm case, Math. Model. & Num. Anal. 25 (1991), 425-440.
• [10] B. W. Silverman, Spline smoothing: the equivalent variable kernel method, Ann. Stat. 12 (1984), 898-916.
• [11] F. Utreras, Smoothing noisy data under monotonicity constraints - Existence, characterization and convergence rates, Numerische Math. 47 (1985), 611-625.
• [12] M. Villalobos and G. Wahba, Inequality-constrained multivariate smoothing splines with application to the estimation of posterior probabilities, J. Amer. Stat. Assoc. 82 (1987), 239-248.
• [13] G. Wahba, Spline Models for Observational Data, SIAM, Philadelphia, PA 1991.