# Accelerating Min-Max Optimization with Application to Minimal Bounding Sphere

We study the min-max optimization problem where each function contributing to the max operation is strongly-convex and smooth with bounded gradient in the search domain. By smoothing the max operator, we show the ability to achieve an arbitrarily small positive optimality gap δ in Õ(1/√δ) computational complexity (up to logarithmic factors), as opposed to the state-of-the-art strong-convexity computational requirement of O(1/δ). We apply this result to the well-known minimal bounding sphere problem and demonstrate that we can achieve a (1+ε)-approximation of the minimal bounding sphere, i.e. identify a hypersphere enclosing a total of n given points in the d-dimensional unbounded space R^d with a radius at most (1+ε) times the actual minimal bounding sphere radius for an arbitrarily small positive ε, in Õ(nd/√ε) computational time, as opposed to the state-of-the-art core-set methodology, which needs O(nd/ε) computational time.

## I Introduction

Min-max optimization has been extensively studied in the literature due to its wide range of applications. It appears throughout statistics, operations research and engineering under the topics of throttling, resource allocation, computer graphics, computational geometry, clustering, anomaly detection and facility location.

There have been attempts to solve this problem via alternative formulations or smoothing approximations in [1, 2, 3, 4], and the technique of solving the min-max optimization by smoothing the target has been extensively studied in the literature. However, even after these extensive studies, convergence rate analysis remains very limited; existing works generally show only that they converge to the optimal solution given enough time (existence proofs). An example of the limited convergence rate analysis regarding this problem can be found in [5], which solves the generic non-smooth min-max optimization.

Consequently, to our knowledge, for the first time in the literature, we derive a major improvement on the convergence guarantees for min-max optimization problems where the components contributing to the max operation are strongly-convex, smooth and have bounded gradients. Our convergence rate is such that, for an optimality gap of δ, we need Õ(1/√δ) computational resources, improving upon the O(1/δ) optimization complexity for non-smooth strongly-convex functions having bounded gradients [6].

A specific instance of this widely studied optimization problem is named the minimal bounding sphere. There have been several attempts to solve this problem deterministically. The computational complexity is generally super-linear with respect to the number of points and the vector space dimension, with polynomial dependencies whose integer powers are, at times, much larger than 1. Thus, the feasible approaches to this problem generally focus on heuristic methods with experimentally shown efficiency [7, 8, 9].

An alternative approach to finding minimal bounding spheres, with linear time-complexity dependencies with respect to the number of points and the vector space dimension, is the so-called (1+ε)-approximation. The corresponding attempts are based on core-set constructions [10, 11]. The state-of-the-art among these approximative solutions finds a bounding sphere with radius (1+ε)R, where R denotes the actual minimal bounding sphere radius, with time-complexity O(nd/ε) [11] for an arbitrarily large number of points n and vector space dimension d. We improve upon this by showing that the time-complexity can be reduced to Õ(nd/√ε) (up to logarithmic factors).

We next continue with a rigorous formulation of the problem, after which we demonstrate how the improvements for both the general min-max optimization and the minimal bounding sphere are achieved.

### I-A Problem Description for the General Min-Max Optimization

Our convex optimization problem is such that the function to be minimized is of the form:

 f(x) ≜ max_{1≤i≤n} f_i(x),  (1)

where n is the number of functions amongst which we select the maximum for a given argument x via the max operator. Each function f_i(·) is twice-differentiable and displays strong-convexity with Lipschitz-smoothness. The gradients are also assumed to be bounded, at least in the subspace of R^d subjected to the iterative search, which includes the optimal point. This subspace can be either preset or naturally occurring due to the nature of our method. Twice differentiability is required since our analysis depends on the behavior of the Hessian matrix.

Normally, f(x) in (1) is non-smooth due to the maximum operator. However, we will show that, by optimizing a substitute function which approximates the original sufficiently well, we can improve the time dependency of the regular convergence rate from O(1/δ) to Õ(1/√δ), where O(·) is the big-O notation.

After we finish our discussion of the general setting, we will investigate a special case of this min-max optimization, called the minimal bounding sphere, in Section IV.

## II Smooth Approximation of the Max Operator

Let us define the new function g_s(·), which we shall use as a substitute for f(·), as follows:

 g_s(x) = (1/s) log( Σ_{i=1}^{n} exp(s f_i(x)) ),  (2)

where log(·) is the natural logarithm, exp(·) is the natural exponentiation and s > 0. This form of smooth maximum is also referred to as "LogSumExp". We now present a lemma regarding how well g_s(·) approximates f(·).

###### Lemma 1.

The substitute function g_s(x) is both lower and upper bounded by f(x), with the upper bound having an additive redundancy of at most (log n)/s, such that

 f(x) ≤ g_s(x) ≤ f(x) + (log n)/s.
###### Proof.

By the definitions of f(·) and g_s(·), we have

 f(x) = (1/s) log( exp(s f(x)) ) = (1/s) log( exp(s f_j(x)) ),  (3)

where j = argmax_{1≤i≤n} f_i(x) due to (1). Since log(·) is a monotonically increasing function and exp(s f_i(x)) > 0 for all i, the combination of (2) and (3) yields

 f(x) ≤ (1/s) log( Σ_{i=1}^{n} exp(s f_i(x)) ) = g_s(x).  (4)

Again, due to the monotonicity of log(·) and exp(·), we can replace each individual f_i(x) in (4) with f(x) as an upper-bound. In combination with (4), this results in the lemma. ∎

Lemma 1 implies that if we optimize g_s(·) instead, we incur an additional redundancy of at most (log n)/s as a cost for smoothing the target function.
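The sandwich bound of Lemma 1 is easy to verify numerically. Below is a minimal sketch (the helper `smooth_max` and the sample values are illustrative, not from the paper), using the standard max-shift trick so that the exponentials in (2) do not overflow:

```python
import numpy as np

def smooth_max(values, s):
    """LogSumExp smooth maximum g_s = (1/s) log(sum_i exp(s * v_i)), as in (2).

    The maximum is subtracted before exponentiating for numerical stability;
    this is mathematically identical to the plain definition.
    """
    values = np.asarray(values, dtype=float)
    m = values.max()
    return m + np.log(np.exp(s * (values - m)).sum()) / s

# Lemma 1: f <= g_s <= f + log(n)/s, with the gap shrinking as s grows.
vals = np.array([0.3, -1.2, 2.5, 2.4])
for s in (1.0, 10.0, 100.0):
    g = smooth_max(vals, s)
    assert vals.max() <= g <= vals.max() + np.log(len(vals)) / s
```

Note that the approximation error is controlled purely by n and s, independently of the values themselves, which is exactly what lets the "smoother" s be tuned against a requested optimality gap later on.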

###### Corollary 1.

The gap between f(x) and f(y) can be decomposed into a "smoothing" regret and the gap between their smoothed counterparts as follows:

 f(x) − f(y) ≤ (log n)/s + [ g_s(x) − g_s(y) ],

where the "smoothing" regret is (log n)/s.

###### Proof.

The result follows directly from Lemma 1. ∎

We introduce the short-hand notation for the optimal point minimizing f(·) as:

 x* ≜ argmin_{x∈R^d} f(x).  (5)

Next, we derive some properties of this new function g_s(·), namely its gradient and Hessian, after which we can investigate its strong-convexity and smoothness parameters.

### II-A The Gradient and the Hessian of the Substitute Function

We start with a probability vector definition, which is used for writing weighted sums via expectations.

###### Definition 1.

Given the "smoother" s and the argument x, we generate the probability vector p_s(x) such that:

 p_{s,i}(x) = exp(s f_i(x)) / Σ_{j=1}^{n} exp(s f_j(x)),

where p_{s,i}(x) is the i-th element of the vector p_s(x).

In the following lemmas, we compute the gradient and, from there, the Hessian of the substitute function g_s(·), which are used for the iterative optimization.

###### Lemma 2.

We can write the gradient ∇g_s(x) as a weighted combination of the individual gradients ∇f_i(x), where the weights sum to 1, such that

 ∇g_s(x) = E_{p_s(x)}[ ∇f_i(x) ],

where E_{p_s(x)}[·] is the expectation operation with respect to the probability mass function corresponding to the size-n vector p_s(x). Each element p_{s,i}(x) of p_s(x) corresponds to the probability assigned to ∇f_i(x), as defined in Definition 1.

###### Proof.

The result directly follows after taking the partial derivatives of (2) with respect to each element of x. ∎
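As a sanity check, the expectation form of the gradient can be compared against a finite-difference derivative of g_s. The quadratic components f_i(x) = ‖x − b_i‖² below are an illustrative choice (they reappear in Section IV); the variable names are ours:

```python
import numpy as np

# Smoothed max of quadratics f_i(x) = ||x - b_i||^2 (illustrative choice);
# Lemma 2 says grad g_s(x) = sum_i p_{s,i}(x) * grad f_i(x).
rng = np.random.default_rng(0)
b = rng.standard_normal((5, 3))          # n = 5 points in R^3
x = rng.standard_normal(3)
s = 2.0

fi = ((x - b) ** 2).sum(axis=1)          # values f_i(x)
w = np.exp(s * (fi - fi.max()))          # max-shifted for stability
p = w / w.sum()                          # probability vector of Definition 1
grad = (p[:, None] * 2 * (x - b)).sum(axis=0)   # E_{p_s(x)}[grad f_i(x)]

# Compare against a central finite difference of g_s itself.
def g_s(x):
    fi = ((x - b) ** 2).sum(axis=1)
    m = fi.max()
    return m + np.log(np.exp(s * (fi - m)).sum()) / s

eps = 1e-6
num = np.array([(g_s(x + eps * e) - g_s(x - eps * e)) / (2 * eps)
                for e in np.eye(3)])
assert np.allclose(grad, num, atol=1e-5)
```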

###### Lemma 3.

Considering the gradient ∇f_i(x) as a random vector and the Hessian ∇²f_i(x) as a random matrix, each having n possible realizations generated from the probability mass function corresponding to the vector p_s(x), the Hessian ∇²g_s(x) can be computed from the expectation of ∇²f_i(x) and the covariance matrix of ∇f_i(x) as follows:

 ∇²g_s(x) = s Σ_{p_s(x)}[ ∇f_i(x) ] + E_{p_s(x)}[ ∇²f_i(x) ],

where E_{p_s(x)}[·] and p_s(x) are defined as in Lemma 2 and the covariance matrix is given as

 Σ_{p_s(x)}[ ∇f_i(x) ] ≜ E_{p_s(x)}[ ∇f_i(x) ∇f_i(x)^T ] − E_{p_s(x)}[ ∇f_i(x) ] E_{p_s(x)}[ ∇f_i(x) ]^T.  (6)
###### Proof.

The result directly follows from taking further partial derivatives of the gradient in Lemma 2. ∎
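The identity of Lemma 3 can also be verified numerically. For the illustrative quadratics f_i(x) = ‖x − b_i‖² (our choice, with ∇f_i(x) = 2(x − b_i) and ∇²f_i(x) = 2I), the Hessian of g_s reduces to s·Cov plus 2I, which we check against a finite difference of the gradient:

```python
import numpy as np

# Numerical check of Lemma 3 for f_i(x) = ||x - b_i||^2, so that
# Hess g_s(x) = s * Cov_{p_s(x)}[grad f_i(x)] + E_{p_s(x)}[Hess f_i(x)].
rng = np.random.default_rng(1)
b = rng.standard_normal((6, 2))
x = np.array([0.4, -0.1])
s = 3.0

def weights(x):
    fi = ((x - b) ** 2).sum(axis=1)
    w = np.exp(s * (fi - fi.max()))
    return w / w.sum()

def grad_g(x):
    return (weights(x)[:, None] * 2 * (x - b)).sum(axis=0)

p = weights(x)
G = 2 * (x - b)                              # rows: grad f_i(x)
mean = p @ G
cov = (p[:, None] * G).T @ G - np.outer(mean, mean)   # covariance (6)
hess_formula = s * cov + 2 * np.eye(2)       # right-hand side of Lemma 3

eps = 1e-6
hess_num = np.array([(grad_g(x + eps * e) - grad_g(x - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
assert np.allclose(hess_formula, hess_num, atol=1e-4)
```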

In the following section, we explain our methodology for accelerating the convergence rate.

## III Accelerated Optimization of the Approximation

We utilize Nesterov’s accelerated gradient descent method for smooth and strongly-convex functions, for which more details are given in [12]. The algorithm is an iterative one, where the iterations are done in an alternating fashion. Starting with the initial argument pair x_1 = y_1, we have the following iterative relations for x_{t+1} and y_{t+1} with t ≥ 1:

 x_{t+1} = y_t − (1/β_s) ∇g_s(y_t),  (7)
 y_{t+1} = x_{t+1} + ( (√κ_s − 1)/(√κ_s + 1) ) ( x_{t+1} − x_t ),

with κ_s being the condition number of the Hessian in Lemma 5, which is computed as

 κ_s = β_s/α_s, for α_s I ⪯ ∇²g_s(x) ⪯ β_s I, for all x ∈ K_s,  (8)

where α_s and β_s are the lower and upper bounds on the eigenvalues of the Hessian ∇²g_s(x), respectively, the identity matrix of d×d dimensions is denoted as I, and K_s is a set guaranteed to include the convex-hull of all iterations x_t, y_t and the optimal point x* as defined in (5).

Generating the Hessian upper-bound (the smoothness parameter) β_s, and consequently the condition number κ_s, for the set K_s is sufficient, as in (8). The reason is twofold. Firstly, the optimality gap guarantee shown in the following as Lemma 4 depends upon upper-bounding the Hessian, via β_s, on the line segments pair-wise connecting the algorithm iterations (x_t, y_t) and the optimal point x*. All such segments are encapsulated by the convex-hull of x_t, y_t and the optimal point x*. Secondly, this convex-hull is itself a subset of K_s as previously defined.
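The update rules (7) can be sketched as follows; the driver function and the quadratic test objective are illustrative choices of ours, not the paper's Algorithm 1:

```python
import numpy as np

def nesterov_agd(grad, x1, alpha, beta, iters):
    """Accelerated gradient descent for an alpha-strongly-convex,
    beta-smooth objective, following the update rules in (7)."""
    kappa = beta / alpha
    momentum = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x1.copy(), x1.copy()          # x_1 = y_1
    for _ in range(iters):
        x_next = y - grad(y) / beta      # gradient step on y_t
        y = x_next + momentum * (x_next - x)  # momentum extrapolation
        x = x_next
    return x

# Illustrative run on a quadratic with a known minimizer.
A = np.diag([1.0, 10.0])                 # eigenvalues give alpha = 1, beta = 10
target = np.array([2.0, -3.0])
xt = nesterov_agd(lambda y: A @ (y - target), np.zeros(2), 1.0, 10.0, 200)
assert np.allclose(xt, target, atol=1e-6)
```

The exp(−(t−1)/√κ_s) contraction of Lemma 4 is what this momentum schedule buys over plain gradient descent, whose contraction degrades with κ_s rather than √κ_s.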

###### Lemma 4.

The following optimality gap is guaranteed for t ≥ 1:

 g_s(x_t) − g_s(x*) ≤ ( (α_s/2) ‖x_1 − x*‖² + g_s(x_1) − g_s(x*) ) exp( −(t−1)/√κ_s ),

where κ_s = β_s/α_s is the condition number, α_s and β_s are the strong-convexity and Lipschitz-smoothness parameters, respectively, and x* is the optimal point as defined in (5).

###### Proof.

The proof directly follows a similar formulation given in [12] under "the smooth and strong convex case" subsection of the section "Nesterov’s accelerated gradient descent". The only exception is that we do not replace the initial gap g_s(x_1) − g_s(x*) with an upper-bound and leave it as is. ∎

### III-A Parameters of Strong-Convexity and Lipschitz-Smoothness

To compute κ_s, we bound the eigenvalues of the Hessian ∇²g_s(x).

###### Lemma 5.

We can lower and upper bound the eigenvalues of the Hessian matrix ∇²g_s(x) for x ∈ K_s as follows:

 ( min_{1≤i≤n} α_{s,i} ) I ⪯ ∇²g_s(x) ⪯ ( s L_s² + max_{1≤i≤n} β_{s,i} ) I,

where α_{s,i} and β_{s,i} are further defined as the strong-convexity and smoothness parameters of the components f_i(·) from the "max" operator generating f(·), respectively, such that we have α_{s,i} I ⪯ ∇²f_i(x) ⪯ β_{s,i} I for x ∈ K_s. The parameter L_s is a common gradient norm bound for each f_i(·) such that ‖∇f_i(x)‖ ≤ L_s for each i and x ∈ K_s.

###### Proof.

We start with proving the lower-bound relation. Using Lemma 3, we obtain

 ∇²g_s(x) ⪰ E_{p_s(x)}[ ∇²f_i(x) ],

since the covariance matrix is lower-bounded by the all-zeros matrix 0, as it is a convex combination of rank-1 self-outer-product matrices (of the centered gradients) with their lowest eigenvalue being 0.

The expectation operation is linear. Thus, we can replace each ∇²f_i(x) with its lower-bound α_{s,i} I without affecting the semi-definite inequality ⪰. After taking the constant identity matrix outside of the expectation, we have the renewed relation

 ∇²g_s(x) ⪰ E_{p_s(x)}[ α_{s,i} ] I.

Since the expectation is a convex combination of the scalars α_{s,i}, we further lower bound by replacing the expectation with min_{1≤i≤n} α_{s,i}, which gives the lower-bound of this lemma.

For the upper-bound, we can generate

 ∇²g_s(x) ⪯ ( max_{1≤i≤n} β_{s,i} ) I + s Σ_{p_s(x)}[ ∇f_i(x) ]

using Lemma 3, by upper bounding each ∇²f_i(x) with β_{s,i} I and the resulting expectation with max_{1≤i≤n} β_{s,i}, similar to the lower-bound.

We can upper bound the covariance matrix by first noting that the eigenvalues of a d-dimensional outer-product vv^T are ‖v‖² and (d−1) zeros. Consequently, we upper bound it by replacing the negative outer-product, i.e. −E_{p_s(x)}[∇f_i(x)] E_{p_s(x)}[∇f_i(x)]^T, in (6) with the all-zeros matrix 0. Then, utilizing the linearity of expectation again, we get the final upper-bound by replacing the outer-product inside the expectation with ‖∇f_i(x)‖² I. The resulting upper-bound is given as

 ∇²g_s(x) ⪯ ( max_{1≤i≤n} β_{s,i} ) I + s E_{p_s(x)}[ ‖∇f_i(x)‖² ] I,

after taking the constant identity matrix outside of the expectation. We can replace the scalar E_{p_s(x)}[‖∇f_i(x)‖²] with a common squared gradient-norm bound L_s², which gives the upper-bound relation of this lemma, thus concluding the proof. ∎

### III-B Algorithm Description

We start at some point x_1 = y_1. We determine the "smoother" s needed to achieve the requested optimality gap δ and the set K_s such that it includes the optimal point x* and all future iterations x_t, y_t. We use the update rules in (7) after determining the common gradient norm bound L_s and the individual strong-convexity and Lipschitz-smoothness parameters α_{s,i} and β_{s,i}, respectively, via the set K_s. The condition number κ_s and the smoothness parameter β_s are calculated using the lower and upper bounds in Lemma 5. The pseudo-code is given in Algorithm 1. For this algorithm, we have the following performance result.
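The steps above can be sketched compactly. The implementation below is illustrative (the bounds α_s, β̃_s, L_s are user-supplied as in the setup here, and the hard-coded iteration count is a simplified stand-in for the precise bound of Theorem 1):

```python
import numpy as np

def minmax_smoothed(fs, grads, x1, delta, alpha, beta_tilde, L):
    """Minimize f = max_i f_i via the LogSumExp surrogate g_s (sketch of Algorithm 1).

    fs / grads: component functions f_i and their gradients.
    alpha, beta_tilde, L: strong-convexity, component-smoothness and
    gradient-norm bounds, assumed valid over the region visited.
    """
    n = len(fs)
    s = 2.0 * np.log(n) / delta                  # smoother, s = 2 log(n)/delta
    beta = s * L ** 2 + beta_tilde               # smoothness bound from Lemma 5
    kappa = beta / alpha                         # condition number, Eq. (8)
    mom = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    iters = int(np.ceil(np.sqrt(kappa) * np.log(10.0 / delta))) + 1

    def grad_g(x):
        fi = np.array([f(x) for f in fs])
        w = np.exp(s * (fi - fi.max()))          # stable softmax weights
        p = w / w.sum()                          # Definition 1
        return sum(pi * g(x) for pi, g in zip(p, grads))

    x = y = np.asarray(x1, dtype=float)
    for _ in range(iters):                       # update rules (7)
        x_next = y - grad_g(y) / beta
        y = x_next + mom * (x_next - x)
        x = x_next
    return x

# Toy usage: f(x) = max((x-1)^2, (x+1)^2) is minimized at x = 0.
fs = [lambda x: (x[0] - 1) ** 2, lambda x: (x[0] + 1) ** 2]
grads = [lambda x: np.array([2 * (x[0] - 1)]),
         lambda x: np.array([2 * (x[0] + 1)])]
xt = minmax_smoothed(fs, grads, np.array([2.0]), delta=0.01,
                     alpha=2.0, beta_tilde=2.0, L=6.0)
assert abs(xt[0]) < 0.1
```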

###### Theorem 1.

We run Algorithm 1 for a given optimality gap guarantee δ. Then, we achieve the gap after sufficiently many iterations t such that:

 t ∈ O( √(log n / δ) log(1/δ) ), since t = 1 + √( (2 L_s² log n)/(δ α_s) + β̃_s/α_s ) log( (1/δ)( α_s D_s² + 2 L_s D_s ) ),

where O(·) is the big-O notation for asymptotic upper-bounding, n is the number of functions contributing to the max operation resulting in f(·), and L_s is the common gradient norm bound for each component function in the max operator such that ‖∇f_i(x)‖ ≤ L_s for all i and x ∈ K_s. α_s is the strong-convexity parameter of the approximation g_s(·), β̃_s is the pseudo-smoothness parameter upper bounding the matrix E_{p_s(x)}[∇²f_i(x)], and D_s = ‖x_1 − x*‖ is the unknown initial distance between x_1 and x*.

###### Proof.

From Lemma 4, we see that a lower κ_s results in faster convergence for a fixed optimality gap. Without further information on the gradient and Hessian bounds, we need to lower the "smoother" s for a lower β_s. However, the "smoothing" regret (log n)/s from Corollary 1 works in the opposite direction. Consequently, we equate both the optimality gap from the smooth approximation and the "smoothing" regret to δ/2. This results in s = 2 log n / δ, with n being the number of functions contributing to the max operation, and g_s(·) is generated accordingly. Immediately, we have the "smoothing" regret in Corollary 1 as δ/2. Then, we equate the gap from g_s(·) to δ/2 using the upper-bound in Lemma 4. Afterwards, we replace the condition number κ_s in accordance with (8) after calculating the strong-convexity and smoothness parameters α_s and β_s via Lemma 5. Finally, we upper bound the initial smooth approximation gap g_s(x_1) − g_s(x*) with L_s D_s using the convexity relation g_s(x_1) − g_s(x*) ≤ ∇g_s(x_1)^T (x_1 − x*), and arrive at the result of the theorem. ∎

#### III-B1 Computational Cost of the Algorithm

###### Corollary 2.

For an optimality gap δ, the computation time T needed is such that T ∈ Õ(nd/√δ) for an arbitrarily small δ > 0. More specifically:

 T ∈ O( n √(log n / δ) log(1/δ) ( c d + log(1/δ) + log log n ) ),

where c is the average cost of calculating a partial derivative ∂f_i(x)/∂x_j for any i, j, n is the number of functions contributing to f(·) and d is the dimension of the domain of the f_i(·)'s.

###### Proof.

We need O(√(log n/δ) log(1/δ)) iterations, as shown in Theorem 1. We observe that each iteration of the while-loop in Algorithm 1 requires nd partial derivative calculations. Due to the computation of the probability vector p_s(x) with respect to Definition 1, each iteration also requires a total of n exponentiations to the power s f_i(x) when computing g_s(·). Each such exponentiation has an additional computational cost of O(log s) = O(log(1/δ) + log log n). The combination of these costs gives the corollary. ∎

#### III-B2 Online Version of the Algorithm (without Specifying δ)

###### Corollary 3.

We can achieve the time-complexity in Corollary 2, which is of the form Õ(nd/√δ), in an online fashion with no requested optimality gap guarantee δ, where Õ(·) is the soft-O notation ignoring logarithmic factors compared to big-O.

###### Proof.

We initialize with some δ_0 and run Algorithm 1 with δ_0 as the optimality guarantee. Then, after sufficient iterations to achieve the requested δ_0, we restart Algorithm 1 with the halved guarantee δ_0/2, and repeat non-stop, halving the guarantee at each restart.

For δ such that δ = 2^{−m} δ_0 for some integer m, the total exhausted time can be upper-bounded as follows, using the fact that the per-run time is monotonically increasing in the reciprocal of the guarantee and the running guarantee is lower-bounded by δ/2:

 T ∈ O( n ( Σ_{k=0}^{m} √( log n / (2^{−k} δ_0) ) ) log(2/δ) ( c d + log(2/δ) + log log n ) ).

Since the geometric sum is dominated by its last term, this bound translates to the same bound as in Corollary 2. ∎

In the next section, we shall investigate an interesting specific application for the general accelerated min-max optimization via smooth approximation, which we have introduced.

## IV (1+ε)-Approximation for the Problem of Minimal Bounding Sphere

Let us suppose we have n points, each located at b_i for i ∈ {1, …, n}, in the d-dimensional space R^d. Our minimization target f(·) is such that:

 f(x) = max_{1≤i≤n} ‖x − b_i‖².  (9)

This is the so-called minimal bounding sphere problem: it finds an optimal point x*, which, together with √(f(x*)) from (9), defines the center and radius of a ball enclosing all of the points b_i with the smallest possible radius.

The optimal point x* is defined as:

 x* ≜ argmin_{x∈R^d} f(x).

Since x* minimizes the maximum Euclidean distance to a point b_i, we know that x* belongs to the convex-hull of the points b_i, since we can always decrease these distances by moving towards the convex-hull.

We shall utilize Algorithm 1 with the initial point x_1 belonging to this convex-hull, e.g. x_1 = (1/n) Σ_{i=1}^{n} b_i, the arithmetic mean of the points.

Before running Algorithm 1, we determine the strong-convexity and Lipschitz-smoothness parameters, which are α_{s,i} = β_{s,i} = 2 for all i in this particular problem. Consequently, the overall strong-convexity and pseudo-smoothness parameters are also α_s = β̃_s = 2, respectively. α_s reveals itself after combining Lemma 5 with (8), and β̃_s is defined in Theorem 1 as the maximum smoothness parameter from the individual functions. What only remains to be set in Algorithm 1 is the gradient norm upper-bound L_s, which inherently includes determining the set K_s guaranteed to include the optimal point x* and all iterations x_t, y_t.
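Putting the pieces together for this problem, the following is an illustrative sketch of Algorithm 1 specialized to the minimal bounding sphere; the constants follow the choices of this section and Section IV-B (s = 2 log(n)/δ, the gradient bound of Lemma 7, δ = ε f(x_1)/2, and the iteration count of Theorem 2), but the function itself is our own rendering, not the paper's pseudo-code:

```python
import numpy as np

def bounding_sphere(b, eps):
    """(1+eps)-approximate minimal bounding sphere of rows of b (n x d)."""
    n, d = b.shape
    x1 = b.mean(axis=0)                        # centroid lies in the convex hull
    f1 = ((x1 - b) ** 2).sum(axis=1).max()     # f(x_1) per (9)
    delta = eps * f1 / 2.0                     # gap sufficient for radius (1+eps)R
    s = 2.0 * np.log(n) / delta                # smoother
    L = 6.0 * np.sqrt(5.0 * f1 + delta / 2.0)  # gradient bound, Lemma 7
    beta = s * L ** 2 + 2.0                    # Lemma 5 with beta_{s,i} = 2
    kappa = beta / 2.0                         # alpha_s = 2
    mom = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    iters = int(1 + np.sqrt(kappa) * np.log(1.0 + 4.0 / eps)) + 1

    def grad_g(x):
        fi = ((x - b) ** 2).sum(axis=1)
        w = np.exp(s * (fi - fi.max()))        # stable softmax weights
        p = w / w.sum()
        return 2.0 * (x - p @ b)               # expectation form, Eq. (12)

    x = y = x1
    for _ in range(iters):                     # update rules (7)
        x_next = y - grad_g(y) / beta
        y = x_next + mom * (x_next - x)
        x = x_next
    radius = np.sqrt(((x - b) ** 2).sum(axis=1).max())
    return x, radius

# The points (0,0), (2,0), (1,0.5) have minimal sphere center (1,0), radius 1.
center, r = bounding_sphere(np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.5]]), 0.05)
assert 1.0 - 1e-6 <= r <= 1.05
```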

### IV-A Gradient Norm Bound for Minimal Bounding Sphere

Assume the minimal bounding sphere is such that the maximum distance (i.e. the radius) between the optimal point x* and one of the points b_i is R.

###### Lemma 6.

After setting the initial point x_1 = (1/n) Σ_{i=1}^{n} b_i and computing f(x_1) using (9), we have the following bounds on the minimal bounding sphere radius R:

 √(f(x_1))/2 ≤ R ≤ √(f(x_1)).
###### Proof.

The upper-bound is trivial since x_1 is not necessarily optimal. The lower-bound comes from the fact that x_1 belongs to the convex-hull of the points b_i and, consequently, √(f(x_1)) cannot exceed the diameter 2R of the minimal bounding sphere, which encloses all points and, hence, their convex-hull. ∎
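Lemma 6 can be checked on a configuration whose optimal sphere is known in closed form (an illustrative example of ours):

```python
import numpy as np

# The points (0,0), (2,0), (1,0.5) have minimal bounding sphere center (1,0)
# and radius 1: the farthest pair is 2 apart, and the third point lies inside.
b = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.5]])
R = 1.0
x1 = b.mean(axis=0)                       # centroid initialization
f_x1 = ((x1 - b) ** 2).sum(axis=1).max()  # f(x_1) per (9)

# Lemma 6: sqrt(f(x_1))/2 <= R <= sqrt(f(x_1)).
assert np.sqrt(f_x1) / 2.0 <= R <= np.sqrt(f_x1)
```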

###### Lemma 7.

The gradient norm upper-bound L_s is such that:

 L_s = 6 √( 5 f(x_1) + δ/2 ),

where x_1 is the initial point of Algorithm 1 and δ is the requested optimality gap.

###### Proof.

In accordance with this specific problem, we can further upper bound the smooth approximation optimality guarantee in Lemma 4 by first upper bounding the multiplicand in parentheses on the greater side of the inequality, since we have an exponential multiplier, i.e. exp(−(t−1)/√κ_s), which is guaranteed to be non-negative. After also upper bounding this exponential multiplier by 1, since the upper bound of the multiplicand turns out to be always nonnegative, we obtain the following result:

 g_s(x_t) ≤ 5R² + (log n)/s, for all t ≥ 1.  (10)

This upper bounding takes place by replacing the quantities in Lemma 4 with their corresponding bounds using the facts α_s = 2, ‖x_1 − x*‖ ≤ R, and g_s(x_1) ≤ 4R² + (log n)/s, and noting that the g_s(x*) terms cancel. The distance inequality ‖x_1 − x*‖ ≤ R is due to the fact that the minimal bounding sphere has its center at x*, and x_1 is contained inside the said sphere since it is encapsulated by the convex-hull of all the points b_i. Similarly, the inequality g_s(x_1) ≤ 4R² + (log n)/s results from Lemma 1 and f(x_1) ≤ 4R², since x_1 is again contained in the same minimal bounding sphere with diameter 2R.

Then, by Lemma 1, (10), and setting s = 2 log n / δ as in Algorithm 1 for a given optimality gap guarantee δ, we get

 f(x_t) ≤ 5R² + δ/2, for all t ≥ 1.  (11)

Regarding the gradients for the minimal bounding sphere problem, using the expectation form of the gradient in Lemma 2 and incorporating the function definition in (9), we have:

 ∇g_s(x) = 2( x − E_{p_s(x)}[ b_i ] ).  (12)

Combining (11) and (12), we have a bound on the gradient norms of the smoothing function at the points x_t as

 ‖∇g_s(x_t)‖ ≤ 2 √( 5R² + δ/2 ),  (13)

since we can claim ‖x_t − E_{p_s(x_t)}[b_i]‖ ≤ √(f(x_t)), which results from the distance between x_t and some weighted average of the points b_i, specifically E_{p_s(x_t)}[b_i], being at most the distance between x_t and the point farthest from it, i.e. √(f(x_t)).

Let us next investigate the gradients at y_t, which are calculated on Line 1 of Algorithm 1.

For t = 1, its norm is upper-bounded by 4R, since the diameter of the minimal bounding sphere is 2R and the sphere includes the initialization y_1 = x_1. For t > 1, combining (12) and Line 1 from Algorithm 1, we have

 (1/2) ∇g_s(y_t) = x_t + ( 1 − 2(√κ_s + 1)^{−1} ) ( x_t − x_{t−1} ) − E_{p_s(y_t)}[ b_i ].

Using the triangle inequality and upper bounding the negative terms with 0,

 ‖∇g_s(y_t)‖ ≤ 4 ‖x_t − E_{p_s(y_t)}[b_i]‖ + 2 ‖x_{t−1} − E_{p_s(y_t)}[b_i]‖.

Finally, using (11), we have

 ‖∇g_s(y_t)‖ ≤ 6 √( 5R² + δ/2 ),  (14)

as we can claim ‖x_t − E_{p_s(y_t)}[b_i]‖ ≤ √(f(x_t)) like before.

With (13) and (14), we have bounded the gradient norms at all iterations x_t and y_t. We take an arbitrary point x belonging to the convex-hull of the iterations x_t, y_t and the optimal point x*. As discussed in Section III, it is sufficient to generate a gradient norm upper bound for this arbitrary point to obtain L_s. Since x is a convex combination of the x_t, the y_t and x*, we decompose it into its individual parts and insert that version of x into (12). Using the triangle inequality and the claim ‖z − E_{p_s(x)}[b_i]‖ ≤ √(f(z)) for any point z in this convex-hull, the common gradient norm bound turns out to be the maximum of the bounds (13) and (14), since the gradient at the optimal point is 0. Consequently, noting R² ≤ f(x_1) from Lemma 6, we can set L_s = 6√( 5 f(x_1) + δ/2 ). ∎

### IV-B Convergence Result

Before examining the convergence result, we note that, for the minimal bounding sphere problem, the (1+ε)-approximation translates into converging to a bounding sphere with radius (1+ε)R. Consequently, it suffices to have, for some t:

 f(x_t) − f(x*) ≤ (1+ε)² R² − R² = (2ε + ε²) R²,

meaning any requested optimality gap δ ≤ 2εR² ≤ (2ε + ε²)R² suffices for the positive ε.

###### Theorem 2.

For the minimal bounding sphere problem, we can generate an approximate solution by achieving a bounding sphere with radius (1+ε)R for an arbitrarily small positive ε using Algorithm 1. After setting α_{s,i} = β_{s,i} = 2 for all i, L_s = 6√(5f(x_1) + δ/2) and δ = ε f(x_1)/2, the overall computational complexity and the total number of iterations by the algorithm are

 T ∈ Õ( n d √(1/ε) ), and t = 1 + log(1 + 4/ε) √( 1 + 18 (1 + 20/ε) log n ),

where Õ(·) is the soft-O notation, which ignores the logarithmic additives and multipliers.

###### Proof.

We plug α_{s,i} = β_{s,i} = 2 for all i and L_s = 6√(5f(x_1) + δ/2) into the result of Theorem 1 regarding the number of iterations required. We can upper bound the right side of the equality for t before using it here, since a larger t can only provide further guarantees, as shown in Lemma 4. We also plug in the initial distance D_s ≤ R by the definition of R, the radius of the minimal bounding sphere, and the selection of x_1 from the convex-hull of the points b_i. We note that R ≤ √(f(x_1)) from Lemma 6 and bound D_s with √(f(x_1)). Instead of upper bounding g_s(x_1) − g_s(x*) by L_s D_s, as previously done in Theorem 1, we can use the upper bound f(x_1) − R² + δ/2 resulting from Corollary 1 and the setting s = 2 log n / δ, since we have f(x_1) − f(x*) = f(x_1) − R² due to f(x*) = R². Lastly, we upper bound the reciprocal of the optimality gap, i.e. 1/δ = 2/(ε f(x_1)), with 2/(ε R²) since R² ≤ f(x_1). ∎