Continuous optimization

09/03/2019
by Xiaopeng Luo, et al.

Sufficient conditions for the existence of efficient algorithms are established by introducing the concept of contractility for continuous optimization. All possible continuous problems are then divided into three categories: contractile in logarithmic time, contractile in polynomial time, or noncontractile. For the first two, we propose an efficient contracting algorithm that finds the set of all global minimizers with a theoretical guarantee of linear convergence; for the last, we discuss the difficulties that arise when the proposed algorithm is applied.





Acknowledgments

Author contributions: X.L. performed the theoretical analysis and designed the algorithm; X.X. implemented the program, ran the experiments, and produced all the plots; X.L. wrote the manuscript. All authors discussed and commented on the experiments and the manuscript. Competing interests: The authors declare no competing financial interests.

Supplementary materials

Section S1. Method
Section S2. Monotonic convergence
Section S3. Hierarchical low frequency dominant functions
Section S4. Category I: contractile in logarithmic time
Section S5. Category II: contractile in polynomial time
Section S6. Category III: noncontractile
Fig. S1
References
Data S1

Section S1. Method

For the median-type contracting algorithm, the contracting sequence of length , , is established as follows: define and

(S1)

where are uniformly distributed over with the sample size , the relevant data values with the current and , and is an approximation of w.r.t. the data pairs such that

(S2)

here, is a given threshold, is the percentage of falling into , and is the percentage of samples uniformly distributed over falling into .

Two key parts of the algorithm are (i) the method for constructing a model to fit the given data on , and (ii) the sampling strategy for generating further uniform samples over from some known interior points. In this work, we use Gaussian process (GP) regression to construct a model fitting the data pairs. It has the merit of flexible expressivity because its hyperparameters are automatically tuned by likelihood maximization. See Data S1 for technical details and (1) for more.
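To make the model-fitting step concrete, here is a minimal numpy sketch of GP regression with a squared-exponential kernel in which the length scale is tuned by maximizing the log marginal likelihood over a small grid; the objective f, the length-scale grid, and the noise level are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

# Sketch of the GP model-fitting step: hyperparameters (here, a single
# length scale) are chosen by likelihood maximization, as in Section S1.
rng = np.random.default_rng(0)

def f(x):
    return np.sin(3.0 * x) + 0.5 * x**2   # hypothetical test objective

X = rng.uniform(-2.0, 2.0, 30)            # uniform samples over the domain
y = f(X)

def kernel(a, b, ell):
    # squared-exponential (RBF) kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def log_marginal_likelihood(ell, noise=1e-2):
    K = kernel(X, X, ell) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2.0 * np.pi))

ells = np.linspace(0.1, 2.0, 20)
best = max(ells, key=log_marginal_likelihood)   # likelihood maximization

# posterior mean prediction with the tuned length scale
K = kernel(X, X, best) + 1e-2 * np.eye(len(X))
Xq = np.linspace(-2.0, 2.0, 200)
mean = kernel(Xq, X, best) @ np.linalg.solve(K, y)
err = np.max(np.abs(mean - f(Xq)))
```

In practice a GP library would optimize all hyperparameters by gradient ascent on the same marginal likelihood; the grid search above only sketches the idea.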

Now consider the sampling strategy. Suppose are given points and is a union of sample sets , where contains samples from a normal distribution with mean and variance (sufficiently large to cover ). Further, let . We now add a new point from to , and this point should preferably fill in the gaps in the distribution of . This subsequent point can be determined as

(S3)

One can generate uniform samples in by applying the above step recursively; see Fig. S1 for an example illustrating how the method performs. This recursive algorithm is closely related to Voronoi diagrams (2), because the added point always lies in the largest Voronoi cell of provided the size of is large enough.

Figure S1: Two-dimensional illustration of the performance of the recursive algorithm given in Eq. S3. (A) The domain is shown as the interior of the circle; the uniform samples in , that is, the first 50 points of the -dimensional Halton sequence, are shown as blue dots, and those samples falling into , denoted by , are shown as black asterisks. (B) The candidate set , a union of sample sets from a normal distribution centered on each , is shown as circled dots. (C) Samples added recursively from are shown as blue circled dots, so successive points at any stage "know" how to fill in the gaps in the previously generated distribution. See Data S1 for technical details.
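The recursive gap-filling step can be sketched as follows: candidates are drawn from normal clouds centered on the known points, and each new point is the candidate farthest from the current set, a maximin choice that tends to land in the largest Voronoi cell. The constants (candidate count, variance, seed) and the 2-D example are illustrative stand-ins for the elided quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gap_filling_points(known, n_new, n_cand=200, sigma=1.0):
    """Recursively add points that fill the gaps in the current set."""
    S = [np.asarray(p, dtype=float) for p in known]
    for _ in range(n_new):
        # candidate set: union of normal samples centred on the known points
        C = np.concatenate([rng.normal(p, sigma, size=(n_cand, len(p)))
                            for p in known])
        # distance from each candidate to its nearest current point
        d = np.min(np.linalg.norm(C[:, None, :] - np.asarray(S)[None, :, :],
                                  axis=-1), axis=1)
        S.append(C[np.argmax(d)])   # farthest candidate fills the largest gap
    return np.asarray(S)

known = np.array([[0.0, 0.0], [1.0, 0.0]])
pts = add_gap_filling_points(known, n_new=8)
```

With a dense enough candidate set, the selected point approximates the center of the largest Voronoi cell, matching the Voronoi connection noted above.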

Section S2. Monotonic convergence

In this section we explain the monotonic convergence: if is not a constant on and the sequence is generated by Eqs. 1 and 2 (not limited to the median-type algorithm), then for every .

The proof of monotonic convergence proceeds by induction on . First, the case is trivial since is not a constant on . Assume that ; then , where

(S4)

We now show that . Let ; then it follows from Eq. S2 that . Hence, for any , we have , which can be further rewritten as

(S5)

and equivalently,

(S6)

thus, , that is, ; meanwhile, since .
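The monotonicity argument can be illustrated numerically for the median-type case: contracting the domain to the sublevel set below the current median can only decrease the median, and the global minimizer always survives the contraction. The 1-D objective and its minimizer 0.3 are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return (x - 0.3)**2          # hypothetical objective, minimiser at 0.3

lo, hi = -1.0, 1.0               # current interval (1-D domain for simplicity)
medians = []
for _ in range(6):
    xs = rng.uniform(lo, hi, 2000)
    m = np.median(f(xs))
    medians.append(m)
    keep = xs[f(xs) <= m]        # samples in the sublevel set {f <= m}
    lo, hi = keep.min(), keep.max()  # contracted interval containing them
```

Each contraction keeps the half of the samples below the median, so the sequence of medians is nonincreasing and the interval keeps shrinking around the minimizer.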

Section S3. Hierarchical low frequency dominant functions

Without loss of generality, assume hereafter that on . We show that if is a -type hierarchical low frequency dominant function, then there exists a class of approximations such that the corresponding error bounds are reduced by a factor of every time the number of function evaluations doubles.

For any nonnegative integer and , we define the th -bandlimited function by

(S7)

where is the inner product of and ; then, according to the Nyquist-Shannon sampling theorem, can be reconstructed from its samples at a sampling density of . Further, let , where and

(S8)

then and the condition Eq. 3 can be rewritten as

(S9)

So it follows that

(S10)

Hence the error bounds are reduced by a factor of every time the number of function evaluations doubles, that is, every time the corresponding sampling density increases from to . Furthermore, by noting that

(S11)

the error bound can also be rewritten as

(S12)

Thus the error bounds are reduced by a factor of every time the sampling density doubles. In the next section we further consider the number of samples on the compact sets.
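The low-frequency-dominant mechanism can be demonstrated on a toy example: for a function whose Fourier coefficients decay geometrically, the error of the bandlimited truncation shrinks by a roughly fixed factor each time the bandwidth (equivalently, the required sampling density) doubles. The spectrum 2**(-k) below is an illustrative assumption, not the paper's class.

```python
import numpy as np

# Truncated Fourier (bandlimited) approximation of a function with a
# geometrically decaying spectrum; the truncation error is dominated by
# the first neglected coefficients.
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
coeffs = [2.0**(-k) for k in range(1, 11)]
f_vals = sum(a * np.cos(k * t) for k, a in enumerate(coeffs, start=1))

def truncation_error(K):
    # sup-norm error of the K-bandlimited approximation
    fK = sum(a * np.cos(k * t) for k, a in enumerate(coeffs[:K], start=1))
    return np.max(np.abs(f_vals - fK))

errs = [truncation_error(K) for K in (2, 4, 8)]
```

Here the error at bandwidth K is essentially the tail sum of the spectrum, so doubling K multiplies the error by a fixed factor, mirroring the doubling argument above.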

Section S4. Category I: contractile in logarithmic time

For functions in Category I, the key to logarithmic-time efficiency rests on two facts: (i) the median of on is reduced by a factor of as increases, and (ii) there is an upper bound on the number of function evaluations on every such that a certain approximation can be constructed from these samples with an error bound less than or equal to the median of on .

Assume that the sequence is defined recursively by and

(S13)

where is a uniformly distributed random variable over and is the unique integer such that

(S14)

Then it is clear that

(S15)

moreover, since is a -type tempered function on ,

(S16)

Further, since is a -type hierarchical low frequency dominant function, it follows that for every ,

(S17)

Then, according to the convergence shown in Section 3, is contained in every ; in addition, since is a critical regular function on , the Fourier approximation can be reconstructed from its samples at a sampling density of .

The restriction of to , denoted by , is closely related to the threshold value and the prolate spheroidal functions, which are the eigenfunctions of the time and frequency limiting operator (3-7). More specifically, if we denote a prolate series up to and including the th term by

(S18)

then follows by noting that has a Fourier transform supported in . Moreover, it is important to mention that the error bound decays at a super-exponential rate as soon as reaches or goes beyond the plunge region around the threshold value (8, 9).

Therefore, for every there exist and such that can be constructed from samples of over and

(S19)

and then,

(S20)

further, for a fixed accuracy , there exists a such that

(S21)

hence, after median-type contractions, one gets the approximate solution set with

(S22)

and the total number of function evaluations is less than

(S23)

where is the smallest integer greater than or equal to . If the computational complexity of is , then the time complexity is less than

(S24)

that is, logarithmic time for any desired accuracy .
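A back-of-the-envelope reading of this bound: if each contraction reduces the relevant quantity by a fixed factor rho < 1 using at most a fixed number of function evaluations (rho and the per-step count stand in for the elided constants), then reaching accuracy eps takes O(log(1/eps)) contractions.

```python
import math

def contractions_needed(range0, eps, rho):
    # number of factor-rho contractions to go from range0 down to eps
    return math.ceil(math.log(eps / range0) / math.log(rho))

def total_evaluations(range0, eps, rho, n_evals_per_step):
    # bounded work per contraction => total cost is logarithmic in 1/eps
    return n_evals_per_step * contractions_needed(range0, eps, rho)
```

For example, halving the range each step (rho = 0.5) from 1.0 down to 1e-6 takes 20 contractions; squaring the accuracy requirement merely doubles the count, which is the logarithmic-time behavior claimed for Category I.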

Section S5. Category II: contractile in polynomial time

In the above discussion, Eq. 4 helps us control the bound on the number of function evaluations as a constant in each contraction; in fact, even without Eq. 4, the bound can still be controlled in a probabilistic sense. In the following we therefore first establish a weaker version of Eq. 4: if is a compact set and is continuous and not a constant on , then for any , there must exist a such that

(S25)

holds for every with probability at least , where is a uniformly distributed random variable on . By the definition in Eq. S13, Eq. S25 is equivalent to saying that the upper bound of is less than of the median of on with probability at least .

Suppose is a uniformly distributed random variable on , , and . Under the assumption of , since is continuous and not a constant on , we have and , and there exists a such that . First, the distance between the median and the mean is bounded by the standard deviation (10), i.e.,

(S26)

and it follows from Chebyshev's inequality that

(S27)

So it holds that

(S28)

with probability at least , that is,

(S29)

where .
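The first ingredient of this argument, the inequality from (10) that the median never lies more than one standard deviation from the mean, is easy to check numerically; the three distributions below are illustrative choices.

```python
import numpy as np

# Empirical check of Mallows' inequality: |median - mean| <= std,
# for a few qualitatively different distributions.
rng = np.random.default_rng(2)
samples = [rng.exponential(2.0, 10000),   # skewed
           rng.normal(0.0, 3.0, 10000),   # symmetric
           rng.uniform(-1.0, 5.0, 10000)] # bounded
gaps = [(abs(np.median(s) - np.mean(s)), np.std(s)) for s in samples]
```

Combined with Chebyshev's inequality, this is what converts the deterministic evaluation bound into one that holds with the stated probability.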

Let us now see what happens after replacing condition Eq. 4 with Eq. S25. Assume that

(S30)

If is predictable in polynomial time, then there exist and such that Eq. 3 holds; similarly, for every , it holds that

(S31)

with probability at least . Further, according to the convergence shown in Section 3, is contained in every with probability at least ; in addition, , and the Fourier approximation can be reconstructed from its samples at a sampling density of . According to the discussion in the previous section, for every there exist and such that can be constructed from fewer than samples of over and

(S32)

further, for a fixed accuracy , there exists a such that

(S33)

hence, after contractions, one can obtain the approximate solution set with

(S34)

and clearly, the total number of function evaluations is much less than

(S35)

similarly, if the computational complexity of is , then the time complexity is much less than

(S36)

that is, polynomial time for any desired accuracy . It is worth noting that the algorithm is not limited to the median type.

Section S6. Category III: noncontractile

For a noncontractile , the contracting algorithm degenerates entirely into a model-based approach. In the following we consider the time complexities for sufficiently smooth functions, Hölder continuous functions, and non-Hölder continuous functions, respectively.

Suppose that are uniformly distributed over with sample size , the relevant data values are , and interpolates on . For a sample set over , we denote the associated fill distance by

(S37)

then for , there exists such that

(S38)

or, , where ; see Lemma 12 of (11).
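The fill distance can be sketched directly: it is the largest distance from any point of the domain to its nearest sample, approximated here on a dense grid over the illustrative domain [0, 1]^2 (the domain, sample size, and grid resolution are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(50, 2))      # uniform samples over the domain

# dense evaluation grid standing in for the continuous domain
g = np.linspace(0.0, 1.0, 60)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)

# distance from each grid point to its nearest sample
nearest = np.min(np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=-1),
                 axis=1)
h_fill = np.max(nearest)                     # fill distance (grid estimate)
```

Equivalently, h_fill is the radius of the largest sample-free ball in the domain, which is the quantity the convergence rates below are expressed in.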

If with on a bounded domain , then there exists a band-limited interpolant (see Lemma 3.9 of (12)) such that for any , it holds that

(S39)

where , see Theorem 3.10 of (12); then for , the time complexity is

(S40)

where the complexity of is . This result is similar to that given in (11).

If satisfies a -Hölder condition, i.e., there exist and such that , then there exists a nearest-neighbor interpolant, closely related to the Voronoi diagram of , such that for any , it holds that

(S41)

then similarly, for , the time complexity is

(S42)
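The nearest-neighbor interpolant and its Hölder error bound can be sketched in 1-D: s(x) takes the value of the nearest sample, so the error is at most L * h**alpha with h the fill distance. The function sqrt(x), which is 1/2-Hölder with constant 1 on [0, 1], is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.sort(rng.uniform(0.0, 1.0, 200))   # sample set
fX = np.sqrt(X)                           # |sqrt(x)-sqrt(y)| <= |x-y|**0.5

def nn_interp(xq):
    # 1-D nearest-neighbour interpolant: value of the closest sample
    idx = np.argmin(np.abs(X[:, None] - np.atleast_1d(xq)[None, :]), axis=0)
    return fX[idx]

xs = np.linspace(0.0, 1.0, 1000)
h = np.max(np.min(np.abs(xs[:, None] - X[None, :]), axis=1))  # fill distance
err = np.max(np.abs(nn_interp(xs) - np.sqrt(xs)))
```

Since every query point is within h of a sample, the Hölder condition gives err <= 1 * h**0.5 here, matching the form of the bound in Eq. S41.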

Further, if does not satisfy any Hölder condition, then the time complexity is larger than for all , so the algorithm cannot be completed in polynomial time.

References and Notes

  1. C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning (MIT Press, Cambridge, MA, 2006).

  2. F. Aurenhammer, Voronoi diagrams - a survey of a fundamental geometric data structure. ACM Comput. Surv. 23, 345-405 (1991). doi: 10.1145/116873.116880

  3. D. Slepian, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, I. Bell System Tech. J. 40, 43-64 (1961). doi: 10.1002/j.1538-7305.1961.tb03976.x

  4. H. J. Landau, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, II. Bell System Tech. J. 40, 65-84 (1961). doi: 10.1002/j.1538-7305.1961.tb03977.x

  5. H. J. Landau, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, III. Bell System Tech. J. 41, 1295-1336 (1962). doi: 10.1002/j.1538-7305.1962.tb03279.x

  6. D. Slepian, Prolate spheroidal wave functions, Fourier analysis and uncertainty, IV. Bell System Tech. J. 43, 3009-3057 (1964). doi: 10.1002/j.1538-7305.1964.tb01037.x

  7. D. Slepian, On bandwidth. Proc. IEEE 64, 292-300 (1976). doi: 10.1109/PROC.1976.10110

  8. J. P. Boyd, Approximation of an analytic function on a finite real interval by a bandlimited function and conjectures on properties of prolate spheroidal functions. Appl. Comput. Harmon. Anal. 25, 168-176 (2003). doi: 10.1016/S1063-5203(03)00048-4

  9. A. Bonami, A. Karoui, Spectral decay of time and frequency limiting operator. Appl. Comput. Harmon. Anal. 42, 1-20 (2017). doi: 10.1016/j.acha.2015.05.003

  10. C. Mallows, Another comment on O’Cinneide. The American Statistician 45, 257 (1991). doi: 10.1080/00031305.1991.10475815

  11. A. D. Bull, Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. 12, 2879-2904 (2011).

  12. F. J. Narcowich, J. D. Ward, H. Wendland, Sobolev bounds on functions with scattered zeros, with applications to radial basis function surface fitting. Math. Comp. 74, 743-763 (2005). doi: 10.1090/S0025-5718-04-01708-9