Java Continuous Optimization Library
Sufficient conditions for the existence of efficient algorithms are established by introducing the concept of contractility for continuous optimization. All possible continuous problems are then divided into three categories: contractile in logarithmic time, contractile in polynomial time, or noncontractile. For the first two, we propose an efficient contracting algorithm that finds the set of all global minimizers with a theoretical guarantee of linear convergence; for the last, we discuss the difficulties that may arise when the proposed algorithm is applied.
Author contributions: X.L. performed the theoretical analysis and designed the algorithm; X.X. implemented the program, ran the experiments, and produced all the plots; X.L. wrote the manuscript. All authors discussed and commented on the experiments and the manuscript. Competing interests: The authors declare no competing financial interests.
Section S1. Method
Section S2. Monotonic convergence
Section S3. Hierarchical low frequency dominant functions
Section S4. Category I: contractile in logarithmic time
Section S5. Category II: contractile in polynomial time
Section S6. Category III: noncontractile
For the median-type contracting algorithm, the contracting sequence of length , , is established as follows: define and
where are uniformly distributed over with the sample size , the relevant data values with the current and , and is an approximation of w.r.t. the data pairs such that
here, is a given threshold, is the percentage of falling into , and is the percentage of samples uniformly distributed over falling into .
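As a minimal illustration of a median-type contraction, the sketch below repeatedly samples the current region uniformly and shrinks it to the samples at or below the sample median; the quadratic test function, the names, and the sample sizes are illustrative assumptions, and the paper's model-based refinement is replaced here by a simple bounding interval of the retained samples.

```python
import random
import statistics

random.seed(0)

def median_contraction(f, lo, hi, n=400, iters=15):
    """Minimal 1-D sketch: repeatedly sample uniformly over the current
    interval, then shrink it to the bounding interval of the samples
    whose value is at most the sample median."""
    for _ in range(iters):
        xs = [random.uniform(lo, hi) for _ in range(n)]
        med = statistics.median(f(x) for x in xs)
        keep = [x for x in xs if f(x) <= med]
        lo, hi = min(keep), max(keep)
    return lo, hi

# The interval contracts toward the minimizer x* = 0.3 of (x - 0.3)^2.
lo, hi = median_contraction(lambda x: (x - 0.3) ** 2, -1.0, 1.0)
```

Each contraction roughly halves the interval, which is the linear-convergence behavior the main text describes.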
Two key parts of the algorithm are (i) the method for constructing a model to fit the given data on , and (ii) the sampling strategy for generating further uniform samples over according to some known interior points. In this work, we use Gaussian process (GP) regression to construct a model fitting the data pairs
. Its merit is flexible expressivity, since its hyperparameters are tuned automatically by likelihood maximization. See Data S1 for technical details and (1) for more.
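GP regression itself can be sketched compactly. The snippet below computes only the GP posterior mean with a fixed squared-exponential kernel; in the method described here the hyperparameters would instead be tuned by likelihood maximization (see (1)), and the names, length scale, and jitter below are illustrative assumptions rather than the paper's settings.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            t = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= t * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel with a fixed length scale."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def gp_mean(xs, ys, x_star, noise=1e-6):
    """GP posterior mean: k_*^T (K + noise * I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, a) * w for a, w in zip(xs, alpha))

xs = [0.4 * i for i in range(9)]   # training inputs on [0, 3.2]
ys = [math.sin(x) for x in xs]     # noiseless observations of sin
pred = gp_mean(xs, ys, 1.5)        # posterior mean between training points
```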
Now consider the sampling strategy. Suppose are given points and is a union of sample sets , where contains samples from a normal distribution with mean
and variance (sufficiently large to cover ). Further, let . We now add a new point from to , and it should preferably fill the gaps in the distribution of . This subsequent point can be determined as
One can generate uniform samples in by applying the above step recursively; see Fig. S1 for an example illustrating how the method proceeds. This recursive algorithm is closely related to Voronoi diagrams (2), because the added point will always lie in the largest Voronoi cell of if the size of is large enough.
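The gap-filling step can be approximated by a maximin (farthest-point) rule: among random candidates, pick the one farthest from every existing point, which lies, approximately, in the largest Voronoi cell. This is an illustrative sketch, not the exact rule of the paper; the candidate count and the unit-square domain are assumptions.

```python
import random

random.seed(0)

def add_gap_filling_point(points, n_cand=2000):
    """Among uniform candidates in the unit square, return the one that
    maximizes the distance to its nearest existing point, i.e.
    (approximately) the center of the largest empty region."""
    def min_dist2(c):
        return min((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 for p in points)
    cands = [(random.random(), random.random()) for _ in range(n_cand)]
    return max(cands, key=min_dist2)

corners = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
new_pt = add_gap_filling_point(corners)  # lands near the center (0.5, 0.5)
```

With only the four corners present, the largest Voronoi gap is around the center, which is where the maximin candidate lands.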
In this section we explain the monotonic convergence: if is not a constant on and the sequence is generated by Eqs. 1 and 2 (not limited to the median-type algorithm), then for every .
The proof of monotonic convergence proceeds by induction on . First, the case is trivial since is not a constant on . Assume that ; then , where
Now we show that . Let ; then it follows from Eq. S2 that . Hence, for any , we have , which can be further rewritten as
thus, , that is, ; meanwhile, since .
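The monotonic decrease of the median thresholds can also be checked numerically. The sketch below uses a quadratic stand-in function with illustrative sizes: it records the sample median at each contraction and confirms the sequence decreases strictly.

```python
import random
import statistics

random.seed(1)

f = lambda x: x * x          # illustrative nonconstant objective
lo, hi = -1.0, 1.0
medians = []
for _ in range(8):
    xs = [random.uniform(lo, hi) for _ in range(400)]
    med = statistics.median(f(x) for x in xs)
    medians.append(med)
    # contract to the samples at or below the median threshold
    keep = [x for x in xs if f(x) <= med]
    lo, hi = min(keep), max(keep)
```

For this quadratic, each contraction halves the interval, so the median threshold shrinks by roughly a factor of four per step.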
Without loss of generality, assume that on hereafter. We show that if is a -type hierarchical low frequency dominant function, then there exists a class of approximations such that the corresponding error bounds are reduced by a factor of every time the number of function evaluations doubles.
For any nonnegative integer and , we define the th -bandlimited function by
where is the inner product of and ; then according to the Nyquist-Shannon sampling theorem, can be reconstructed from its samples corresponding to a sampling density of . Further, let , where and
then and the condition Eq. 3 can be rewritten as
So it follows that
so the error bounds are reduced by a factor of every time the number of function evaluations doubles, that is, the corresponding sampling density is increased from to . Furthermore, by noting that
the error bound can also be rewritten as
So the error bounds are reduced by a factor of every time the sampling density doubles. In the next section we will further consider the number of samples on the compact sets.
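This halving can be illustrated on a toy stand-in for a hierarchical low frequency dominant function: a dyadic sine series with geometrically decaying coefficients, where the ratio r = 1/2 is an assumed illustrative value. Truncating one dyadic band later, i.e., doubling the bandlimit and hence the sampling density, shrinks the sup-norm tail error by roughly the factor r.

```python
import math

r, levels = 0.5, 11
grid = [2 * math.pi * k / 4000 for k in range(4001)]

def tail_error(J):
    """Sup-norm (on a grid) of the tail sum_{j>J} r^j sin(2^j x),
    i.e. the error of truncating the dyadic series at band J."""
    return max(abs(sum(r ** j * math.sin((2 ** j) * x)
                       for j in range(J + 1, levels))) for x in grid)

errors = [tail_error(J) for J in range(2, 7)]
ratios = [b / a for a, b in zip(errors, errors[1:])]
```

The ratios cluster around r = 1/2: each doubling of the bandlimit removes one more geometrically weighted band.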
For functions in category I, the key to achieving logarithmic-time efficiency lies in two facts: (i) the median of on is reduced by a factor of as increases, and (ii) there is an upper bound on the number of function evaluations on every such that a certain approximation can be constructed from these samples with an error bound less than or equal to the median of on .
Assume that the sequence is defined recursively by and
where is a uniformly distributed random variable over and is the unique integer such that
Then it is clear that
moreover, since is a -type tempered function on ,
Further, noting that is a -type hierarchical low frequency dominant function, it follows that for every ,
then according to the convergence shown in section 3, is contained in every ; in addition, since is a critical regular function on , the Fourier approximation can be reconstructed from its samples corresponding to a sampling density of .
The restriction of to , denoted by , is closely related to the threshold value and the prolate spheroidal functions
which are the eigenfunctions of the time and frequency limiting operator (3-7). More specifically, if we denote a prolate series up to and including the th term by
then follows by noting that has a Fourier transform supported in . Moreover, it is important to mention that the error bound decays super-exponentially as soon as reaches or goes beyond the plunge region around the threshold value (8, 9).
Therefore, for every there exist and such that could be constructed by samples of over and
further, for a fixed accuracy , there exists a such that
hence, after median-type contractions, one gets the approximate solution set with
and the total number of function evaluations is less than
where is the smallest integer greater than or equal to ; letting the computational complexity of be , the time complexity is less than
that is, a logarithmic time for any desired accuracy .
In the above discussion, Eq. 4 allows us to bound the number of function evaluations by a constant in each contraction; in fact, even without Eq. 4, the bound can still be controlled in a probabilistic sense. So in the following we first establish a weaker version of Eq. 4: if is a compact set and is continuous and not a constant on , then for any , there must exist a such that
holds for every with probability at least , where is a uniformly distributed random variable on . From the definition Eq. S13, Eq. S23 is equivalent to saying that the upper bound of is less than of the median of on with probability at least .
Suppose is a uniformly distributed random variable on , , and . Under the assumption of , since is continuous and not a constant on , we have and , and there exists a such that . First, the distance between the median and the mean is bounded by the standard deviation (10), i.e.,
and it follows from Chebyshev's inequality that
So it holds that
with probability at least , that is,
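Both ingredients of this argument, the median-to-mean bound of (10) and Chebyshev's inequality, hold for any distribution, including an empirical one, and can be checked directly; the exponential sample below is purely illustrative.

```python
import random
import statistics

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(10000)]  # a skewed sample

mean = statistics.fmean(xs)
med = statistics.median(xs)
sd = statistics.pstdev(xs)  # std of the empirical distribution

# Mallows (10): the median lies within one standard deviation of the mean.
gap = abs(med - mean)

# Chebyshev: at most 1/k^2 of the mass lies k or more std devs from the mean.
k = 2.0
frac = sum(abs(x - mean) >= k * sd for x in xs) / len(xs)
```

Because both bounds are distribution-free, they hold exactly for the empirical distribution of any sample, which is what the probabilistic version of the evaluation bound exploits.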
Let us now see what happens after replacing condition Eq. 4 with Eq. S23. Assume that
If is predictable in polynomial time, then there exist and such that Eq. 3 holds; similarly, for every , it holds that
with probability at least . Further, according to the convergence shown in section 3, is contained in every with probability at least ; in addition, and the Fourier approximation can be reconstructed from its samples corresponding to a sampling density of . According to the discussion in the previous subsection, for every there exist and such that can be constructed from fewer than samples of over and
further, for a fixed accuracy , there exists a such that
hence, after contractions, one can obtain the approximate solution set with
and clearly, the total number of function evaluations is much less than
similarly, letting the computational complexity of be , the time complexity is much less than
that is, a polynomial time for any desired accuracy . It is worth noting that the algorithm is not limited to the median type.
For a noncontractile , the contracting algorithm degenerates entirely into a model-based approach. In the following we consider the time complexities for sufficiently smooth functions, Hölder continuous functions, and non-Hölder continuous functions, respectively.
Suppose that are uniformly distributed over with the sample size , with the relevant data values , and that interpolates on . For a sample set over , we denote the associated fill distance by
then for , there exists such that
or, , where ; see Lemma 12 of (11).
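The fill distance can be estimated by brute force on a dense grid; the sketch below (domain, sizes, and names are illustrative) also shows the expected shrinkage as the sample size grows.

```python
import random

random.seed(0)

def fill_distance(pts, n_grid=4001):
    """Approximate the fill distance of pts in [0, 1]: the largest
    distance from any point of a dense grid to its nearest sample."""
    return max(min(abs(i / (n_grid - 1) - p) for p in pts)
               for i in range(n_grid))

h_50 = fill_distance([random.random() for _ in range(50)])
h_200 = fill_distance([random.random() for _ in range(200)])
# denser uniform sampling yields a smaller fill distance
```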
If with on a bounded domain , then there exists a band-limited interpolant (see Lemma 3.9 of (12)) such that for any , it holds that
where , see Theorem 3.10 of (12); then for , the time complexity is
where the complexity of is . This result is similar to that given in (11).
If satisfies a -Hölder condition, i.e., there exist and such that , then there exists a nearest-neighbor interpolant which is closely related to the Voronoi diagram of , such that for any , it holds that
then similarly, for , the time complexity is
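The Hölder case can be made concrete with a nearest-neighbor interpolant: for f(x) = |x - 1/2|^(1/2), which satisfies a 1/2-Hölder condition with constant 1, the sup error is bounded by h^(1/2) for fill distance h. The function and sizes are illustrative choices.

```python
def nn_interpolant(xs, ys):
    """Piecewise-constant interpolant: return the value of the nearest
    sample (the Voronoi-cell value in one dimension)."""
    def s(x):
        i = min(range(len(xs)), key=lambda j: abs(x - xs[j]))
        return ys[i]
    return s

f = lambda x: abs(x - 0.5) ** 0.5   # 1/2-Hölder with constant 1
xs = [i / 200 for i in range(201)]  # equispaced samples, fill distance 1/400
s = nn_interpolant(xs, [f(x) for x in xs])

h = 1 / 400
err = max(abs(f(k / 1999) - s(k / 1999)) for k in range(2000))
```

Since every evaluation point is within h of a sample, the Hölder condition gives |f(x) - s(x)| <= h^(1/2), which the measured error respects.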
Further, if does not satisfy any Hölder condition, then the time complexity is larger than for all , so the algorithm cannot be completed in polynomial time.
References and Notes
C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning (MIT Press, Cambridge, MA, 2006).
F. Aurenhammer, Voronoi diagrams - a survey of a fundamental geometric data structure. ACM Comput. Surv. 23, 345-405 (1991). doi: 10.1145/116873.116880
D. Slepian, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, I. Bell System Tech. J. 40, 43-64 (1961). doi: 10.1002/j.1538-7305.1961.tb03976.x
H. J. Landau, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, II. Bell System Tech. J. 40, 65-84 (1961). doi: 10.1002/j.1538-7305.1961.tb03977.x
H. J. Landau, H. O. Pollak, Prolate spheroidal wave functions, Fourier analysis and uncertainty, III. Bell System Tech. J. 41, 1295-1336 (1962). doi: 10.1002/j.1538-7305.1962.tb03279.x
D. Slepian, Prolate spheroidal wave functions, Fourier analysis and uncertainty, IV. Bell System Tech. J. 43, 3009-3057 (1964). doi: 10.1002/j.1538-7305.1964.tb01037.x
D. Slepian, On bandwidth. Proc. IEEE 64, 292-300 (1976). doi: 10.1109/PROC.1976.10110
J. P. Boyd, Approximation of an analytic function on a finite real interval by a bandlimited function and conjectures on properties of prolate spheroidal functions. Appl. Comput. Harmon. Anal. 15, 168-176 (2003). doi: 10.1016/S1063-5203(03)00048-4
A. Bonami, A. Karoui, Spectral decay of time and frequency limiting operator. Appl. Comput. Harmon. Anal. 42, 1-20 (2017). doi: 10.1016/j.acha.2015.05.003
C. Mallows, Another comment on O’Cinneide. The American Statistician 45, 257 (1991). doi: 10.1080/00031305.1991.10475815
A. D. Bull, Convergence rates of efficient global optimization algorithms. J. Mach. Learn. Res. 12, 2879-2904 (2011).