1 Introduction
Stochastic optimization [18] is the task of optimizing a given objective functional by generating and using random variables. It is usually an iterative process of generating random variables that progressively locates the minima or maxima of the objective functional. Stochastic optimization is typically applied in non-convex functional spaces where the usual deterministic techniques, such as linear or quadratic programming or their variants, cannot be used. Stochastic optimization is performed both in discrete spaces, e.g., generalized hill climbing [13], and in continuous spaces [1]. In this article, we focus only on the stochastic optimization task in the continuous domain.
Stochastic optimization in the continuous domain includes a large number of different algorithms, among them stochastic gradient descent [15], simulated annealing [14, 7, 3, 4, 8], evolutionary algorithms [10, 5], tabu search [9, 6], and many others. Stochastic gradient descent and quasi-Newton techniques usually find a local optimum in the search space. On the other hand, simulated annealing can find the global optimum with a proper temperature schedule [7], and evolutionary algorithms are also proven to reach the global optimum under certain conditions [5]. However, most of the existing techniques require the specification of user-defined parameters. For example, the performance of simulated annealing is highly dependent on the cooling schedule, and evolutionary algorithms depend on crossover and mutation probabilities defined by the user. Secondly, most stochastic search techniques operate with tunable parameters: in simulated annealing, the temperature is gradually reduced with a certain cooling schedule, and in evolutionary algorithms the crossover and mutation probabilities are also usually reduced over the iterations. In other words, these algorithms are mostly not adaptive. By adaptivity of an algorithm we mean that, if the objective functional changes over time, the algorithm is able to follow the new optimal points according to the changed search-space structure. If the user-defined parameters are reduced gradually, the algorithm converges to the optimal point but loses the capability of adjusting the solution space to the changing search-space structure when the objective functional changes.
In this paper, we propose a parameter-free adaptive stochastic optimization algorithm for continuous random variables (ASOC) that is not only independent of the choice of any user-defined parameter but is also able to adapt to changes in the search-space structure of a changing objective functional. We derive the idea of optimization from generative models in pattern classification [17]. First we consider a sample pool and obtain the corresponding functional values. We then define ordered pairs of samples such that an ordered pair belongs to a particular class if its first sample has a smaller functional value than its second sample. We then iteratively generate ordered pairs from this class, so that the first sample in each generated pair has a smaller functional value than the second. Thus we iteratively generate samples, obtained from the generated ordered pairs, that progressively reduce the functional value. When the process converges, i.e., there is no further decrease in the functional value, we have obtained the minimum of the objective function. An analogous process can be followed if the task is to maximize the objective function. ASOC has a similarity with stochastic gradient descent, where a sample is updated based on the local gradient of the objective function [15]. However, we never compute the gradient of the objective function explicitly; in other words, we are not constrained by the requirement that the objective function be locally differentiable. ASOC can be applied to any stochastic optimization problem over continuous variables, even if the function is not expressible in a mathematical form but can be computed from the sample values. A similar approach is followed in the literature on Bayesian optimization [16]; however, Bayesian optimization techniques do not use the concept of generative models over ordered pairs to minimize or maximize the functional values.

2 Problem Formulation
2.1 Representation
Let the optimization problem be that of finding an x*,

    x* = argmin_{x ∈ D} f(x)        (1)

subject to D ⊂ ℝⁿ, such that the task is to find the minimum of the function f. Here f is not necessarily expressible in parametric form and is not necessarily a smooth function. In practice, several such optimization tasks exist where it is extremely difficult to express a suitable functional form of the optimization problem mathematically. In this paper, we do not assume any form of the function; we only require that, for any given n-dimensional vector x ∈ D, the objective f(x) can be evaluated.

The generic representation structure of the proposed algorithm is analogous to that of evolutionary algorithms. We maintain a pool of vectors X = {x₁, …, x_N} and their corresponding objective values f(x₁), …, f(x_N). The algorithm proceeds iteratively, and at every iteration it generates a new pool of candidate vectors Y. The algorithm then finds a set of best-fitting candidate vectors, as evaluated by the objective function, from X ∪ Y. Next, the entire process is repeated until there is no further change in the best-fitting solution. The strategy for generating new candidate vectors is derived from the idea of generative models in the pattern classification task [17], where we define synthetic class structures consisting of ordered pairs of samples. We then generate new samples from this class structure such that a new sample is randomly drawn that is expected to be better than the best in the existing pool of samples.
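This elitist selection of the best-fitting vectors from X ∪ Y can be sketched as follows; the helper name `select_best` and the row-vector layout are our own choices, not taken from the paper.

```python
import numpy as np

def select_best(X, Y, f, n_keep):
    """Elitist pool update: keep the n_keep best vectors (by objective
    value f) from the union of the current pool X and the freshly
    generated candidates Y.  X and Y hold one sample per row."""
    pool = np.vstack([X, Y])
    order = np.argsort([f(x) for x in pool])
    return pool[order[:n_keep]]
```

Because the union always contains the current pool, the best solution found so far can never be lost in this step.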
2.2 Optimization as a Generative Model
As mentioned before, the pool of candidate vectors is represented as X = {x₁, …, x_N}. Without loss of generality, let us assume that the pool is sorted according to the objective values such that

    f(x₁) ≤ f(x₂) ≤ ⋯ ≤ f(x_N)        (2)

where f(xᵢ) denotes the objective functional value of the vector xᵢ. With this representation, we transform the problem into a space of ordered pairs of vectors z = [xᵢᵀ xⱼᵀ]ᵀ, where ᵀ indicates transpose. In other words, if the vector notation of the n-dimensional vector xᵢ is given as xᵢ = [x_{i1}, …, x_{in}]ᵀ, the concatenated vector is then given as

    z_{ij} = [x_{i1}, …, x_{in}, x_{j1}, …, x_{jn}]ᵀ.        (3)

We therefore obtain N(N−1) such ordered pairs of vectors for all i, j ∈ {1, …, N}, i ≠ j. We partition these concatenated vectors into two classes, namely C⁺ and C⁻, each containing N(N−1)/2 concatenated samples, such that

    z_{ij} ∈ C⁺ if f(xᵢ) ≤ f(xⱼ),  and z_{ij} ∈ C⁻ otherwise.        (4)
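For illustration, the ordered pairs and the two-class partition can be built as below; the helper name `ordered_pair_classes` and the strict-inequality tie-breaking are our own choices.

```python
import numpy as np
from itertools import permutations

def ordered_pair_classes(X, f):
    """Build all N(N-1) ordered pairs z_ij = [x_i; x_j] (i != j) and put
    z_ij in C_plus when f(x_i) < f(x_j), i.e. the first sample is the
    better one, and in C_minus otherwise."""
    vals = [f(x) for x in X]
    C_plus, C_minus = [], []
    for i, j in permutations(range(len(X)), 2):
        z = np.concatenate([X[i], X[j]])
        (C_plus if vals[i] < vals[j] else C_minus).append(z)
    return np.array(C_plus), np.array(C_minus)
```

Note that each unordered pair {xᵢ, xⱼ} contributes exactly one member to each class, which is why the two classes have equal size.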
Once we obtain such a partition, the class structures of C⁺ and C⁻ are defined by the pool vectors subject to a certain density estimate. Once the class structure is defined, the next task is to obtain one candidate vector y such that

    f(y) ≤ f(xᵢ)        (5)

for all i ∈ {1, …, N}. In other words, we need to find one candidate vector that is better than the existing pool vectors in terms of the objective values. This is equivalent to finding one y that is better than the best pool vector, i.e.,

    f(y) ≤ f(x₁)        (6)

with a sorted pool as in (2).
In order to find y, we use the conditional distribution of y conditioned on x₁ such that Equation (6) is satisfied. Consider the concatenated vector z = [yᵀ x₁ᵀ]ᵀ ∈ C⁺. The distribution of z is approximated as normal with mean

    μ = [μ₁ᵀ μ₂ᵀ]ᵀ        (7)

and covariance

    Σ = [ Σ₁₁  Σ₁₂ ; Σ₂₁  Σ₂₂ ]        (8)

where μ and Σ are determined from all samples in C⁺. Then the distribution of y with the condition x = x₁ is given as 𝒩(μ̄, Σ̄), where

    μ̄ = μ₁ + Σ₁₂ Σ₂₂⁻¹ (x₁ − μ₂)        (9)

and Σ̄ is given by the Schur complement [20]

    Σ̄ = Σ₁₁ − Σ₁₂ Σ₂₂⁻¹ Σ₂₁.        (10)

Once we obtain the distribution of y as 𝒩(μ̄, Σ̄), we generate new samples from that distribution.
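Equations (9) and (10) are the standard Gaussian conditioning formulas [20]; a small helper (the function name is ours) makes the computation concrete:

```python
import numpy as np

def conditional_gaussian(mu, Sigma, x2, n):
    """Condition a joint Gaussian over z = [x1; x2] on an observed x2.
    mu is the joint mean (length 2n), Sigma the joint covariance
    (2n x 2n); returns the mean and covariance of x1 given x2."""
    mu1, mu2 = mu[:n], mu[n:]
    S11, S12 = Sigma[:n, :n], Sigma[:n, n:]
    S21, S22 = Sigma[n:, :n], Sigma[n:, n:]
    W = S12 @ np.linalg.inv(S22)
    mu_bar = mu1 + W @ (x2 - mu2)
    Sigma_bar = S11 - W @ S21   # Schur complement of S22 in Sigma
    return mu_bar, Sigma_bar
```

For example, with n = 1, zero means, unit variances, and correlation 0.5, observing x2 = 1 gives the conditional mean 0.5 and conditional variance 0.75.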
The new sample generation process is similar to the stochastic gradient descent process [15], except that the new samples are generated from the estimated target distribution instead of being a deterministic point computed from the gradient. The nature of the target distribution depends on the previous distribution of the samples. In simulated annealing, the acceptance probability of an inferior solution is modulated by exp(−ΔE/T), where T is the temperature and ΔE is the increase in the objective functional value of the inferior solution; as T goes to zero, the acceptance probability goes to zero. In ASOC, we instead guide the selection process to iteratively adapt the new solutions towards the minimum. In our case, there is no temperature schedule or cooling process as used in simulated annealing. Our technique is completely adaptive and depends only on the pool of samples: even if the functional values change, the technique automatically adapts the samples to select the new optima.
We start with a randomly generated sample pool in D. Let P be a sample pool having N samples. We first sort P in ascending order according to the functional values of the samples, and select the top M samples from that pool. Let this sample pool be P_s, having M samples. We then compute μ̄ and Σ̄ from the sample pool P_s. Next, we draw N − M samples randomly in D using 𝒩(μ̄, Σ̄). Let this sample pool be P_g. We then have the sample set P = P_s ∪ P_g, again sort it in ascending order of the functional values, and repeat the entire process to generate a new set of samples. We iteratively generate new samples until there is no significant change in the best solution. Note that the samples in P_g may be inferior to the best sample in P_s, in which case the best sample in P_s automatically moves to the next iteration. In other words, we always follow an elitist selection mechanism, unlike simulated annealing.
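Putting the steps above together, a minimal runnable sketch of the loop follows; the symbol names, default pool sizes, bounds, and the small covariance jitter are our own assumptions, not specified in the paper.

```python
import numpy as np

def asoc(f, dim, n_pool=30, n_top=15, n_iter=200, bounds=(-5.0, 5.0), seed=0):
    """A minimal sketch of the ASOC loop: sort the pool, keep the top
    samples, fit a Gaussian to the 'improving' ordered pairs, condition
    on the current best sample, and draw replacement samples."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(n_pool, dim))
    for _ in range(n_iter):
        # sort ascending by objective value and keep the top samples (elitist)
        P = P[np.argsort([f(x) for x in P])][:n_top]
        # 'improving' ordered pairs z_ij = [x_i; x_j] with f(x_i) <= f(x_j)
        Z = np.array([np.concatenate([P[i], P[j]])
                      for i in range(n_top) for j in range(i + 1, n_top)])
        mu = Z.mean(axis=0)
        Sigma = np.cov(Z, rowvar=False) + 1e-9 * np.eye(2 * dim)
        # condition the first block on the current best sample P[0]
        S11, S12 = Sigma[:dim, :dim], Sigma[:dim, dim:]
        S21, S22 = Sigma[dim:, :dim], Sigma[dim:, dim:]
        W = S12 @ np.linalg.inv(S22)
        mu_bar = mu[:dim] + W @ (P[0] - mu[dim:])
        Sigma_bar = S11 - W @ S21        # Schur complement
        new = rng.multivariate_normal(mu_bar, Sigma_bar, size=n_pool - n_top)
        P = np.vstack([P, np.clip(new, lo, hi)])   # constrain samples to D
    return P[np.argsort([f(x) for x in P])[0]]
```

On simple test functions such as the sphere function this sketch moves toward the minimum, but it is only an illustration of the sampling scheme, not a faithful reimplementation of the paper's experiments.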
2.3 Overall Algorithm
We summarize the overall algorithm in this section.
Problem: Find the minimum of a given objective function f in the n-dimensional continuous space, i.e.,

    x* = argmin_{x ∈ D} f(x)        (11)

subject to x ∈ D ⊂ ℝⁿ. The objective function f need not be continuous or differentiable.

Step 1: Randomly initialize a sample pool P with N samples in the n-dimensional space such that each sample is in D.

Step 2: Sort the sample pool P in ascending order of the functional values and choose the top M samples from the sorted pool to construct the sample set P_s.

Step 3: Compute μ̄ and Σ̄ from P_s, randomly draw N − M samples from the target distribution 𝒩(μ̄, Σ̄), and constrain the samples to lie in D to form the sample set P_g. Construct the set P = P_s ∪ P_g.

Step 4: Repeat the process from Step 2 until some stopping criterion is satisfied.
In ASOC, if none of the generated samples satisfies Equation (6), then no new best sample vector is obtained. However, we do not omit the inferior samples from the sample pool; they become iteratively better. Thus, even if there is no change in the best sample in the pool, the other samples may improve iteratively.
One of the major advantages of the proposed search algorithm is that it is completely free from user-defined parameters. State-of-the-art stochastic search algorithms, such as the class of simulated annealing and genetic algorithms, depend highly on user-defined parameters. For example, in simulated annealing, the search process is guided by an artificial cooling schedule defined by the temperature, and the schedule for decreasing the temperature is decided beforehand. Similarly, in evolutionary algorithms, the performance depends on the crossover and mutation probabilities, and these probability values are user-defined.
3 Experimental Results
There exists a large number of benchmark functions in the literature [12] for testing the effectiveness of stochastic optimization algorithms, and a subset of these functions is available in [19]. We used the same subset of functions as in [19] for testing the effectiveness of ASOC; Table 1 lists the functions used in our experiments. We demonstrate the effectiveness of ASOC in optimizing these functions and compare ASOC with simulated annealing and genetic algorithms on the same set of functions.
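Table 1 is not reproduced here, but two of the benchmark functions discussed below (Rosenbrock and Easom) have standard closed forms; for reference, a sketch in Python (function names are ours):

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock function; global minimum 0 at (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def easom(x):
    """Easom function (2-D); global minimum -1 at (pi, pi), with a
    narrow basin that makes the minimum hard to locate."""
    x1, x2 = float(x[0]), float(x[1])
    return float(-np.cos(x1) * np.cos(x2)
                 * np.exp(-((x1 - np.pi) ** 2 + (x2 - np.pi) ** 2)))
```

The narrow exponential basin of the Easom function is consistent with the difficulty reported for it below.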
We implemented ASOC in Matlab in the Windows XP environment. We chose a population size N = 30 and observed the convergence properties of the ASOC algorithm for 2000 generations. For comparison, we optimized the functions using both simulated annealing and a genetic algorithm for continuous variables. In simulated annealing, we iterated for 2000 iterations, and in each iteration we generated samples randomly 50 times at a constant temperature; the temperature was reduced following a logarithmic schedule over the 2000 iterations. For the genetic algorithm, we used an elitist model where the best chromosome is always passed to the next generation.
In Table 2, we show the effectiveness of ASOC along with SA and GA for 100, 500, and 2000 iterations, respectively. In the implementation of SA, the temperature is reduced according to the total number of iterations: for a smaller number of iterations the temperature is reduced quickly, and for a larger number of iterations it is reduced rather slowly.
From Table 2, we observe that GA and ASOC obtain the optimal points in most of the cases. For the Easom function, none of the techniques succeeds in obtaining the minimal point. For the Rosenbrock function, we observe that ASOC outperforms GA for dimensionality equal to 3; for the same function, simulated annealing did not converge.
In simulated annealing, the temperature is reduced to obtain the global optimum. However, if the nature of the optimization function changes, SA is not able to adapt to the new situation and find the new optimum. On the other hand, ASOC is a practically parameter-free optimization technique, and it continues to generate new samples in the vicinity of the optimum once it has converged. If the nature of the optimization function changes, it gracefully switches over to the new optimum location and adapts the solution space. In order to show the effectiveness of ASOC in adapting to new situations, we change the function from function number 2 through 18 (as in Table 1) and run ASOC on each function for 2000 iterations without reinitializing the samples; it is as if a new optimum appears once the algorithm has converged. We did not consider function number 1 (Table 1) because functions 1 and 2 have the same optimum locations. Figure 1 illustrates how the obtained optimum changes as the function changes. We observe that ASOC is indeed able to follow the changing pattern of the optimization problem.
4 Discussion
We have presented a new adaptive, parameter-free stochastic optimization technique called ASOC and have demonstrated that ASOC can find the optimal solution on several benchmark problems. Simulated annealing converges to the global optimum with a suitably chosen cooling schedule [3, 11], and evolutionary algorithms are also globally convergent under certain conditions [5]. The convergence properties of ASOC require further analysis; a possible approach towards proving convergence under a generalized framework of such optimization algorithms is provided in [21, 2].
We generate new samples by treating the class of ordered pairs of samples as a single cluster, and therefore derive a single mean and covariance matrix. It is possible to extend ASOC by clustering the ordered pairs of samples into different clusters and estimating a mean and covariance matrix for each cluster separately. In this way there is more variability in the generated samples, which may lead to better convergence. ASOC finds one optimal point for a given single-objective functional; extending ASOC to find Pareto-optimal solutions for multi-objective functionals is a possible avenue of future work.
References

[1] K. P. Bennett and E. Parrado-Hernández. The interplay of optimization and machine learning research. Journal of Machine Learning Research, 7:1265–1281, 2006.
[2] H. Dawid. A Markov chain analysis of genetic algorithms with a state dependent fitness function. Complex Systems, 8:407–417, 1994.
[3] U. Faigle and W. Kern. Note on the convergence of simulated annealing algorithms. SIAM Journal on Control and Optimization, 29:153–159, 1991.
[4] M. Fielding. Simulated annealing with an optimal fixed temperature. SIAM Journal on Optimization, 11:289–307, 2000.
[5] D. B. Fogel. Asymptotic convergence properties of genetic algorithms and evolutionary programming: Analysis and experiments. Cybernetics and Systems, 25:389–407, 1994.
[6] B. L. Fox. Integrating and accelerating tabu search, simulated annealing, and genetic algorithms. Annals of Operations Research, 41:47–67, 1993.
[7] B. L. Fox. Faster simulated annealing. SIAM Journal on Optimization, 5:485–505, 1995.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6:721–741, 1984.
[9] F. Glover. Tabu search for nonlinear and parametric optimization (with links to genetic algorithms). Discrete Applied Mathematics, 49:231–255, 1994.
[10] D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[11] V. Granville, M. Křivánek, and J.-P. Rasson. Simulated annealing: A proof of convergence. IEEE Trans. Pattern Analysis and Machine Intelligence, 16:652–656, 1994.
[12] M. Jamil and X.-S. Yang. A literature survey of benchmark functions for global optimization problems. Int. Journal of Mathematical Modelling and Numerical Optimisation, 4:150–194, 2013.
[13] A. W. Johnson and S. H. Jacobson. A class of convergent generalized hill climbing algorithms. Applied Mathematics and Computation, 125:359–373, 2002.
[14] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
[15] K. C. Kiwiel. Convergence of approximate and incremental subgradient methods for convex optimization. SIAM Journal on Optimization, 14:807–840, 2003.
[16] M. Pelikan, D. E. Goldberg, and E. Cantú-Paz. Bayesian optimization algorithm, population sizing, and time to convergence. In Proceedings of The Genetic and Evolutionary Computation Conference, pages 275–282, 2000.
[17] Z. Tu. Learning generative models via discriminative approaches. In Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), pages 1–8, 2007.
[18] Wikipedia. Wikipedia, the free encyclopedia: Stochastic optimization, 2014.
[19] Wikipedia. Wikipedia, the free encyclopedia: Test functions for optimization, 2014.
[20] Wikipedia. Wikipedia, the free encyclopedia: Multivariate normal distribution, 2015.
[21] Q. Zhang and H. Mühlenbein. On the convergence of a class of estimation of distribution algorithms. IEEE Trans. Evolutionary Computation, 8:127–136, 2004.