Black-box optimization (also called zero-order optimization) methods have received a great deal of attention in recent years due to their broad applicability to real-world problems [18, 15, 10, 7, 9, 11]. When the structure of the objective function is unknown or too complex to model directly, or when gradient information is unavailable or unreliable, such methods are often the method of last resort, because all they require is that the objective function can be evaluated at specific points.
In continuous black-box optimization, state-of-the-art algorithms such as xNES and CMA-ES are all based on the same principle [4, 1]: a Gaussian search distribution is repeatedly updated based on the objective function values of sampled points. Usually the full covariance matrix of the distribution is updated, allowing the algorithm to adapt the size and shape of the Gaussian to the local characteristics of the objective function. Full parameterization also provides invariance under affine transformations of the coordinate system, so that ill-shaped, highly non-separable problems can be tackled. However, this generality comes at a price: the number of parameters scales quadratically with the number of dimensions $d$, and the computational cost per sample is at least $O(d^2)$, and sometimes $O(d^3)$ [4, 16].
This cost is often justified, since evaluating the objective function can dominate the computation, in which case the main focus is on improving sampling efficiency. However, for many problems, such as optimizing the weights of a neural network, the dimensionality of the parameter space can be very large (e.g., many thousands of weights), and the quadratic cost of updating the search distribution becomes the computational bottleneck. One possible remedy is to restrict the covariance matrix to be diagonal, which reduces the computation per function evaluation to $O(d)$, linear in the number of dimensions. Unfortunately, this “diagonal” approach performs poorly when the problem is non-separable, because the search distribution cannot follow directions that are not parallel to the current coordinate axes.
In this paper, we propose a new variant of the natural evolution strategy family, termed Rank One NES (R1-NES). This algorithm stays within the general NES framework in that the search distribution is adjusted according to the natural gradient, but it uses a novel parameterization of the covariance matrix,
$$C = \sigma^2 \left( I + u u^\top \right),$$
where $\sigma > 0$ and $u \in \mathbb{R}^d$ are the parameters to be adjusted. This parameterization allows the predominant eigendirection of $C$, namely $u / \|u\|$, to be aligned in any direction, enabling the algorithm to tackle highly non-separable problems while maintaining only $O(d)$ parameters. We show through rigorous derivation that the natural gradient can also be computed in $O(d)$ per sample. R1-NES scales well to high dimensions, and dramatically outperforms diagonal covariance matrix algorithms on non-separable objective functions. As an example, R1-NES reliably solves the non-convex Rosenbrock function up to 512 dimensions.
2 The NES framework
Natural evolution strategies (NES) are a class of evolutionary algorithms for real-valued optimization that maintain a search distribution and adapt its parameters by following the natural gradient of the expected function value. The success of such algorithms is largely attributed to the use of the natural gradient, which has the advantage of always pointing in the direction of steepest ascent, even if the parameter space is not Euclidean. Moreover, compared to the regular gradient, the natural gradient down-weights gradient components with higher uncertainty, making the update more reliable. As a consequence, NES algorithms can effectively cope with objective functions with ill-shaped landscapes, in particular preventing premature convergence on plateaus and avoiding overaggressive steps on ridges.
The general framework of NES is as follows. At each time step, the algorithm draws $n$ samples $x_k \sim \pi(\cdot \,|\, \theta)$, $k = 1, \dots, n$, with $\pi$ being the search distribution parameterized by $\theta$. Let $f$ be the objective function to maximize. The expected function value under the search distribution is
$$J(\theta) = \mathbb{E}_\theta\left[f(x)\right] = \int f(x)\, \pi(x \,|\, \theta)\, \mathrm{d}x.$$
Using the log-likelihood trick, the gradient w.r.t. the parameters can be written as
$$\nabla_\theta J(\theta) = \mathbb{E}_\theta\left[f(x)\, \nabla_\theta \log \pi(x \,|\, \theta)\right],$$
from which we obtain the Monte-Carlo estimate
$$\nabla_\theta J(\theta) \approx \frac{1}{n} \sum_{k=1}^{n} f(x_k)\, \nabla_\theta \log \pi(x_k \,|\, \theta)$$
of the search gradient. The key step of NES then consists of replacing this gradient with the natural gradient
$$\tilde{\nabla}_\theta J = F^{-1} \nabla_\theta J(\theta), \qquad \text{where } F = \mathbb{E}_\theta\left[\nabla_\theta \log \pi(x \,|\, \theta)\, \nabla_\theta \log \pi(x \,|\, \theta)^\top\right]$$
is the Fisher information matrix (see Fig. 1 for an illustration). This leads to a straightforward scheme of natural gradient ascent for iteratively updating the parameters,
$$\theta \leftarrow \theta + \eta\, F^{-1} \nabla_\theta J(\theta),$$
with learning rate $\eta$.
The sequence of 1) sampling an offspring population, 2) computing the corresponding Monte Carlo estimate of the gradient, 3) transforming it into the natural gradient, and 4) updating the search distribution, constitutes one iteration of NES.
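As an illustration, one such iteration can be sketched for the simplest case of an isotropic Gaussian search distribution (a minimal sketch for exposition only; the function name, learning rates, sample size, and fitness standardization are illustrative choices, not the settings used in this paper):

```python
import numpy as np

def nes_step(mu, sigma, f, n=50, eta_mu=1.0, eta_sigma=0.05, rng=None):
    """One NES iteration for an isotropic Gaussian N(mu, sigma^2 I):
    1) sample offspring, 2) Monte-Carlo gradient via the log-likelihood
    trick, 3) natural-gradient update of mu and log(sigma)."""
    rng = np.random.default_rng() if rng is None else rng
    d = mu.shape[0]
    z = rng.standard_normal((n, d))            # standard-normal perturbations
    x = mu + sigma * z                         # offspring population
    fit = np.array([f(xi) for xi in x])
    fit = (fit - fit.mean()) / (fit.std() + 1e-12)   # standardized fitness
    # natural gradient on the mean (the Fisher factor cancels the 1/sigma^2
    # in the score, leaving a step proportional to sigma)
    mu = mu + eta_mu * sigma * (fit @ z) / n
    # natural gradient on log(sigma), aggregated over dimensions
    sigma = sigma * np.exp(eta_sigma * (fit @ (z**2 - 1).sum(axis=1)) / (2 * n))
    return mu, sigma
```

Iterating `nes_step` on a smooth objective drives `mu` toward the optimum, while `sigma` shrinks as the optimum is approached.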
3 Natural gradient of the rank-one covariance matrix approximation
In this paper, we consider a special formulation of the covariance matrix,
$$C = \sigma^2 \left( I + u u^\top \right),$$
with parameter set $\theta = \langle \mu, \sigma, u \rangle$. The special part of the parameterization is the vector $u$, which corresponds to the predominant eigendirection of $C$. This allows the search distribution to be aligned in any direction by adjusting $u$, enabling the algorithm to follow valleys not aligned with the current coordinate axes, which is essential for solving non-separable problems.
Since $\sigma$ should always be positive, following the same exponential reparameterization used in previous NES variants, we set $\sigma = e^{\lambda}$, so that $\lambda$ can be adjusted freely by gradient descent without $\sigma$ ever becoming negative. The parameter set becomes $\theta = \langle \mu, \lambda, u \rangle$ accordingly.
From the derivation of the natural gradient for Gaussian search distributions, the natural gradient on the sample mean is given by
$$\tilde{\nabla}_\mu J = \frac{1}{n} \sum_{k=1}^{n} f(x_k)\, (x_k - \mu).$$
In the subsequent discussion we always assume $\mu = 0$ for simplicity; this can be achieved easily by shifting the coordinates. It is straightforward to sample from $\mathcal{N}(0, C)$ by letting $z \sim \mathcal{N}(0, I_d)$ and $s \sim \mathcal{N}(0, 1)$, then
$$x = \sigma \left( z + s\, u \right).$$
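Note that sampling under this parameterization never forms the $d \times d$ matrix $C$ explicitly. A minimal sketch (the function name is ours):

```python
import numpy as np

def sample_r1(sigma, u, n, rng):
    """Draw n samples from N(0, sigma^2 (I + u u^T)) in O(d) per sample,
    via x = sigma * (z + s * u) with z ~ N(0, I_d) and s ~ N(0, 1)."""
    d = u.shape[0]
    z = rng.standard_normal((n, d))
    s = rng.standard_normal((n, 1))
    return sigma * (z + s * u)
```

The empirical covariance of such samples converges to $\sigma^2 (I + u u^\top)$.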
The inverse of $C$ can also be computed easily via the Sherman-Morrison formula,
$$C^{-1} = \sigma^{-2} \left( I - \frac{u u^\top}{c} \right),$$
where $c = 1 + u^\top u$. Using the relation $\det\!\left(I + u u^\top\right) = 1 + u^\top u$ (the matrix determinant lemma), the determinant of $C$ is
$$|C| = \sigma^{2d} \left( 1 + u^\top u \right).$$
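Both identities are easy to check numerically; a sketch (the helper names are ours):

```python
import numpy as np

def r1_inverse(sigma, u):
    """C^{-1} for C = sigma^2 (I + u u^T), by Sherman-Morrison:
    C^{-1} = sigma^{-2} (I - u u^T / (1 + u^T u))."""
    d = u.shape[0]
    return (np.eye(d) - np.outer(u, u) / (1.0 + u @ u)) / sigma**2

def r1_logdet(sigma, u):
    """log|C| by the matrix determinant lemma: |C| = sigma^{2d} (1 + u^T u)."""
    d = u.shape[0]
    return 2 * d * np.log(sigma) + np.log1p(u @ u)
```

Note that `r1_logdet` needs only the inner product $u^\top u$, i.e. $O(d)$ time.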
Knowing $C^{-1}$ and $|C|$ allows the log-likelihood to be written explicitly as
$$\log \pi(x \,|\, \theta) = -\frac{d}{2} \log(2\pi) - d \log \sigma - \frac{1}{2} \log c - \frac{1}{2\sigma^2} \left( x^\top x - \frac{(u^\top x)^2}{c} \right).$$
The regular gradient with respect to $\lambda$ and $u$ can then be computed as
$$\nabla_\lambda \log \pi = -d + \frac{1}{\sigma^2} \left( x^\top x - \frac{(u^\top x)^2}{c} \right), \qquad \nabla_u \log \pi = -\frac{u}{c} + \frac{u^\top x}{\sigma^2 c}\, x - \frac{(u^\top x)^2}{\sigma^2 c^2}\, u.$$
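The log-likelihood requires only the inner products $x^\top x$, $u^\top x$, and $u^\top u$, and can be verified against the dense Gaussian density; a sketch (the helper name is ours):

```python
import numpy as np

def r1_loglik(x, sigma, u):
    """log N(x | 0, sigma^2 (I + u u^T)) from inner products only (O(d)):
    -d/2 log(2 pi) - d log sigma - 1/2 log c - (x^T x - (u^T x)^2 / c) / (2 sigma^2),
    with c = 1 + u^T u."""
    d = x.shape[0]
    c = 1.0 + u @ u
    quad = (x @ x - (u @ x) ** 2 / c) / sigma**2
    return -0.5 * d * np.log(2 * np.pi) - d * np.log(sigma) - 0.5 * np.log(c) - 0.5 * quad
```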
Replacing $x$ with $\sigma(z + s u)$, the Fisher information matrix can then be computed by marginalizing out the i.i.d. standard Gaussian variables $z$ and $s$, namely
$$F = \mathbb{E}_{z, s}\left[\nabla \log \pi\, \nabla \log \pi^\top\right].$$
Since the elements in $\nabla \log \pi\, \nabla \log \pi^\top$ are essentially polynomials of $z$ and $s$, their expectations can be computed analytically (the derivation is tedious and thus omitted here; all derivations are numerically verified using Monte-Carlo simulation), which gives the exact Fisher information matrix.
Let , then
The inverse of $F$ is thus given by
We apply the standard formula for the inverse of a block matrix,
$$M^{-1} = \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} (M/D)^{-1} & -A^{-1} B (M/A)^{-1} \\ -D^{-1} C (M/D)^{-1} & (M/A)^{-1} \end{pmatrix},$$
where $M/A = D - C A^{-1} B$ and $M/D = A - B D^{-1} C$ are the Schur complements. Let $F$ be partitioned as above; then
and the Schur complements are
whose inverse is given by
Combining the results gives the analytical solution of the inverse Fisher:
Note that computing both the gradient and its product with the inverse Fisher requires only a small number of inner products between $d$-dimensional vectors, and can therefore be done in $O(d)$ storage and $O(d)$ time.
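The block inverse applied above is the standard Schur-complement identity, which can be sanity-checked with generic dense code (this is not the specialized $O(d)$ expression derived in the text):

```python
import numpy as np

def block_inverse(A, B, C, D):
    """Inverse of M = [[A, B], [C, D]] via the Schur complement
    S = D - C A^{-1} B of the upper-left block A."""
    Ainv = np.linalg.inv(A)
    Sinv = np.linalg.inv(D - C @ Ainv @ B)
    return np.block([
        [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
        [-Sinv @ C @ Ainv,                   Sinv],
    ])
```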
The natural gradient above is obtained with respect to $\langle \lambda, u \rangle$. However, a direct gradient update on $u$ has an unpleasant property when the gradient is in the opposite direction of $u$, as illustrated in Fig. 2(a). In this case, the gradient tends to shrink $u$. However, if the gradient step is large, adding it will flip the direction of $u$, and the length of $u$ might even grow. This causes numerical problems, especially when $\|u\|$ is small. A remedy is to separate the length and direction of $u$, namely, to reparameterize $u = r v$, where $v = u / \|u\|$ and $r = \|u\|$ is the length of $u$. A gradient update on $\langle r, v \rangle$ then never flips $u$, which avoids the problem.
Note that for a small change $\delta u$, the corresponding updates on $r$ and $v$ follow from $u = r v$ as
$$\delta r = v^\top \delta u, \qquad \delta v = \frac{1}{r} \left( \delta u - (v^\top \delta u)\, v \right).$$
Thanks to the invariance property of the natural gradient, the natural gradient on $r$ and $v$ is obtained by letting $\delta u$ above be the natural gradient on $u$. Note that computing these quantities involves only inner products between vectors, which can also be done in time linear in the number of dimensions.
Using the parameterization $u = r v$ introduces another problem of its own. When $r$ is small, the relative change in $r$ tends to be large, and directly applying the update on $\langle r, v \rangle$ can cause $r$ to change exponentially, resulting in numerical instability, as shown in Fig. 2(b). In this case, the additive update on $u$, rather than the update on $\langle r, v \rangle$, is more stable. In our implementation, the additive update on $u$ is used when $r$ is small; otherwise the update is performed on $\langle r, v \rangle$. This solution proved numerically stable in all our tests. Algorithm 1 shows the complete R1-NES algorithm in pseudocode.
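The direction-flip issue and the length/direction remedy can be illustrated numerically (`update_length_direction` is an illustrative helper of ours, not the exact update rule of Algorithm 1):

```python
import numpy as np

def update_additive(u, delta):
    """Plain additive update: when delta opposes u and is longer than u,
    the direction of u flips."""
    return u + delta

def update_length_direction(u, dr, dv):
    """Update length r = ||u|| and direction v = u / ||u|| separately;
    the length stays positive, so the direction can never flip in one step."""
    r, v = np.linalg.norm(u), u / np.linalg.norm(u)
    r = max(r + dr, 1e-12)             # length clipped to stay positive
    v = v + dv
    return r * v / np.linalg.norm(v)
```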
4 Experiments
The R1-NES algorithm was evaluated on the twelve noise-free unimodal functions in the ‘Black-Box Optimization Benchmarking’ (BBOB) collection from the 2010 GECCO Workshop for Real-Parameter Optimization. In order to make the results comparable to those of other methods, the standard BBOB experimental setup was used, which transforms the pure benchmark functions to make the parameters non-separable (for some) and to avoid trivial optima at the origin.
R1-NES was compared to xNES and SNES on each benchmark over a range of problem dimensions (20 runs for each setup), except for xNES, which was only run up to moderate dimensionality because of its computational cost. Note that xNES serves as a proper baseline, since it is state-of-the-art, achieving performance on par with the popular CMA-ES. The reference machine is an Intel Core i7 processor at 1.6 GHz with 4 GB of RAM.
Fig. 3 shows the results for the eight benchmarks on which R1-NES performs at least as well as the other methods, and often much better. At lower dimensionalities, R1-NES is comparable to xNES in terms of the number of fitness evaluations, indicating that the rank-one parameterization of the search distribution effectively captures the local curvature of the fitness function (see Fig. 5 for an example). However, the time required to compute the update differs drastically between the two algorithms, as depicted in Fig. 4. A typical high-dimensional run of xNES takes hours (hence the truncated xNES curves in all graphs), compared to minutes for R1-NES. As a result, R1-NES can solve these problems in up to 512 dimensions in acceptable time. In particular, the result on the 512-dimensional Rosenbrock function is, to our knowledge, the best to date. We estimate that optimizing the 512-dimensional sphere function with xNES (or any other full-parameterization method, e.g., CMA-ES) would take over a year of computation time on the same reference hardware. It is also worth pointing out that SNES, though it shares a similarly low complexity per function evaluation, can only solve the separable problems (Sphere, Linear, AttractiveSector, and Ellipsoid).
Fig. 6 shows four cases (Ellipsoid, StepEllipsoid, RotatedEllipsoid, and Tablet) for which R1-NES is not suited, highlighting a limitation of the algorithm. Three of the four functions are from the Ellipsoid family, where the fitness functions are variants of the type
$$f(x) = \sum_{i=1}^{d} 10^{6 \frac{i-1}{d-1}}\, x_i^2.$$
The eigenvalues of the Hessian span several orders of magnitude, and a parameterization with a single predominant direction is not enough to approximate the Hessian, resulting in poor performance. The other function on which R1-NES fails is the Tablet function, where all but one eigendirection have large eigenvalues. Since the parameterization of R1-NES allows only a single direction to have a large eigenvalue, the shape of the Hessian cannot be effectively approximated.
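For concreteness, the separable member of this family can be written as follows (a standard BBOB-style formulation with conditioning constant $10^6$; the function name is ours):

```python
import numpy as np

def ellipsoid(x, cond=1e6):
    """Ellipsoid fitness: f(x) = sum_i cond^((i-1)/(d-1)) * x_i^2.
    The Hessian eigenvalues span a factor-of-`cond` range, which a single
    predominant search direction cannot approximate."""
    d = x.shape[0]
    weights = cond ** (np.arange(d) / (d - 1))
    return float(np.sum(weights * x**2))
```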
5 Conclusion and future work
We presented a new black-box optimization algorithm R1-NES that employs a novel parameterization of the search distribution covariance matrix which allows a predominant search direction to be adjusted using the natural gradient with complexity linear in the dimensionality. The algorithm shows excellent performance in a number of high-dimensional non-separable problems that, to date, have not been solved with other parameterizations of similar complexity.
Future work will concentrate on overcoming the limitations of the algorithm (shown in Fig. 6). In particular, we intend to extend the algorithm to a) incorporate multiple search directions, and b) allow each search direction to shrink as well as grow.
This research was funded in part by Swiss National Science Foundation grants 200020-122124, 200020-125038, and EU IM-CLeVeR project (#231722).
-  Y. Akimoto, Y. Nagata, I. Ono, and S. Kobayashi. Bidirectional Relation between CMA Evolution Strategies and Natural Evolution Strategies. In Parallel Problem Solving from Nature (PPSN), 2010.
-  S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
-  T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra, and J. Schmidhuber. Exponential Natural Evolution Strategies. In Genetic and Evolutionary Computation Conference (GECCO), Portland, OR, 2010.
-  N. Hansen and A. Auger. Real-parameter black-box optimization benchmarking 2010: Experimental setup, 2010.
-  N. Hansen and S. Finck. Real-parameter black-box optimization benchmarking 2010: Noiseless functions definitions, 2010.
-  N. Hansen, A. S. P. Niederberger, L. Guzzella, and P. Koumoutsakos. A method for handling uncertainty in evolutionary optimization with an application to feedback control of combustion. Trans. Evol. Comp, 13:180–197, 2009.
-  N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
-  M. Hasenjäger, B. Sendhoff, T. Sonoda, and T. Arima. Three dimensional evolutionary aerodynamic design optimization with CMA-ES. In Proceedings of the 2005 conference on Genetic and evolutionary computation, GECCO ’05, pages 2173–2180, New York, NY, USA, 2005. ACM.
-  M. Jebalia, A. Auger, M. Schoenauer, F. James, and M. Postel. Identification of the isotherm function in chromatography using CMA-ES. In IEEE Congress on Evolutionary Computation, pages 4289–4296, 2007.
-  J. Klockgether and H. P. Schwefel. Two-phase nozzle and hollow core jet experiments. In Proc. 11th Symp. Engineering Aspects of Magnetohydrodynamics, pages 141–148, 1970.
-  K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2008.
-  R. Ros and N. Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity. In R. et al., editor, Parallel Problem Solving from Nature, PPSN X, pages 296–305. Springer, 2008.
-  T. Schaul, T. Glasmachers, and J. Schmidhuber. High Dimensions and Heavy Tails for Natural Evolution Strategies. In Genetic and Evolutionary Computation Conference (GECCO), 2011 (to appear).
-  O. M. Shir and T. Bäck. The second harmonic generation case-study as a gateway for ES to quantum control problems. In Proceedings of the 9th annual conference on Genetic and evolutionary computation, GECCO ’07, pages 713–721, New York, NY, USA, 2007. ACM.
-  Y. Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Stochastic search using the natural gradient. In ICML’09, 2009.
-  D. Wierstra, T. Schaul, J. Peters, and J. Schmidhuber. Natural Evolution Strategies. In Proceedings of the Congress on Evolutionary Computation (CEC08), Hongkong. IEEE Press, 2008.
-  S. Winter, B. Brendel, and C. Igel. Registration of bone structures in 3d ultrasound and ct data: Comparison of different optimization strategies. International Congress Series, 1281:242 – 247, 2005.