1 Introduction
The current approach to designing algorithms is a laborious process. First, the designer must study the problem and devise an algorithm guided by a mixture of intuition, theoretical and/or empirical insight and general design paradigms. She then needs to analyze the algorithm’s performance on prototypical examples and compare it to that of existing algorithms. If the algorithm falls short, she must uncover the underlying cause and find clever ways to overcome the discovered shortcomings. She iterates on this process until she arrives at an algorithm that is superior to existing algorithms. Given the often protracted nature of this process, a natural question to ask is: can we automate it?
In this paper, we focus on automating the design of unconstrained continuous optimization algorithms, which are some of the most powerful and ubiquitous tools used in all areas of science and engineering. Extensive work over the past several decades has yielded many popular methods, like gradient descent, momentum, conjugate gradient and L-BFGS. These algorithms share one commonality: they are all hand-engineered – that is, the steps of these algorithms are carefully designed by human experts. Just as deep learning has achieved tremendous success by automating feature engineering, automating algorithm design could open the way to similar performance gains.
We learn a better optimization algorithm by observing its execution. To this end, we formulate the problem as a reinforcement learning problem. Under this framework, any particular optimization algorithm simply corresponds to a policy. We reward optimization algorithms that converge quickly and penalize those that do not. Learning an optimization algorithm then reduces to finding an optimal policy, which can be solved using any reinforcement learning method. To differentiate the algorithm that performs learning from the algorithm that is learned, we will henceforth refer to the former as the “learning algorithm” or “learner” and the latter as the “autonomous algorithm” or “policy”. We use an off-the-shelf reinforcement learning algorithm known as guided policy search [17], which has demonstrated success in a variety of robotic control settings [18, 10, 19, 12]. We show empirically that the autonomous optimization algorithm we learn converges faster and/or finds better optima than existing hand-engineered optimization algorithms.
2 Related Work
Early work has explored the general theme of speeding up learning with the accumulation of learning experience. This line of work, known as “learning to learn” or “meta-learning” [1, 27, 5, 26], considers the problem of devising methods that can take advantage of knowledge learned on other related tasks to train faster, a problem that is today better known as multi-task learning and transfer learning. In contrast, the proposed method can learn to accelerate the training procedure itself, without necessarily requiring any training on related auxiliary tasks.
A different line of work, known as “programming by demonstration” [7], considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: Liang et al. [20] represent programs explicitly using a formal language, construct a hierarchical Bayesian prior over programs and perform inference using an MCMC sampling procedure, while Graves et al. [11] represent programs implicitly as sequences of memory access operations and train a recurrent neural net to learn the underlying patterns in the memory access operations. Subsequent work proposes variants of this model that use different primitive memory access operations [14], more expressive operations [16, 28] or other non-differentiable operations [30, 29]. Others consider building models that permit parallel execution [15] or training models with stronger supervision in the form of execution traces [23]. The aim of this line of work is to replicate the behaviour of simple existing algorithms from examples, rather than to learn a new algorithm that is better than existing algorithms.
There is a rich body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods [13, 4, 24, 25, 9] rely on sequential model-based Bayesian optimization [22, 6], while others adopt a random search approach [3] or use gradient-based optimization [2, 8, 21]. Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over the space of all possible optimization algorithms. In addition, when presented with a new objective function, hyperparameter optimization needs to conduct multiple trials with different hyperparameter settings to find the optimal hyperparameters. In contrast, once training is complete, the autonomous algorithm knows how to choose hyperparameters on-the-fly without needing to try different hyperparameter settings, even when presented with an objective function that it has not seen during training. To the best of our knowledge, the proposed method represents the first attempt to learn a better algorithm automatically.
3 Method
3.1 Preliminaries
In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps.
A reinforcement learning problem is typically formalized as a Markov decision process (MDP). We consider a finite-horizon MDP with continuous state and action spaces defined by the tuple $(\mathcal{S}, \mathcal{A}, p_0, p, c, \gamma)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ is the set of actions, $p_0$ is the probability density over initial states, $p(s_{t+1} \mid s_t, a_t)$ is the transition probability density, that is, the conditional probability density over successor states given the current state and action, $c: \mathcal{S} \to \mathbb{R}$ is a function that maps state to cost and $\gamma \in (0, 1]$ is the discount factor. The objective is to learn a stochastic policy $\pi(a_t \mid s_t)$, which is a conditional probability density over actions given the current state, such that the expected cumulative cost is minimized. That is, the goal is to find
$$\pi^{*} = \operatorname*{arg\,min}_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{T} \gamma^{t}\, c(s_t)\right],$$
where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density
$$q(s_0, a_0, \ldots, s_T) = p_0(s_0) \prod_{t=0}^{T-1} \pi(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t).$$
This problem of finding the costminimizing policy is known as the policy search problem. To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately.
In many practical settings, the transition probability density $p$, which characterizes the dynamics, is unknown and must therefore be estimated. Additionally, because it is often equally important to minimize cost at earlier and later time steps, we will henceforth focus on the undiscounted setting, i.e. the setting where $\gamma = 1$.

Guided policy search [17] is a method for performing policy search in continuous state and action spaces under possibly unknown dynamics. It works by alternating between computing a target distribution over trajectories that is encouraged to minimize cost and agree with the current policy, and learning the parameters of the policy in a standard supervised fashion so that sample trajectories from executing the policy are close to sample trajectories drawn from the target distribution. The target trajectory distribution is computed by iteratively fitting local time-varying linear and quadratic approximations to the (estimated) dynamics and cost respectively, and optimizing over a restricted class of linear-Gaussian policies subject to a trust region constraint; this subproblem can be solved efficiently in closed form using a dynamic programming algorithm known as linear-quadratic-Gaussian (LQG). We refer interested readers to [17] for details.
3.2 Formulation
Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location $x^{(0)}$ in the domain of the objective function $f$, the algorithm iteratively updates the current location $x^{(i)}$ by a step vector $\Delta x$ computed from some functional $\pi$ of the objective function, the current location and past locations.

This framework subsumes all existing optimization algorithms. Different optimization algorithms differ in the choice of $\pi$. First-order methods use a $\pi$ that depends only on the gradient of the objective function, whereas second-order methods use a $\pi$ that depends on both the gradient and the Hessian of the objective function. In particular, the following choice of $\pi$ yields the gradient descent method:
$$\pi\!\left(f, \left\{x^{(0)}, \ldots, x^{(i-1)}\right\}\right) = -\gamma \nabla f\!\left(x^{(i-1)}\right),$$
where $\gamma$ denotes the step size or learning rate. Similarly, the following choice of $\pi$ yields the gradient descent method with momentum:
$$\pi\!\left(f, \left\{x^{(0)}, \ldots, x^{(i-1)}\right\}\right) = -\gamma \sum_{j=0}^{i-1} \alpha^{\,i-1-j}\, \nabla f\!\left(x^{(j)}\right),$$
where again $\gamma$ denotes the step size and $\alpha$ denotes the momentum decay factor.
Therefore, if we can learn $\pi$, we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice we restrict the dependence of $\pi$ on the objective function to the objective values and gradients evaluated at the current and past locations. Hence, $\pi$ can simply be modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector.
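To make this framework concrete, the following minimal Python sketch implements the generic optimizer loop with a pluggable step functional $\pi$, instantiated with gradient descent and momentum. The function names and the toy quadratic objective are illustrative choices of ours, not from the paper.

```python
import numpy as np

def run_optimizer(pi, f, grad_f, x0, num_iters):
    """Generic optimizer loop: iteratively apply the step functional pi."""
    x = x0.copy()
    history = [(x.copy(), f(x), grad_f(x))]  # trajectory seen so far
    for _ in range(num_iters):
        step = pi(history)   # pi maps the trajectory so far to a step vector
        x = x + step         # location update: x <- x + Delta x
        history.append((x.copy(), f(x), grad_f(x)))
    return x

def gradient_descent_pi(step_size):
    # pi depends only on the most recent gradient
    return lambda history: -step_size * history[-1][2]

def momentum_pi(step_size, decay):
    # pi is an exponentially decayed sum of all past gradients
    def pi(history):
        grads = [g for _, _, g in history]
        return -step_size * sum(decay ** (len(grads) - 1 - j) * g
                                for j, g in enumerate(grads))
    return pi

# Illustrative use on a toy quadratic objective
f = lambda x: 0.5 * np.sum(x ** 2)
grad_f = lambda x: x
x_final = run_optimizer(momentum_pi(0.1, 0.9), f, grad_f, np.ones(3), 100)
```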
We observe that the execution of an optimization algorithm can be viewed as the execution of a fixed policy in an MDP: the state consists of the current location and the objective values and gradients evaluated at the current and past locations, the action is the step vector that is used to update the current location, and the transition probability is partially characterized by the location update formula, $x^{(i)} \leftarrow x^{(i-1)} + \Delta x$. The policy that is executed corresponds precisely to the choice of $\pi$ used by the optimization algorithm. For this reason, we will also use $\pi$ to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over all possible first-order optimization algorithms.
We can use reinforcement learning to learn the policy . To do so, we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution. Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define cost at a state to be the objective value at the current location. This encourages the policy to reach the minimum of the objective function as quickly as possible.
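As a sketch of the resulting cost structure (reusing the trajectory-to-step policy interface from the earlier snippet), the cumulative undiscounted cost of a rollout is simply the sum of objective values along the optimizer's path, so slow convergence is penalized at every step:

```python
import numpy as np

def rollout_cost(pi, f, grad_f, x0, horizon):
    """Cumulative undiscounted cost of executing policy pi: the cost at each
    state is the objective value at the current location, so a policy is
    rewarded for driving f(x) down as quickly as possible."""
    x = x0.copy()
    history = [(x.copy(), f(x), grad_f(x))]
    total_cost = f(x)                      # cost at the initial state
    for _ in range(horizon):
        step = pi(history)                 # action: the step vector
        x = x + step                       # deterministic part of transition
        history.append((x.copy(), f(x), grad_f(x)))
        total_cost += f(x)                 # accumulate the objective value
    return total_cost
```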
Since the policy $\pi$ may be stochastic in general, we model each dimension of the action conditional on the state as an independent Gaussian whose mean is given by a regression model and whose variance is a learned constant. We choose to parameterize the mean of $\pi$ using a neural net, due to its appealing properties as a universal function approximator and its strong empirical performance in a variety of applications. We use guided policy search to learn the parameters of the policy. We use a training set consisting of different randomly generated objective functions and evaluate the resulting autonomous algorithm on different objective functions drawn from the same distribution.
3.3 Discussion
An autonomous optimization algorithm offers several advantages over hand-engineered algorithms. First, an autonomous optimizer is trained on real algorithm execution data, whereas hand-engineered optimizers are typically derived by analyzing objective functions with properties that may or may not be satisfied by objective functions that arise in practice. Hence, an autonomous optimizer minimizes the number of a priori assumptions made about objective functions and can instead take full advantage of the information about the actual objective functions of interest. Second, an autonomous optimizer has no hyperparameters that need to be tuned by the user. Instead of just computing a step direction which must then be combined with a user-specified step size, an autonomous optimizer predicts the step direction and size jointly. This allows the autonomous optimizer to dynamically adjust the step size based on the information it has acquired about the objective function while performing the optimization. Finally, when an autonomous optimizer is trained on a particular class of objective functions, it may be able to discover hidden structure in the geometry of the class of objective functions. At test time, it can then exploit this knowledge to perform optimization faster.
3.4 Implementation Details
We store the current location, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous $H$ time steps, where $H$ is a constant that is fixed in our experiments. More specifically, the dimensions of the state space encode the following information:

- Current location in the domain
- Change in the objective value at the current location relative to the objective value at the $i$-th most recent location, for each $i \in \{2, \ldots, H+1\}$
- Gradient of the objective function evaluated at the $i$-th most recent location, for each $i \in \{1, \ldots, H+1\}$
Initially, we set the dimensions corresponding to historical information to zero. The current location is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current location, we exclude it from the input that is fed into the neural net.
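A sketch of how such an observation vector might be assembled follows; the helper name and the exact index ranges are our reading of the description above, not the authors' code:

```python
import numpy as np

def make_observation(values, grads, H):
    """Features fed to the policy network: objective-value changes relative
    to past locations and gradients at the current and past locations.
    Slots with no history yet are zero-filled; the absolute location is
    excluded from the network input."""
    d = grads[-1].shape[0]
    obs = []
    for i in range(2, H + 2):   # changes w.r.t. the i-th most recent location
        delta = values[-1] - values[-i] if len(values) >= i else 0.0
        obs.append(np.array([delta]))
    for i in range(1, H + 2):   # gradients at current and past locations
        g = grads[-i] if len(grads) >= i else np.zeros(d)
        obs.append(g)
    return np.concatenate(obs)
```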
We use a small neural net to model the policy. Its architecture consists of a single hidden layer with 50 hidden units. Softplus activation units are used in the hidden layer and linear activation units are used in the output layer. The training objective imposed by guided policy search takes the form of the squared Mahalanobis distance between mean predicted and target actions along with other terms dependent on the variance of the policy. We also regularize the entropy of the policy to encourage deterministic actions conditioned on the state. The coefficient on the regularizer increases gradually in later iterations of guided policy search. We initialize the weights of the neural net randomly and do not regularize the magnitude of weights.
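For concreteness, here is a minimal sketch of the described policy parameterization: a one-hidden-layer net with 50 softplus units and linear outputs for the mean action, plus an independent, state-independent learned variance per action dimension. The class interface and initialization scheme are assumptions.

```python
import numpy as np

def softplus(z):
    # numerically stable log(1 + exp(z))
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0)

class GaussianPolicy:
    """Gaussian policy whose mean is a small neural net (sketch)."""
    def __init__(self, obs_dim, act_dim, hidden=50, rng=np.random):
        self.W1 = rng.randn(hidden, obs_dim) * 0.1
        self.b1 = np.zeros(hidden)
        self.W2 = rng.randn(act_dim, hidden) * 0.1
        self.b2 = np.zeros(act_dim)
        self.log_std = np.zeros(act_dim)   # learned constant variance

    def mean(self, obs):
        return self.W2 @ softplus(self.W1 @ obs + self.b1) + self.b2

    def sample(self, obs, rng=np.random):
        return self.mean(obs) + np.exp(self.log_std) * rng.randn(len(self.b2))
```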
Initially, we set the target trajectory distribution so that the mean action given state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting.
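A sketch of this per-objective grid search follows; the candidate grids shown are illustrative choices, as the actual grids are not specified here:

```python
import numpy as np
from itertools import product

def best_momentum_settings(f, grad_f, x0, step_sizes, decays, T=40):
    """Grid search over step size and momentum decay for one training
    objective; the best setting initializes the target trajectory
    distribution."""
    best, best_val = None, np.inf
    for gamma, alpha in product(step_sizes, decays):
        x, v = x0.copy(), np.zeros_like(x0)
        for _ in range(T):                 # noiseless gradient descent
            v = alpha * v + grad_f(x)      # with momentum
            x = x - gamma * v
        if f(x) < best_val:
            best, best_val = (gamma, alpha), f(x)
    return best

# Illustrative use on a toy quadratic
f = lambda x: 0.5 * np.sum(x ** 2)
grad_f = lambda x: x
gamma, alpha = best_momentum_settings(f, grad_f, np.ones(3),
                                      step_sizes=[0.01, 0.03, 0.1, 0.3],
                                      decays=[0.0, 0.5, 0.9, 0.95])
```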
For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.
4 Experiments
We learn autonomous optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We first learn an autonomous optimizer for logistic regression, which induces a convex loss function. We then learn an autonomous optimizer for robust linear regression using the Geman-McClure M-estimator, whose loss function is non-convex. Finally, we learn an autonomous optimizer for a two-layer neural net classifier with ReLU activation units, whose error surface has even more complex geometry.
4.1 Logistic Regression
We consider a logistic regression model with an $\ell_2$ regularizer on the weight vector. Training the model requires optimizing the following objective:
$$\min_{w, b}\; \frac{1}{n} \sum_{i=1}^{n} -y_i \log \sigma\!\left(w^{T} x_i + b\right) - (1 - y_i) \log\!\left(1 - \sigma\!\left(w^{T} x_i + b\right)\right) + \frac{\lambda}{2} \lVert w \rVert_2^2,$$
where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$ denote the weight vector and bias respectively, $x_i \in \mathbb{R}^d$ and $y_i \in \{0, 1\}$ denote the feature vector and label of the $i$-th instance, $\lambda$ denotes the coefficient on the regularizer and $\sigma(z) = \frac{1}{1 + e^{-z}}$. For our experiments, we use fixed values of $\lambda$ and $d$. This objective is convex in $w$ and $b$.
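As a reference implementation of this objective and its gradient (a sketch; the numerically stable form of $\log \sigma$ and the function names are our choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_objective(w, b, X, y, lam):
    """Cross-entropy loss with an l2 penalty on w, matching the objective
    above."""
    z = X @ w + b
    # log sigma(z) = -log(1 + exp(-z)), computed stably
    log_sigma = -(np.maximum(-z, 0) + np.log1p(np.exp(-np.abs(z))))
    nll = -np.mean(y * log_sigma + (1 - y) * (log_sigma - z))
    return nll + 0.5 * lam * np.dot(w, w)

def logistic_gradient(w, b, X, y, lam):
    """Gradient with respect to (w, b)."""
    r = sigmoid(X @ w + b) - y             # per-instance residuals
    return X.T @ r / len(y) + lam * w, np.mean(r)
```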
We train an autonomous algorithm that learns to optimize objectives of this form. The training set consists of examples of such objective functions whose free variables, which in this case are the $x_i$'s and $y_i$'s, are all assigned concrete values. Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset.
To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels.
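A sketch of this data generator follows; the particular construction of the random covariance matrix is an assumption:

```python
import numpy as np

def make_classification_dataset(d, n=100, rng=np.random):
    """Two random Gaussian components, half the instances drawn from each,
    with labels 0 and 1 assigned by component."""
    X, y = [], []
    for label in (0, 1):
        mean = rng.randn(d)
        A = rng.randn(d, d)
        cov = A @ A.T + np.eye(d)          # random PSD covariance (assumed form)
        X.append(rng.multivariate_normal(mean, cov, n // 2))
        y.append(np.full(n // 2, label))
    return np.vstack(X), np.concatenate(y)
```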
We train the autonomous algorithm on a set of such objective functions and evaluate it on a held-out test set of random objective functions generated using the same procedure. We compare to popular hand-engineered algorithms: gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set.
For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing algorithms at every iteration, a quantity we will refer to as “the margin of victory”. This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1(a), we plot the mean margin of victory of each algorithm at each iteration, averaged over all objective functions in the test set. We find that conjugate gradient and L-BFGS diverge or oscillate in rare cases (on 6% of the objective functions in the test set), even though the autonomous algorithm, gradient descent and momentum do not. To reflect the performance of these baselines in the majority of cases, we exclude the offending objective functions when computing the mean margin of victory.
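A sketch of how this metric can be computed from per-iteration objective-value curves (the function name and data layout are ours):

```python
import numpy as np

def margin_of_victory(curves):
    """curves: dict mapping algorithm name to an array of objective values,
    one per iteration, on a single test function. Returns, per algorithm,
    the best competing value minus its own value at each iteration; since
    lower objective values are better, positive means currently the best."""
    names = list(curves)
    stacked = np.stack([curves[n] for n in names])    # shape (num_algs, T)
    margins = {}
    for k, n in enumerate(names):
        others = np.delete(stacked, k, axis=0)        # all competitors
        margins[n] = others.min(axis=0) - stacked[k]
    return margins
```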
As Figure 1(a) shows, the autonomous algorithm outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory of the autonomous algorithm is quite high in early iterations, indicating that it converges much faster than the other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the autonomous algorithm is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than the autonomous algorithm and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex optimization problems.
We show the performance of each algorithm on two objective functions from the test set in Figures 1(b) and 1(c). In Figure 1(b), the autonomous algorithm converges faster than all other algorithms. In Figure 1(c), the autonomous algorithm initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than the remaining optimizers. It eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher.
4.2 Robust Linear Regression
Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective:
$$\min_{w, b}\; \frac{1}{n} \sum_{i=1}^{n} \frac{\left(y_i - w^{T} x_i - b\right)^2}{c^2 + \left(y_i - w^{T} x_i - b\right)^2},$$
where $w \in \mathbb{R}^d$ and $b \in \mathbb{R}$ denote the weight vector and bias respectively, $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ denote the feature vector and label of the $i$-th instance and $c$ is a constant that modulates the shape of the loss function. For our experiments, we use fixed values of $c$ and $d$. This loss function is not convex in either $w$ or $b$.
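For reference, a sketch of this objective and its gradient (the mean over instances matches the objective above; the derivative $\frac{d}{dr}\frac{r^2}{c^2+r^2} = \frac{2c^2 r}{(c^2+r^2)^2}$ is used below):

```python
import numpy as np

def geman_mcclure_objective(w, b, X, y, c):
    """Geman-McClure robust regression loss as defined above."""
    r = y - X @ w - b                      # residuals
    return np.mean(r ** 2 / (c ** 2 + r ** 2))

def geman_mcclure_gradient(w, b, X, y, c):
    """Gradient with respect to (w, b) via the chain rule:
    dr/dw = -x_i and dr/db = -1."""
    r = y - X @ w - b
    g = 2 * c ** 2 * r / (c ** 2 + r ** 2) ** 2 / len(y)
    return -X.T @ g, -np.sum(g)
```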
As in the preceding section, each objective function in the training set is a function of the above form with realized values for the $x_i$'s and $y_i$'s. The dataset for each objective function is generated by drawing 25 random samples from each of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias and perturbing them with i.i.d. Gaussian noise.
The autonomous algorithm is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above. As shown in Figure 2(a), the autonomous algorithm outperforms all hand-engineered algorithms except at early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not initially make progress as quickly as the momentum method. However, after around 30 iterations, it closes the gap and surpasses the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by the non-convexity of the objective functions.
Figures 2(b) and 2(c) show performance on two objective functions from the test set. In Figure 2(b), the autonomous optimizer not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2(c), the autonomous algorithm converges the fastest and avoids most of the oscillations that hamper gradient descent and momentum after reaching the optimum.
4.3 Neural Net Classifier
Finally, we train an autonomous algorithm to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with $\ell_2$ regularization on the weights. To train the model, we need to optimize the following objective:
$$\min_{W, U, b, c}\; \frac{1}{n} \sum_{i=1}^{n} -\log\!\left(\frac{\exp\!\left(\left(U \max\!\left(W x_i + b,\, 0\right) + c\right)_{y_i}\right)}{\sum_{j} \exp\!\left(\left(U \max\!\left(W x_i + b,\, 0\right) + c\right)_{j}\right)}\right) + \frac{\lambda}{2}\left(\lVert W \rVert_F^2 + \lVert U \rVert_F^2\right),$$
where $W$ and $U$ denote the first-layer and second-layer weights, $b$ and $c$ denote the first-layer and second-layer biases, $x_i$ and $y_i$ denote the input and target class label of the $i$-th instance, $\lambda$ denotes the coefficient on the regularizers and $(z)_j$ denotes the $j$-th component of $z$. For our experiments, we use a fixed value of $\lambda$. The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem.
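A sketch of this objective as code (shapes and the stable log-softmax are our choices; it matches the formula above under those assumptions):

```python
import numpy as np

def neural_net_objective(W, U, b, c, X, y, lam):
    """Two-layer ReLU net with softmax outputs, cross-entropy loss and l2
    penalties on the weight matrices. y holds integer class labels."""
    H = np.maximum(X @ W.T + b, 0)                 # hidden ReLU activations
    logits = H @ U.T + c
    logits -= logits.max(axis=1, keepdims=True)    # stable log-softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -np.mean(log_probs[np.arange(len(y)), y])
    return nll + 0.5 * lam * (np.sum(W ** 2) + np.sum(U ** 2))
```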
The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by sampling 25 points from each of four multivariate Gaussians with random means and covariances. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure that not all of the points in a dataset are assigned the same label.
We evaluate the autonomous algorithm in the same manner as above. As shown in Figure 3(a), the autonomous algorithm significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margins of victory of the autonomous optimizer and the momentum method, the autonomous optimizer is able to reach much better optima and is less prone to getting trapped in local optima than the other methods. This gap is also larger than those exhibited in the previous sections, suggesting that hand-engineered algorithms are more suboptimal on challenging optimization problems, and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge.
Performance on two example objective functions from the test set is shown in Figures 3(b) and 3(c). As shown, the autonomous optimizer is able to reach better optima than all other methods and largely avoids the oscillations that the other methods suffer from.
5 Conclusion
We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained autonomous optimizers for different classes of convex and non-convex objective functions. We demonstrated that the autonomous optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope autonomous optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of innovation in science and engineering.
References
 [1] Jonathan Baxter, Rich Caruana, Tom Mitchell, Lorien Y Pratt, Daniel L Silver, and Sebastian Thrun. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 20151205.
 [2] Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889–1900, 2000.
 [3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
 [4] James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
 [5] Pavel Brazdil, Christophe Giraud-Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: Applications to Data Mining. Springer Science & Business Media, 2008.
 [6] Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
 [7] Allen Cypher and Daniel Conrad Halbert. Watch what I do: programming by demonstration. MIT press, 1993.
 [8] Justin Domke. Generic methods for optimizationbased modeling. In AISTATS, volume 22, pages 318–326, 2012.
 [9] Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In AAAI, pages 1128–1135, 2015.
 [10] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.
 [11] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
 [12] Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multistep controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015.
 [13] Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pages 507–523. Springer, 2011.
 [14] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in Neural Information Processing Systems, pages 190–198, 2015.
 [15] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.
 [16] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.

 [17] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
 [18] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015.
 [19] Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. arXiv preprint arXiv:1501.05611, 2015.
 [20] Percy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 639–646, 2010.
 [21] Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.
 [22] Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.
 [23] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
 [24] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
 [25] Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004–2012, 2013.
 [26] Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
 [27] Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.
 [28] Greg Yang. Lie access neural Turing machine. arXiv preprint arXiv:1602.08671, 2016.
 [29] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. arXiv preprint arXiv:1511.07275, 2015.
 [30] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.