1 Introduction
In many learning tasks we face uncertainty about the loss we aim to optimize. Consider, for example, a classification task such as character recognition that is required to perform well under various types of distortion. In some environments, such as recognizing characters in photos, the classifier must handle rotation and patterned backgrounds. In a different environment, such as low-resolution images, it is more likely to encounter noisy pixelation artifacts. Instead of training a separate classifier for each possible scenario, one seeks to optimize performance in the worst case over the different forms of corruption (or combinations thereof) made available to the trainer as black boxes.
More generally, our goal is to find a minimax solution that optimizes in the worst case over a given family of functions. Even if each individual function can be optimized effectively, it is not clear that such solutions would perform well in the worst case. In many cases of interest, individual objectives are nonconvex and hence state-of-the-art methods are only approximate. In Bayesian optimization, where one must optimize a distribution over loss functions, approximate optimization is often straightforward, since families of loss functions are commonly closed under convex combination. Can approximately optimal solutions yield an approximately optimal robust solution?
In this paper we develop a reduction from robust optimization to Bayesian optimization. Given an approximate oracle for Bayesian optimization, we show how to implement an approximate solution for robust optimization under a necessary extension of the solution space, and illustrate its effectiveness in applications.
Main Results.
Given an approximate Bayesian oracle for distributions over (potentially nonconvex) loss functions, we show how to solve approximate robust optimization in a convexified solution space. This outcome is “improper” in the sense that it may lie outside the original solution space, if that space is nonconvex; it can be interpreted as computing a distribution over solutions. We show that the relaxation to improper learning is necessary in general: it is NP-hard to achieve robust optimization with respect to the original outcome space, even if Bayesian optimization can be solved exactly, and even if there are only polynomially many loss functions. We complement this by showing that in any statistical learning scenario where the loss is convex in the predicted dependent variable, we can find a single (deterministic) solution with matching performance guarantees.
Technical overview.
Our approach runs no-regret dynamics on a zero-sum game, played between a learner equipped with an approximate Bayesian oracle and an adversary who aims to find a distribution over loss functions that maximizes the learner’s loss. These dynamics converge to an approximately robust solution, in which the learner and adversary settle upon an approximate minimax solution. This convergence is subject to an additive regret term that vanishes at a rate of $O(\sqrt{\log(m)/T})$ over $T$ rounds of the learning dynamics, where $m$ is the number of loss functions.
Applications.
We illustrate the power of our reduction through two main examples. We first consider statistical learning via neural networks. Given an arbitrary training method, our reduction generates a net that optimizes robustly over a given class of loss functions. We evaluate our method experimentally on a character recognition task, where the loss functions correspond to different corruption models made available to the learner as black boxes. We verify experimentally that our approach significantly outperforms various baselines, including optimizing for average performance and optimizing for each loss separately. We also apply our reduction to influence maximization, where the goal is to maximize a concave function (the independent cascade model of influence [10]) over a nonconvex space (subsets of vertices in a network). Previous work has studied robust influence maximization directly [8, 4, 13], focusing on particular, natural classes of functions (e.g., edge weights chosen within a given range) and establishing hardness and approximation results. In comparison, our method is agnostic to the particular class of functions, and achieves a strong approximation result by returning a distribution over solutions. We evaluate our method on real and synthetic datasets, with the goal of robustly optimizing a suite of random influence instantiations. We verify experimentally that our approach significantly outperforms natural baselines.
Related work.
There has recently been a great deal of interest in robust optimization in machine learning [17, 3, 14, 18]. For continuous optimization, the work closest to ours is perhaps that of Shalev-Shwartz and Wexler [17] and Namkoong and Duchi [14], who use robust optimization to train against convex loss functions. The main difference is that we assume a more general setting in which the loss functions are nonconvex and one is only given access to a Bayesian oracle. Hence, the proof techniques and general results from these papers do not apply to our setting. We note that our result generalizes these works, which can be considered the special case in which we have a distributional oracle whose approximation is optimal. In submodular optimization there has also been a great deal of interest in robust optimization [11, 8, 4]. The work closest to ours is that of He and Kempe [8], who consider a slightly different objective than ours; their results apply to influence maximization but do not extend to general submodular functions. Finally, we note that unlike recent work on nonconvex optimization [6, 1, 7], our goal in this paper is not to optimize a nonconvex function. Rather, we abstract the nonconvexity guarantees via the approximate Bayesian oracle.
2 Robust Optimization with Approximate Bayesian Oracles
We consider the following model of optimization that is robust to objective uncertainty. There is a solution space $X$ over which to optimize, and a finite set of loss functions $\mathcal{L} = \{L_1, \dots, L_m\}$, where each $L_i$ is a function from $X$ to $[0, 1]$.^1 (^1: We describe an extension to infinite sets of loss functions in Appendix B. Our results also extend naturally to the goal of maximizing the minimum of a class of reward functions.) Intuitively, our goal is to find some $x \in X$ that achieves low loss in the worst case over the loss functions in $\mathcal{L}$. For $x \in X$, write $g(x) = \max_{L \in \mathcal{L}} L(x)$ for the worst-case loss of $x$. The minimax optimum $\tau$ is given by
$\tau = \min_{x \in X} \max_{L \in \mathcal{L}} L(x)$.  (1)
The goal of $\alpha$-approximate robust optimization is to find $x \in X$ such that $g(x) = \max_{L \in \mathcal{L}} L(x) \le \alpha \tau$.
Given a distribution $P$ over solutions in $X$, write $g(P) = \max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)]$ for the worst-case expected loss of a solution drawn from $P$. A weaker version of robust approximation is improper robust optimization: find a distribution $P$ over $X$ such that $g(P) \le \alpha \tau$.
Our results take the form of reductions to an $\alpha$-approximate Bayesian oracle, which finds a solution that approximately minimizes a given distribution over loss functions.^2 (^2: All our results easily extend to the case where the oracle computes a solution that is approximately optimal up to an additive error, rather than a multiplicative one. For simplicity of exposition we present the multiplicative-error case, as it is more in line with the literature on approximation algorithms.)
Definition 1 (Approximate Bayesian Oracle).
Given a distribution $w$ over $\mathcal{L}$, an $\alpha$-approximate Bayesian oracle $M$ computes $x^* = M(w)$ such that
$\mathbb{E}_{L \sim w}[L(x^*)] \le \alpha \min_{x \in X} \mathbb{E}_{L \sim w}[L(x)]$.  (2)
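For intuition, an exact oracle ($\alpha = 1$) over a small finite instance can be implemented by brute force. The following sketch is illustrative only; the solution space and loss functions are hypothetical stand-ins, not objects from the paper:

```python
def exact_bayesian_oracle(solutions, losses, w):
    """Exact Bayesian oracle (alpha = 1): return the solution minimizing
    the w-weighted expected loss sum_i w[i] * L_i(x)."""
    return min(solutions,
               key=lambda x: sum(wi * L(x) for wi, L in zip(w, losses)))

def worst_case_loss(x, losses):
    """g(x) = max_i L_i(x), the robust objective."""
    return max(L(x) for L in losses)
```

Such a brute-force oracle is only feasible for tiny instances; the point of the reduction is that any $\alpha$-approximate oracle, however implemented, suffices.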
2.1 Improper Robust Optimization with Oracles
We first show that, given access to an approximate distributional oracle, it is possible to efficiently implement improper approximate robust optimization, subject to a vanishing additive loss term.
Theorem 1.
Given access to an $\alpha$-approximate Bayesian oracle, Algorithm 1 computes a distribution $P$ over solutions, defined as a uniform distribution over a set $\{x_1, \dots, x_T\}$, such that
$\max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)] \le \alpha \tau + \epsilon(T)$,  (3)
where $\epsilon(T) \to 0$ is the average regret of the adversary's no-regret algorithm after $T$ rounds. An analogous guarantee, with bound $\alpha \tau - \epsilon(T)$, holds for the reward-maximization version of the problem.
Proof.
We give the proof of the first result and defer the second result to Theorem 6 in Appendix A. We can interpret Algorithm 1 in the following way. We define a zero-sum game between a learner and an adversary. The learner’s action set is $X$ and the adversary’s action set is $\mathcal{L}$. The loss of the learner when he picks $x \in X$ and the adversary picks $L \in \mathcal{L}$ is defined as $L(x)$; the corresponding payoff of the adversary is $L(x)$.
We will run no-regret dynamics on this zero-sum game, where at every iteration $t = 1, \dots, T$, the adversary picks a distribution $w_t$ over functions and subsequently the learner picks a solution $x_t$. For simpler notation, we will denote by $w_t \in \Delta(\mathcal{L})$ the probability density function associated with the distribution of the adversary at iteration $t$; that is, $w_t[i]$ is the probability of picking function $L_i$. The adversary picks $w_t$ based on some arbitrary no-regret learning algorithm on the actions in $\mathcal{L}$. For concreteness, consider the case where the adversary picks a distribution based on the multiplicative weight updates algorithm, i.e.,
$w_{t+1}[i] \propto w_t[i] \cdot e^{\eta L_i(x_t)}$.  (6)
Subsequently the learner picks the solution $x_t$ that is the output of the $\alpha$-approximate distributional oracle on the distribution selected by the adversary at time step $t$. That is,
$x_t = M(w_t)$.  (7)
Write $\epsilon(T)$ for the average regret of the adversary's algorithm after $T$ rounds; for multiplicative weight updates, $\epsilon(T) = O(\sqrt{\log(m)/T})$. By the guarantees of the no-regret algorithm for the adversary, we have that
$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{L \sim w_t}[L(x_t)] \ge \max_{L \in \mathcal{L}} \frac{1}{T} \sum_{t=1}^{T} L(x_t) - \epsilon(T)$.  (8)
Combining the above with the guarantee of the distributional oracle we have
$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{L \sim w_t}[L(x_t)] \le \frac{\alpha}{T} \sum_{t=1}^{T} \min_{x \in X} \mathbb{E}_{L \sim w_t}[L(x)]$  (By oracle guarantee for each $t$)
$\le \alpha \min_{x \in X} \max_{L \in \mathcal{L}} L(x) = \alpha \tau$.
Combining with (8), we obtain $\max_{L \in \mathcal{L}} \frac{1}{T} \sum_{t=1}^{T} L(x_t) \le \alpha \tau + \epsilon(T)$  (By no-regret of adversary).
Thus, if we define $P$ to be the uniform distribution over $\{x_1, \dots, x_T\}$, then we have derived
$\max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)] \le \alpha \tau + \epsilon(T)$,  (9)
as required. ∎
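The dynamics in the proof can be sketched in a few lines. The following toy implementation pairs a multiplicative-weights adversary with a brute-force exact oracle; it is a minimal illustration under assumed inputs, not the paper's Algorithm 1 verbatim:

```python
import math

def exact_oracle(solutions, losses, w):
    # Stand-in for the alpha-approximate Bayesian oracle (here alpha = 1).
    return min(solutions,
               key=lambda x: sum(wi * L(x) for wi, L in zip(w, losses)))

def robust_optimize(solutions, losses, T=200, eta=0.1):
    """Multiplicative-weights adversary vs. oracle learner.  Returns
    x_1..x_T; the improper solution is the uniform distribution over them."""
    weights = [1.0] * len(losses)
    xs = []
    for _ in range(T):
        total = sum(weights)
        w = [wi / total for wi in weights]          # adversary's w_t
        x = exact_oracle(solutions, losses, w)      # learner's x_t = M(w_t)
        xs.append(x)
        # upweight the losses on which the learner did badly (Eq. (6))
        weights = [wi * math.exp(eta * L(x)) for wi, L in zip(weights, losses)]
    return xs
```

On a two-point instance where each solution is good for exactly one of two losses, the returned sequence alternates between the two solutions, so the uniform distribution over it attains worst-case expected loss near the minimax value $1/2$, while every pure solution has worst-case loss $1$.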
A corollary of Theorem 1 is that if the solution space $X$ is convex and the objective functions are all convex, then we can compute a single solution that is approximately minimax optimal. Of course, in this setting one can calculate and optimize the maximum loss directly in time proportional to $|\mathcal{L}|$; this result therefore has the most bite when the set of functions is large.
Corollary 2.
If $X$ is a convex space and each loss function $L \in \mathcal{L}$ is convex, then the point $\bar{x} = \frac{1}{T} \sum_{t=1}^{T} x_t$, where $x_1, \dots, x_T$ are the output of Algorithm 1, satisfies:
$\max_{L \in \mathcal{L}} L(\bar{x}) \le \alpha \tau + \epsilon(T)$.  (10)
Proof.
By Theorem 1, we get that if $P$ is the uniform distribution over $\{x_1, \dots, x_T\}$ then $\max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)] \le \alpha \tau + \epsilon(T)$.
Since $X$ is convex, the solution $\bar{x} = \frac{1}{T} \sum_{t=1}^{T} x_t$ is also part of $X$. Moreover, since each $L$ is convex, Jensen's inequality gives $L(\bar{x}) \le \mathbb{E}_{x \sim P}[L(x)]$. We therefore conclude $\max_{L \in \mathcal{L}} L(\bar{x}) \le \alpha \tau + \epsilon(T)$,
as required. ∎
2.2 Robust Statistical Learning
Next we apply our main theorem to statistical learning. Consider regression or classification settings where data points are pairs $(z, y)$, where $z$ is a vector of features and $y$ is the dependent variable. The solution space is then a space of hypotheses $\mathcal{H}$, with each $h \in \mathcal{H}$ a function from feature vectors to predicted dependent variables. We also assume that the space of predictions is a convex subset of a finite-dimensional vector space. We are given a set of loss functions $\mathcal{L}$, where each $L \in \mathcal{L}$ is a functional from $\mathcal{H}$ to $[0, 1]$. Theorem 1 implies that, given an approximate Bayesian optimization oracle, we can compute a distribution over hypotheses from $\mathcal{H}$ that achieves an approximate minimax guarantee. If the loss functionals are convex over hypotheses, then we can compute a single ensemble hypothesis (possibly from a larger space of hypotheses, if $\mathcal{H}$ is nonconvex) that achieves this guarantee.
Theorem 3.
Suppose that the losses in $\mathcal{L}$ are convex functionals over $\mathcal{H}$. Then the ensemble hypothesis $\bar{h} = \frac{1}{T} \sum_{t=1}^{T} h_t$, where $h_1, \dots, h_T$ are the hypotheses output by Algorithm 1 given an $\alpha$-approximate Bayesian oracle, satisfies
$\max_{L \in \mathcal{L}} L(\bar{h}) \le \alpha \tau + \epsilon(T)$.  (11)
Proof.
The proof is similar to the proof of Corollary 2. ∎
We emphasize that the convexity condition in Theorem 3 is over the class of hypotheses, rather than over features or any natural parameterization of $\mathcal{H}$ (such as weights in a neural network). This is a mild condition that applies to many examples in statistical learning theory. For instance, consider the case where each loss $L_i$ is the expected value of some ex-post loss function $\ell_i$, given a distribution $\mathcal{D}_i$ over data points $(z, y)$:
$L_i(h) = \mathbb{E}_{(z, y) \sim \mathcal{D}_i}[\ell_i(h(z), y)]$.  (12)
In this case, it is enough for the function $\ell_i$ to be convex with respect to its first argument (i.e., the predicted dependent variable). This is satisfied by most loss functions used in machine learning, such as the multinomial logistic (cross-entropy) loss from multiclass classification, the hinge loss, and the squared loss used in regression. For all these settings, Theorem 3 provides a tool for improper robust learning, where the final hypothesis is an ensemble of base hypotheses from $\mathcal{H}$. Again, the underlying optimization problem can be arbitrarily nonconvex in the natural parameters of the hypothesis space; in Section 3.1 we will show how to apply this approach to robust training of neural networks, where the Bayesian oracle is simply a standard network training method. For neural networks, the fact that we achieve improper learning (as opposed to standard learning) corresponds to training a neural network with a single extra layer relative to the networks generated by the oracle.
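The ensemble step in Theorem 3 is just prediction averaging, and the convexity condition is what lets Jensen's inequality transfer the distributional guarantee to the single averaged hypothesis. A minimal sketch with squared loss (all names here are illustrative):

```python
def ensemble(hypotheses):
    """The improper hypothesis: average the base predictions
    (for a neural net, one extra averaging layer)."""
    return lambda z: sum(h(z) for h in hypotheses) / len(hypotheses)

def sq_loss(h, data):
    """Squared loss, convex in the prediction h(z)."""
    return sum((h(z) - y) ** 2 for z, y in data) / len(data)
```

By convexity, `sq_loss(ensemble(hs), data)` is never larger than the average of the individual losses, which is exactly what the proof exploits.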
2.3 Robust Submodular Maximization
In robust submodular maximization we are given a family of reward functions $\mathcal{F} = \{f_1, \dots, f_m\}$, where each $f_i$ is a function from subsets of a ground set $U$ of $n$ elements to $[0, 1]$. Each function is assumed to be monotone, i.e., $f(S) \le f(T)$ for any $S \subseteq T$; and submodular, i.e., $f(T \cup \{j\}) - f(T) \le f(S \cup \{j\}) - f(S)$ for any $S \subseteq T$ and $j \notin T$. The goal is to select a set $S$ of size $k$ whose worst-case value over $\mathcal{F}$, i.e., $\min_{f \in \mathcal{F}} f(S)$, is at least a factor $\alpha$ of the minimax optimum $\tau = \max_{|S| \le k} \min_{f \in \mathcal{F}} f(S)$.
This setting is a special case of our general robust optimization setting (phrased in terms of rewards rather than losses). The solution space $X$ is equal to the set of subsets of size $k$ among all elements in $U$, and $\mathcal{F}$ is the set of possible objective functions. The Bayesian oracle of Definition 1, instantiated in this setting, asks for the following: given a convex combination $F(S) = \sum_{i} w_i f_i(S)$ of submodular functions, compute a set $S^*$ of size $k$ such that $F(S^*) \ge \alpha \max_{|S| \le k} F(S)$.
Computing the maximum-value set of size $k$ is NP-hard even for a single submodular function. The following very simple greedy algorithm computes a $(1 - 1/e)$-approximate solution [16]: begin with $S = \emptyset$, and at each iteration add to the current solution the element with the largest marginal contribution, $j^* = \arg\max_{j \notin S} F(S \cup \{j\}) - F(S)$. Moreover, this approximation ratio is known to be the best achievable in polynomial time [15]. Since a convex combination of monotone submodular functions is also a monotone submodular function, we immediately get that there exists a $(1 - 1/e)$-approximate Bayesian oracle that runs in polynomial time. The algorithm is formally given in Algorithm 2.
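As a concrete sketch of this oracle, the greedy rule applied to a $w$-weighted combination of coverage functions (a standard example of monotone submodular functions; the instance below is hypothetical) looks like this:

```python
def F(S, w, covs):
    """Weighted coverage: sum_i w[i] * |union of covs[i][e] for e in S|."""
    total = 0.0
    for wi, cov in zip(w, covs):
        covered = set()
        for e in S:
            covered |= cov[e]
        total += wi * len(covered)
    return total

def greedy_oracle(ground, covs, w, k):
    """Greedy (1 - 1/e)-approximate maximization of F subject to |S| <= k."""
    S = []
    for _ in range(k):
        gains = {e: F(S + [e], w, covs) - F(S, w, covs)
                 for e in ground if e not in S}
        S.append(max(gains, key=gains.get))  # largest marginal contribution
    return S
```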
Combining the above with Theorem 1 we get the following corollary.
Corollary 4.
Algorithm 1, with the greedy algorithm (Algorithm 2) as its Bayesian oracle, computes in polynomial time a distribution $P$ over sets of size $k$, defined as a uniform distribution over a set $\{S_1, \dots, S_T\}$, such that
$\min_{f \in \mathcal{F}} \mathbb{E}_{S \sim P}[f(S)] \ge (1 - 1/e)\,\tau - \epsilon(T)$.  (13)
As we show in Appendix C, computing a single set of size $k$ that achieves a constant-factor approximation to $\tau$ is hard. This is true even if the functions $f_i$ are additive. However, by allowing a randomized solution over sets we can achieve a constant-factor approximation to $\tau$ in polynomial time.
Since the functions are monotone, the above result implies a simple way of constructing a single set, of size larger than $k$, that deterministically achieves a constant-factor approximation to $\tau$: simply take the union of the sets in the support of the distribution returned by Algorithm 1. We get the following bicriteria approximation scheme.
Corollary 5.
Suppose that we run the reward version of Algorithm 1 with the greedy Bayesian oracle, returning sets $S_1, \dots, S_T$. Then the set $S^* = \bigcup_{t=1}^{T} S_t$, which is of size at most $kT$, satisfies
$\min_{f \in \mathcal{F}} f(S^*) \ge (1 - 1/e)\,\tau - \epsilon(T)$.  (14)
3 Experiments
3.1 Robust Classification with Neural Networks
A classic application of our robust optimization framework is classification with neural networks on corrupted or perturbed datasets. We have a data set of image–label pairs $(z, y)$ that can be corrupted in $m$ different ways, producing data sets $D_1, \dots, D_m$. The hypothesis space is the set of all neural nets of some fixed architecture, one for each possible assignment of weights; we denote each such hypothesis by $h_\theta$ for $\theta \in \mathbb{R}^d$, with $d$ being the number of parameters (weights) of the neural net. If we let $\mathcal{D}_i$ be the uniform distribution over corrupted data set $D_i$, then we are interested in minimizing the empirical cross-entropy (a.k.a. multinomial logistic) loss in the worst case over these different distributions $\mathcal{D}_1, \dots, \mathcal{D}_m$. The latter is a special case of our robust statistical learning framework from Section 2.2.
Training a neural network is a nonconvex optimization problem, and in general we have no guarantees on its performance. We instead assume that, for any given distribution $\mathcal{D}$ over pairs of images and labels and for any loss function $\ell$, training a neural net with stochastic gradient descent on images drawn from $\mathcal{D}$ can achieve an $\alpha$-approximation to the optimal expected loss, i.e., $\mathbb{E}_{(z, y) \sim \mathcal{D}}[\ell(h(z), y)] \le \alpha \min_{h^*} \mathbb{E}_{(z, y) \sim \mathcal{D}}[\ell(h^*(z), y)]$. Notice that this implies an $\alpha$-approximate Bayesian oracle for the corrupted-dataset robust training problem: for any distribution $w$ over the $m$ corruptions, the Bayesian oracle asks for an $\alpha$-approximation to the minimization problem
$\min_{h} \sum_{i=1}^{m} w_i\, \mathbb{E}_{(z, y) \sim \mathcal{D}_i}[\ell(h(z), y)]$.  (15)
The latter is simply another expected-loss problem, with the distribution over images being the mixture distribution defined by first drawing a corruption index $i$ from $w$ and then drawing a corrupted image from distribution $\mathcal{D}_i$. Hence, our oracle assumption implies that SGD on this mixture is an $\alpha$-approximation. By linearity of expectation, an alternative way of viewing the Bayesian oracle problem is that we are training a neural net on the original distribution of images, but with the loss function being the weighted combination $\sum_{i=1}^{m} w_i\, \ell(h(z_i), y)$, where $z_i$ is the $i$-th corrupted version of image $z$. In our experiments we implemented both of these interpretations of the Bayesian oracle, which we call the Hybrid Method and the Composite Method, respectively, when designing our neural network training scheme (see Figure 4 and Figure 5 in Appendix E). Finally, because we use the cross-entropy loss, which is convex in the prediction of the neural net, we can also apply Theorem 3 to conclude that the ensemble neural net, which averages the predictions of the neural nets created at each iteration of the robust optimization, also achieves good worst-case loss (we refer to this as Ensemble Bottleneck Loss).
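The two interpretations can be sketched as follows; `predict` and the corruption data sets are hypothetical stand-ins for the trained network and the $D_i$:

```python
import math
import random

def sample_hybrid(datasets, w, rng=random):
    """Hybrid view: draw corruption index i ~ w, then a sample from the
    i-th corrupted data set -- i.e., train on the mixture distribution."""
    i = rng.choices(range(len(datasets)), weights=w)[0]
    return rng.choice(datasets[i])

def composite_loss(predict, corrupted_images, label, w):
    """Composite view: w-weighted cross-entropy over the m corrupted
    copies of one image; predict(img, y) is the model's probability of y."""
    return sum(wi * -math.log(predict(img, label))
               for wi, img in zip(w, corrupted_images))
```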
Experiment Setup.
We use the MNIST handwritten digits data set, split into training, validation, and test images, each a 28×28-pixel grayscale image. The pixel intensities (scaled to the range 0 to 1) are used as input to a neural network with a single hidden layer. The output layer uses the softmax function to give a distribution over the digits 0 to 9. The activation function is ReLU, and the network is trained using gradient descent with a fixed learning rate over iterations of mini-batches. In general, the corruptions can be any black-box corruption of the image. In our experiments, we consider four types of corruption ($m = 4$). See Appendix E for details about the corruptions.
Baselines.
We consider three baselines: (i) Individual Corruption: for each corruption type $i$, we construct an oracle $M_i$ that trains a neural network using the training data perturbed by corruption $i$, and returns the trained network at every iteration. This gives $m$ baselines, one for each corruption type; (ii) Even Split: this baseline alternates between the corruption types across iterations: at iteration $t$ it returns the output of baseline oracle $M_i$ with $i \equiv t \pmod{m}$; (iii) Uniform Distribution: this more advanced baseline runs the robust optimization scheme with the Hybrid Method (see Appendix), but without the distribution updates. Instead, the distribution over corruption types is fixed as the discrete uniform distribution over all iterations. This allows us to check whether the multiplicative weight updates in the robust optimization algorithm are providing benefit.
Results.
The Hybrid and Composite Methods produce results far superior to all three baseline types, with differences both substantial in magnitude and statistically significant. The more sophisticated Composite Method outperforms the Hybrid Method. Increasing $T$ improves performance, but with diminishing returns, largely because for sufficiently large $T$ the distribution over corruption types has moved from the initial uniform distribution to some more optimal stable distribution (see Appendix for details). All these effects are consistent across the four corruption sets tested. The Ensemble Bottleneck Loss is empirically much smaller than the Individual Bottleneck Loss. For the best-performing algorithm, the Composite Method, the mean Ensemble Bottleneck Loss (with mean Individual Bottleneck Loss in parentheses) was 0.34 (1.31) for the Background Set, 0.28 (1.30) for the Shrink Set, 0.19 (1.25) for the Pixel Set, and 0.33 (1.25) for the Mixed Set. Thus, combining the classifiers obtained from robust optimization is practical for making predictions on new data.
3.2 Robust Influence Maximization
We apply the results of Section 2.3 to the robust influence maximization problem. Given a directed graph $G = (V, E)$, the goal is to pick a seed set $S$ of $k$ nodes that maximizes an influence function $f_G(S)$, where $f_G(S)$ is the expected number of individuals influenced by the opinion of the members of $S$. We take $f_G(S)$ to be the number of nodes reachable from $S$ (our results extend to other influence models).
In robust influence maximization, the goal is to maximize influence in the worst case (Bottleneck Influence) over $m$ influence functions $f_{G_1}, \dots, f_{G_m}$, corresponding to graphs $G_1, \dots, G_m$, for some seed set of fixed size $k$. This is a special case of robust submodular maximization, after rescaling the functions to $[0, 1]$.
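Under the reachability model used here, each $f_{G_i}$ and the robust objective can be computed directly (a small sketch; the adjacency-dict representation is an assumption of this illustration):

```python
from collections import deque

def influence(graph, seeds):
    """f_G(S): number of nodes reachable from the seed set S in the
    directed graph (adjacency dict).  Monotone and submodular in S."""
    seen = set(seeds)
    queue = deque(seeds)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen)

def bottleneck_influence(graphs, seeds):
    """Worst case over the sampled graphs G_1..G_m: the robust objective."""
    return min(influence(g, seeds) for g in graphs)
```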
Experiment Setup.
Given a base directed graph, we produce $m$ graphs $G_1, \dots, G_m$ by randomly including each edge with some probability $p$. We consider two base graphs, each with two experiments that vary the parameters $m$, $p$, $k$, and $T$: (i) the Wikipedia Vote Graph [12]; (ii) the complete directed graph.
Baselines.
We compared our algorithm (Section 2.3) to three baselines: (i) Uniform over Individual Greedy Solutions: apply greedy maximization (Algorithm 2) to each graph separately, to get solutions $S_1, \dots, S_m$, and return the uniform distribution over these solutions; (ii) Greedy on Uniform Distribution over Graphs: return the output of greedy submodular maximization (Algorithm 2) on the uniform distribution over the influence functions; this can be viewed as maximizing expected influence; (iii) Uniform over Greedy Solutions on Multiple Perturbed Distributions: generate $T$ distributions over the functions by randomly perturbing the uniform distribution, with perturbation magnitudes chosen so that the perturbed distribution at iteration $t$ has the same expected distance from uniform as the distribution returned by robust optimization at iteration $t$.
Results.
For both graph experiments, robust optimization outperforms all baselines on Bottleneck Influence; the difference is statistically significant as well as large in magnitude for all $T$. Moreover, the individual seed sets generated at each iteration of robust optimization themselves achieve empirically good influence (see Appendix for details).
References
 [1] Zeyuan Allen-Zhu and Elad Hazan. Variance reduction for faster non-convex optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 699–707, 2016.
 [2] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(6):121–164, 2012.
 [3] Sabyasachi Chatterjee, John C. Duchi, John D. Lafferty, and Yuancheng Zhu. Local minimax complexity of stochastic convex optimization. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 3423–3431, 2016.
 [4] Wei Chen, Tian Lin, Zihan Tan, Mingfei Zhao, and Xuren Zhou. Robust influence maximization. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 795–804, 2016.
 [6] Elad Hazan, Kfir Y. Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1594–1602, 2015.
 [7] Elad Hazan, Kfir Yehuda Levy, and Shai Shalev-Shwartz. On graduated optimization for stochastic non-convex problems. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1833–1841, 2016.
 [8] Xinran He and David Kempe. Robust influence maximization. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 885–894, 2016.
 [10] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’03, pages 137–146, New York, NY, USA, 2003. ACM.
 [11] Andreas Krause, H. Brendan McMahan, Carlos Guestrin, and Anupam Gupta. Selecting observations against adversarial objectives. In Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007, pages 777–784, 2007.
 [12] Jure Leskovec. Wikipedia vote network. Stanford Network Analysis Project.
 [13] Meghna Lowalekar, Pradeep Varakantham, and Akshat Kumar. Robust influence maximization: (extended abstract). In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, Singapore, May 9-13, 2016, pages 1395–1396, 2016.
 [14] Hongseok Namkoong and John C. Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 2208–2216, 2016.
 [15] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
 [16] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978.
 [17] Shai Shalev-Shwartz and Yonatan Wexler. Minimizing the maximal loss: How and why. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 793–801, 2016.
 [18] Jacob Steinhardt and John C. Duchi. Minimax rates for memory-bounded sparse linear regression. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 1564–1587, 2015.
Appendix A Faster Convergence to Approximate Solution
Theorem 6 (Faster Convergence).
Given access to an $\alpha$-approximate distributional oracle, Algorithm 1 with parameter $\eta = \sqrt{\log(m)/T}$ computes a distribution $P$ over solutions, defined as a uniform distribution over a set $\{x_1, \dots, x_T\}$, such that
$\max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)] \le \alpha \tau + 2\sqrt{\log(m)/T}$.  (16)
In the case of robust reward maximization, the reward version of Algorithm 1 computes a distribution $P$ such that:
$\min_{f \in \mathcal{F}} \mathbb{E}_{x \sim P}[f(x)] \ge \alpha \tau - 2\sqrt{\log(m)/T}$.  (17)
Proof.
We present the case of losses; the result for rewards follows along similar lines. The proof follows the same outline as that of Theorem 1. The main difference is that we use a stronger property of the Exponential Weight Updates (EWU) algorithm. In particular, it is known that the regret of EWU, when run with parameter $\eta$ on a sequence of rewards that lie in $[0, 1]$, is at most [2]:
$\eta T + \frac{\log(m)}{\eta} \le 2\sqrt{T \log(m)}$,  (18)
where the second inequality follows from the choice $\eta = \sqrt{\log(m)/T}$. Thus, by the definition of regret, we can write:
$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{L \sim w_t}[L(x_t)] \ge \max_{L \in \mathcal{L}} \frac{1}{T} \sum_{t=1}^{T} L(x_t) - 2\sqrt{\log(m)/T}$.  (19)
Combining the above with the guarantee of the distributional oracle we have
$\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{L \sim w_t}[L(x_t)] \le \frac{\alpha}{T} \sum_{t=1}^{T} \min_{x \in X} \mathbb{E}_{L \sim w_t}[L(x)]$  (By oracle guarantee for each $t$)
$\le \alpha \min_{x \in X} \max_{L \in \mathcal{L}} L(x) = \alpha \tau$.
Combining with (19), we obtain $\max_{L \in \mathcal{L}} \frac{1}{T} \sum_{t=1}^{T} L(x_t) \le \alpha \tau + 2\sqrt{\log(m)/T}$  (By regret of adversary).
Thus, if we define $P$ to be the uniform distribution over $\{x_1, \dots, x_T\}$, then we have derived:
$\max_{L \in \mathcal{L}} \mathbb{E}_{x \sim P}[L(x)] \le \alpha \tau + 2\sqrt{\log(m)/T}$,  (20)
as required. ∎
Appendix B Robust Optimization with Infinite Loss Sets
We now extend our main results to the case where the uncertainty about the loss function is more general. In particular, we allow sets of possible losses that are not necessarily finite. The loss function depends on a parameter $\theta$ that is unknown and can take any value in a set $\Theta$. The loss of the learner is a function $L(x, \theta)$ of both his action $x \in X$ and this parameter $\theta$, and the form of the function $L$ is known. Hence, the set of possible losses is defined as:
$\mathcal{L} = \{L(\cdot, \theta) : \theta \in \Theta\}$.  (21)
Our goal is to find some $x \in X$ that achieves low loss in the worst case over loss functions in $\mathcal{L}$. For $x \in X$, write $g(x) = \max_{\theta \in \Theta} L(x, \theta)$ for the worst-case loss of $x$. The minimax optimum is
$\tau = \min_{x \in X} \max_{\theta \in \Theta} L(x, \theta)$.  (22)
Our goal in $\alpha$-approximate robust optimization is to find $x$ such that $g(x) \le \alpha \tau$. Given a distribution $P$ over solutions, write $g(P) = \max_{\theta \in \Theta} \mathbb{E}_{x \sim P}[L(x, \theta)]$ for the worst-case expected loss of a solution drawn from $P$. The goal of improper robust optimization is to find a distribution $P$ over solutions such that $g(P) \le \alpha \tau$.
We will make the assumption that $L(x, \theta)$ is concave in $\theta$, Lipschitz with respect to $\theta$, and that the set $\Theta$ is convex. The case of finite losses that we considered in the main text is the special case where $\Theta$ is the simplex on $m$ coordinates and $L(x, w) = \sum_{i=1}^{m} w_i L_i(x)$.
We will also assume that we are given access to an $\alpha$-approximate Bayesian oracle, which finds a solution that approximately minimizes the loss for any given parameter value:
Definition 2 (Approximate Bayesian Oracle).
Given a choice of $\theta \in \Theta$, the $\alpha$-approximate oracle $M$ computes an approximate solution to the known-parameter problem, i.e.:
$L(M(\theta), \theta) \le \alpha \min_{x \in X} L(x, \theta)$.  (23)
B.1 Improper Robust Optimization with Oracles
We first show that, given access to an $\alpha$-approximate distributional oracle, it is possible to efficiently implement improper approximate robust optimization, subject to a vanishing additive loss term. The algorithm is a variant of Algorithm 1, where we replace the Multiplicative Weight Updates algorithm for the choice of the adversary's action with a projected gradient descent algorithm, which works for any convex set $\Theta$. To describe the algorithm we need some notation. First, we denote by $\Pi_{\Theta}(\theta) = \arg\min_{\theta' \in \Theta} \|\theta - \theta'\|_2$ the projection of $\theta$ onto the set $\Theta$. Moreover, $\nabla_\theta L(x, \theta)$ is the gradient of the function $L$ with respect to $\theta$. The adversary's update is:
$\theta_{t+1}' = \theta_t + \eta\, \nabla_\theta L(x_t, \theta_t)$,  (24)
$\theta_{t+1} = \Pi_{\Theta}(\theta_{t+1}')$.  (25)
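One step of the adversary's update in Equations (24)-(25), sketched for the simple case where $\Theta$ is a box (other convex sets require their own projection; this choice is purely illustrative):

```python
def project_box(theta, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d."""
    return [min(hi, max(lo, t)) for t in theta]

def adversary_step(theta, grad, eta, project=project_box):
    """Eqs. (24)-(25): gradient step on the (concave-in-theta) loss,
    then project back onto Theta."""
    return project([t + eta * g for t, g in zip(theta, grad)])
```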
Theorem 7.
Given access to an $\alpha$-approximate distributional oracle, Algorithm 3, with an appropriate step size $\eta$, computes a distribution $P$ over solutions, defined as a uniform distribution over a set $\{x_1, \dots, x_T\}$, such that:
$\max_{\theta \in \Theta} \mathbb{E}_{x \sim P}[L(x, \theta)] \le \alpha \tau + O(1/\sqrt{T})$.  (26)
Proof.
We can interpret Algorithm 3 in the following way. We define a zero-sum game between a learner and an adversary. The learner’s action set is $X$ and the adversary’s action set is $\Theta$. The loss of the learner when he picks $x \in X$ and the adversary picks $\theta \in \Theta$ is defined as $L(x, \theta)$; the corresponding payoff of the adversary is $L(x, \theta)$.
We will run no-regret dynamics on this zero-sum game, where at every iteration $t$, the adversary picks a parameter $\theta_t$ and subsequently the learner picks a solution $x_t$. We use the projected gradient descent algorithm to compute $\theta_t$ at each iteration, as defined in Equations (24) and (25). Subsequently the learner picks the solution $x_t$ that is the output of the $\alpha$-approximate Bayesian oracle on the parameter chosen by the adversary at time step $t$. That is,
$x_t = M(\theta_t)$.  (27)
By the regret guarantees of the projected gradient descent algorithm for the adversary (using that $L$ is concave in $\theta$), we have:
$\frac{1}{T} \sum_{t=1}^{T} L(x_t, \theta_t) \ge \max_{\theta \in \Theta} \frac{1}{T} \sum_{t=1}^{T} L(x_t, \theta) - \epsilon(T)$,  (28)
for $\epsilon(T) = O(1/\sqrt{T})$. Combining the above with the guarantee of the distributional oracle we have
$\frac{1}{T} \sum_{t=1}^{T} L(x_t, \theta_t) \le \frac{\alpha}{T} \sum_{t=1}^{T} \min_{x \in X} L(x, \theta_t)$  (By oracle guarantee for each $t$)
$\le \alpha \min_{x \in X} \max_{\theta \in \Theta} L(x, \theta) = \alpha \tau$.
Combining with (28), we obtain $\max_{\theta \in \Theta} \frac{1}{T} \sum_{t=1}^{T} L(x_t, \theta) \le \alpha \tau + \epsilon(T)$  (By no-regret of adversary).
Thus, if we define $P$ to be the uniform distribution over $\{x_1, \dots, x_T\}$, then we have derived that
$\max_{\theta \in \Theta} \mathbb{E}_{x \sim P}[L(x, \theta)] \le \alpha \tau + \epsilon(T)$,  (29)
as required. ∎
A corollary of Theorem 7 is that if the solution space $X$ is convex and the function $L(x, \theta)$ is also convex in $x$ for every $\theta$, then we can compute a single solution that is approximately minimax optimal.
Corollary 8.
If $X$ is a convex space and the function $L(x, \theta)$ is convex in $x$ for any $\theta \in \Theta$, then the point $\bar{x} = \frac{1}{T} \sum_{t=1}^{T} x_t$, where $x_1, \dots, x_T$ are the output of Algorithm 3, satisfies:
$\max_{\theta \in \Theta} L(\bar{x}, \theta) \le \alpha \tau + \epsilon(T)$.  (30)
Proof.
By Theorem 7, we get that if $P$ is the uniform distribution over $\{x_1, \dots, x_T\}$ then $\max_{\theta \in \Theta} \mathbb{E}_{x \sim P}[L(x, \theta)] \le \alpha \tau + \epsilon(T)$.
Since $X$ is convex, the solution $\bar{x} = \frac{1}{T} \sum_{t=1}^{T} x_t$ is also part of $X$. Moreover, since each $L(\cdot, \theta)$ is convex in $x$, Jensen's inequality gives $L(\bar{x}, \theta) \le \mathbb{E}_{x \sim P}[L(x, \theta)]$. We therefore conclude $\max_{\theta \in \Theta} L(\bar{x}, \theta) \le \alpha \tau + \epsilon(T)$,
as required. ∎
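To illustrate Corollary 8 concretely, here is a toy instance (all names and constants are illustrative, not from the paper): X = [0, 1], two losses L_1(x) = x and L_2(x) = 1 − x, and an exact oracle (α = 1). Every oracle iterate is an endpoint with worst-case loss 1, yet the average of the iterates lands near the minimax optimum 1/2:

```python
# Toy check of Corollary 8: convex X = [0, 1], losses convex (linear) in x.
losses = [lambda x: x, lambda x: 1.0 - x]

p, eta, T, xs = 0.5, 0.1, 200, []            # p = adversary's weight on L1
for _ in range(T):
    # exact Bayesian oracle: minimize p*L1(x) + (1-p)*L2(x) over [0, 1];
    # the weighted loss is linear in x, so an endpoint is optimal
    x = 0.0 if p >= 0.5 else 1.0
    xs.append(x)
    grad = 2.0 * x - 1.0                     # d/dp of p*L1(x) + (1-p)*L2(x)
    p = min(max(p + eta * grad, 0.0), 1.0)   # ascent step + projection onto [0, 1]

x_bar = sum(xs) / len(xs)                    # averaged solution of Corollary 8
worst = max(L(x_bar) for L in losses)        # its worst-case loss (~ 1/2)
```

Averaging is exactly where convexity is used: the uniform distribution over the iterates is replaced by the single point x̄ at no extra cost.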
Our results for improper statistical learning can also be generalized analogously to this more general form of loss uncertainty.
Appendix C NP-Hardness of Proper Robust Optimization
The convexity assumption of Corollary 2 is necessary. In general, achieving any non-trivial ex-post robust solution is computationally infeasible, even when there are only polynomially many loss functions and they are all linear (in particular, concave).
Theorem 9.
There exists a constant α > 1 for which the following problem is NP-hard. Given a collection of linear loss functions f_1, …, f_n over a ground set I of items, and an optimal distributional oracle over a feasibility set F ⊆ 2^I, find a solution S ∈ F such that
max_i f_i(S) ≤ α · min_{S′∈F} max_i f_i(S′).
Proof.
We reduce from the set packing problem, in which there is a collection of sets T_1, …, T_m over a ground set of elements U = {1, …, n}, and the goal is to find a collection of k sets that are all pairwise disjoint. This problem is known to be NP-hard, even if we assume that every set T_j has at most three elements.
Given an instance of the set packing problem, we define an instance of robust loss minimization as follows. There is a collection of n linear functions f_1, …, f_n, one per element i ∈ U, and I is a set of m + n items, say I = {s_1, …, s_m} ∪ {u_1, …, u_n}, with feasibility set F consisting of all subsets of I of size exactly k. The linear functions are given by f_i(s_j) = 1 if i ∈ T_j and f_i(s_j) = 0 otherwise, for all i and j; and f_i(u_l) = 2 if l = i, and f_i(u_l) = 0 if l ≠ i, for all i and l. The loss of a solution is the sum of the losses of its items.
We claim that in this setting, an optimal Bayesian oracle can be implemented in polynomial time. Indeed, let w be any distribution over {f_1, …, f_n}. Since the losses are linear, the expected loss of a solution decomposes additively over its items: item s_j contributes Σ_{i∈T_j} w_i and item u_l contributes 2w_l. An optimal oracle can therefore simply return the k items with the smallest expected contributions, which can be found in polynomial time by sorting. Thus, since the optimal Bayesian oracle is poly-time implementable, it suffices to show NP-hardness without access to such an oracle.
To establish hardness, note that if a set packing T_{j_1}, …, T_{j_k} exists, then the solution to the robust optimization problem given by S = {s_{j_1}, …, s_{j_k}} satisfies max_i f_i(S) ≤ 1, since every element is covered at most once. On the other hand, if a set packing does not exist, then any solution S for the robust optimization problem either contains an item u_l — in which case f_l(S) ≥ 2 — or must contain at least two items s_j, s_{j′} such that T_j ∩ T_{j′} ≠ ∅, which implies there exists some i ∈ T_j ∩ T_{j′} such that f_i(S) ≥ 2. We can therefore reduce the set packing problem to the problem of determining whether the minimax optimum is at least 2 or at most 1. We conclude that it is NP-hard to find any S ∈ F such that max_i f_i(S) ≤ α · min_{S′∈F} max_i f_i(S′) for any α < 2. ∎
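The gadget in this reduction can be checked mechanically on small instances. The sketch below (identifier names are ours, not the paper's) builds the linear losses from a packing instance and brute-forces the minimax value: a pairwise-disjoint family of k sets achieves worst-case loss 1, while instances with no size-k packing force loss 2:

```python
from itertools import combinations

def build_loss(sets, n):
    """Losses of the reduction: one f_i per universe element i; the items
    are ('s', j) for set T_j and ('u', l) for universe element l."""
    def f(i, solution):
        loss = 0
        for kind, idx in solution:
            if kind == 's' and i in sets[idx]:
                loss += 1       # set-item s_j costs 1 on every f_i with i in T_j
            elif kind == 'u' and idx == i:
                loss += 2       # element-item u_i costs 2 on f_i alone
        return loss
    return f

def minimax(sets, n, k):
    """Brute-force min over size-k solutions of the worst-case loss."""
    f = build_loss(sets, n)
    items = [('s', j) for j in range(len(sets))] + [('u', l) for l in range(n)]
    return min(max(f(i, S) for i in range(n)) for S in combinations(items, k))
```

Of course the brute force defeats the purpose computationally; it only verifies that the gap between the "packing" and "no packing" cases is exactly the factor-2 gap the proof uses.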
Similarly, for robust submodular maximization, in order to achieve a non-trivial approximation guarantee it is necessary to either convexify the outcome space (e.g., by returning distributions over solutions) or extend the solution space to allow solutions that are larger by a factor of Ω(log n), where n is the number of objective functions. This is true even when there are only polynomially many functions to optimize over, and even when they are all linear.
Theorem 10.
There exists a constant c > 0 for which the following problem is NP-hard. Given any α > 0 and any β ≤ c ln n, a collection of n linear functions f_1, …, f_n over a ground set I of m items, and an optimal distributional oracle over subsets of I of size at most βk, find a subset S ⊆ I with |S| ≤ βk such that
min_i f_i(S) ≥ α · max_{S′ : |S′| ≤ k} min_i f_i(S′).
Proof.
We reduce from the set cover problem, in which there is a collection of sets T_1, …, T_m over a ground set of elements U = {1, …, n}, whose union is U, and the goal is to find a collection of at most k sets whose union is U. There exists a constant c > 0 such that it is NP-hard to distinguish between the case where such a collection exists, and the case where no collection of size at most c k ln n exists.
Given an instance of the set cover problem, we define an instance of the robust linear maximization problem as follows. There is a collection of n linear functions f_1, …, f_n, one per element i ∈ U, and I is a set of m items, say I = {a_1, …, a_m}, one per set. For each i and j, set f_i(a_j) = 1 if i ∈ T_j in our instance of the set cover problem, and f_i(a_j) = 0 otherwise. The value of a solution is the sum of the values of its items, so f_i(S) counts the chosen sets that cover element i.
We claim that in this setting, an optimal Bayesian oracle can be implemented in polynomial time. Indeed, let w be any distribution over {f_1, …, f_n}. Since the objectives are linear, the expected value of a solution decomposes additively over its items: item a_j contributes Σ_{i∈T_j} w_i. An optimal oracle can therefore simply return the allowed number of items with the largest expected contributions, which can be found in polynomial time by sorting. Thus, since the optimal Bayesian oracle is poly-time implementable, it suffices to show NP-hardness without access to such an oracle.
To establish hardness, note first that if a solution T_{j_1}, …, T_{j_k} to the set cover problem exists, then the solution to the robust optimization problem given by S = {a_{j_1}, …, a_{j_k}} satisfies f_i(S) ≥ 1 for all i, so the robust optimum over size-k solutions is at least 1. On the other hand, if no set cover of size c k ln n exists, then for any solution S with |S| ≤ βk ≤ c k ln n there must exist some element i that is covered by no chosen set, i.e. i ∉ T_j for every a_j ∈ S. This implies that f_i(S) = 0, and hence min_i f_i(S) = 0. We have therefore reduced the set cover problem to distinguishing cases where the robust optimum is at least 1 from cases where every solution of size at most βk has minimum value 0. We conclude that it is NP-hard to find any S with |S| ≤ βk for which min_i f_i(S) ≥ α · max_{S′ : |S′| ≤ k} min_i f_i(S′), for any positive α. ∎
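As with the packing gadget, this reduction is easy to sanity-check by brute force on tiny instances (the function name is ours): with a size-k cover the robust value is at least 1, and if some element is never covered the minimum value is 0 for every solution, no matter how large:

```python
from itertools import combinations

def robust_value(sets, n, size):
    """Brute-force max over solutions of at most `size` set-items of the
    minimum objective value min_i f_i(S), where f_i(S) counts the chosen
    sets that cover universe element i."""
    f = lambda i, S: sum(1 for j in S if i in sets[j])
    return max(min(f(i, S) for i in range(n))
               for r in range(1, size + 1)
               for S in combinations(range(len(sets)), r))
```

This makes the all-or-nothing structure of the proof visible: the robust value jumps from 0 to 1 exactly when the chosen items form a cover, so no multiplicative approximation helps.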
Appendix D Strengthening the Benchmark
We now observe that our construction actually competes with a stronger benchmark than τ = max_{x∈X} min_{i∈[m]} f_i(x). In particular, one that allows for distributions over solutions:
(31) τ* = max_{σ∈Δ(X)} min_{i∈[m]} E_{x∼σ}[f_i(x)]
Hence, our assumption is that there exists a distribution σ* over solutions such that, for any realization of the objective function, the expected value of the objective under this distribution over solutions is at least τ*.
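A minimal example of why this benchmark is strictly stronger (the solutions and payoffs are illustrative): with two solutions and two objectives that "disagree", every deterministic solution has minimum reward 0, while the uniform mixture guarantees 1/2 against both objectives:

```python
# reward vectors (f1(x), f2(x)) for two candidate solutions
rewards = {"x1": (1.0, 0.0), "x2": (0.0, 1.0)}

# deterministic benchmark: best single solution in the worst case
det = max(min(v) for v in rewards.values())

# distributional benchmark tau*: uniform mixture over x1 and x2
mix = [sum(v[i] for v in rewards.values()) / len(rewards) for i in range(2)]
tau_star = min(mix)
```

So competing with τ* rather than τ is a genuinely stronger guarantee whenever no single solution does well against all objectives simultaneously.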
Now we ask: given an oracle for the distributional problem, can we find a solution for the robust problem that achieves minimum reward at least α τ* − ε(T)? We show that this is possible:
Theorem 11.
Given access to an α-approximate Bayesian oracle, we can compute a distribution σ over solutions, defined as a uniform distribution over a set {x_1, …, x_T}, such that:
(32) min_{i∈[m]} E_{x∼σ}[f_i(x)] ≥ α τ* − ε(T)
Proof.
Observe that a distributional oracle for the setting with solution space X and functions f_1, …, f_m is also a distributional oracle for the setting with solution space Δ(X) and functions F_1, …, F_m, where for any σ ∈ Δ(X): F_i(σ) = E_{x∼σ}[f_i(x)]. (Indeed, given a distribution w over the functions, a point mass on the oracle's solution for w is an equally good distribution over solutions.) Moreover, observe that τ* is exactly equal to the benchmark τ for the setting with solution space Δ(X) and function space {F_1, …, F_m}. Thus applying Theorem 1 to that setting, we get an algorithm which computes a uniform distribution over distributions σ_1, …, σ_T ∈ Δ(X) that satisfies min_{i∈[m]} (1/T) Σ_{t=1}^T F_i(σ_t) ≥ α τ* − ε(T). Finally, this two-level randomization collapses to the single distribution over solutions σ = (1/T) Σ_{t=1}^T σ_t, which attains the same expected value for every f_i, yielding the claimed guarantee. ∎
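The lifting argument relies on the fact that a distribution over distributions of solutions collapses to a single distribution over solutions with the same expected objective values. A quick numerical check with exact rational arithmetic (the particular numbers are arbitrary):

```python
from fractions import Fraction as F

sigma1 = {"a": F(1, 2), "b": F(1, 2)}           # two distributions over solutions
sigma2 = {"a": F(1, 4), "c": F(3, 4)}
Sigma = [(F(1, 3), sigma1), (F(2, 3), sigma2)]  # outer distribution over them

# collapse: flat(x) = sum over sigma of Sigma(sigma) * sigma(x)
flat = {}
for p, sigma in Sigma:
    for x, q in sigma.items():
        flat[x] = flat.get(x, F(0)) + p * q

f = {"a": 3, "b": 1, "c": 2}                    # an arbitrary objective
lhs = sum(p * sum(q * f[x] for x, q in s.items()) for p, s in Sigma)
rhs = sum(q * f[x] for x, q in flat.items())
```

Since `lhs == rhs` holds for every objective f by linearity of expectation, the guarantee proved for the lifted setting transfers verbatim to a single distribution over solutions.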