A Distributionally Robust Optimization Approach to the NASA Langley Uncertainty Quantification Challenge

06/28/2020 ∙ by Yuanlu Bai et al. ∙ University of Michigan and Columbia University

We study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge problem, based on an integration of robust optimization, more specifically a recent line of research known as distributionally robust optimization, and importance sampling in Monte Carlo simulation. The main computational machinery in this integrated methodology boils down to solving sampled linear programs. We illustrate both our numerical performance and the theoretical statistical guarantees obtained via connections to nonparametric hypothesis testing.


1 Overview of Our Methodology (Problem A)

We first give a high-level overview of our methodology for extracting a region that contains the true epistemic variables. For convenience, we call this region an "eligibility set" of the epistemic variables. For each value of the epistemic variables inside this set, we also have a set (in the space of probability distributions) that contains "eligible" distributions for the aleatory random variables. For the sake of computational tractability (as we will see shortly), the eligibility set of the epistemic variables is represented by a set of sampled points that approximate its shape, whereas the eligibility set of aleatory distributions is represented by probability weights on sampled points of the aleatory space. The eligibility set and the corresponding eligibility set of distributions are obtained by solving an array of linear programs (LPs) constructed from the properly sampled points, and then deciding eligibility by checking the LP optimal values against a threshold, which resembles the "p-value" approach in hypothesis testing. This methodology involves a dimension-collapsing transformation, applied to the raw data, which ultimately allows using the Kolmogorov-Smirnov (KS) statistic to endow rigorous statistical guarantees. Algorithm 1 is a procedural description of our approach to construct the eligibility set, and it also gives as a side product an eligibility set of the aleatory distributions for each eligible epistemic value, represented by weights in the set (11). In the following, we explain the elements and terminologies in this algorithm in detail.

Input: Output data. A uniformly sampled set of points over the epistemic space. A uniformly sampled set of points over the aleatory space. A summary function. A target confidence level.

Procedure:

1. Simulate outputs from the baseline distribution: Evaluate the simulation model at every sampled epistemic point paired with every sampled aleatory point.
2. Summarize the outputs: Evaluate the summary function on each observed data history and on each simulated output.
3. Compute the degree of eligibility: For each sampled epistemic point, solve the optimization problem in Eq. (10) to obtain its optimal value.
4. Construct the eligibility set: Output the set of sampled epistemic points whose optimal values fall below the threshold. Smooth the set if needed.
Algorithm 1 Constructing eligibility set
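
To make the procedure concrete, the following is a minimal Python sketch of Algorithm 1. The helper names simulate(e, a) (the black-box model), summarize(y) (the Fourier-based summary function of Section 3) and degree_of_eligibility (the LP of Eq. (10), sketched in Section 2) are our own placeholders, not functions from the paper; the sketch only shows the control flow.

import numpy as np

def construct_eligibility_set(data, e_samples, a_samples, simulate, summarize,
                              degree_of_eligibility, eta):
    # data: observed output histories; e_samples / a_samples: uniform samples of
    # the epistemic and aleatory spaces; eta: Bonferroni-adjusted KS threshold.
    data_summaries = np.array([summarize(y) for y in data])
    eligible = []
    for e in e_samples:
        # Steps 1-2: simulate under the baseline aleatory distribution and
        # summarize each simulated output.
        sim_summaries = np.array([summarize(simulate(e, a)) for a in a_samples])
        # Step 3: degree of eligibility s*(e) from the LP of Eq. (10).
        s_star, _ = degree_of_eligibility(sim_summaries, data_summaries)
        # Step 4: keep e if it clears the threshold; smooth the set afterwards if needed.
        if s_star <= eta:
            eligible.append(e)
    return np.array(eligible)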

2 A DRO Perspective

Our starting idea is to approximate the set

(1)

where the first distribution is that of the simulation model outputs at a fixed epistemic value (with the aleatory input random), and the second denotes the empirical distribution of the data, more concretely the distribution that places equal probability mass on each observed output (an average of Dirac point masses at the data points). The discrepancy measures the distance between two probability distributions, and the threshold is a suitable constant. Intuitively, the set in Eq. (1) collects the epistemic values for which there exists an output distribution that is close enough to the empirical distribution of the data. If for a given epistemic value there does not exist any possible output distribution close to the empirical distribution, then that value is likely not the truth. The following gives a theoretical justification for using Eq. (1): Suppose that the true distribution of the output lies within the threshold of the empirical distribution with a prescribed confidence level, i.e., we have

(2)

where the probability is taken with respect to the data. Then the set in Eq. (1) contains the true epistemic value with probability at least the same confidence level. A similar deduction holds if Eq. (2) holds only asymptotically (as the data size grows), in which case the same asymptotic modification holds for the conclusion. The proof of Theorem 2 is straightforward: if the true output distribution is within the threshold of the empirical distribution, then at the true epistemic value there exists an eligible output distribution (namely the true one), so the true epistemic value belongs to the set in Eq. (1), and the assumed confidence statement transfers directly. The same derivation holds for the asymptotic version.
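
To make the deduction explicit, write (in our own notation, chosen only for this display) $P_0$ for the true output distribution at the true epistemic value $e^*$, $\hat{P}_n$ for the empirical distribution of the data, $d$ and $\eta$ for the discrepancy and threshold, and $\bar{E}$ for the set in Eq. (1). Then

\[
\big\{\, d(P_0, \hat{P}_n) \le \eta \,\big\} \;\subseteq\; \big\{\, \exists\, P :\ d(P, \hat{P}_n) \le \eta \ \text{at } e = e^* \,\big\} \;=\; \big\{\, e^* \in \bar{E} \,\big\},
\]
\[
\text{and hence} \quad \mathbb{P}\big(e^* \in \bar{E}\big) \;\ge\; \mathbb{P}\big(d(P_0, \hat{P}_n) \le \eta\big) \;\ge\; 1 - \alpha .
\]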

In Eq. (1), the set of distributions is analogous to the so-called uncertainty set or ambiguity set in the RO literature (e.g., Bertsimas et al. (2011); Ben-Tal and Nemirovski (2002)), which is a set postulated to contain the true values of uncertain parameters in a model. RO generally advocates decision-making under uncertainty that hedges against the worst-case scenario, where the worst case is taken over the uncertainty set (and thus often leads to a minimax optimization problem). DRO, in particular, focuses on problems where the uncertainty is on the probability distribution of an underlying random variable (e.g., Wiesemann et al. (2014); Delage and Ye (2010)). This is the perspective that we take here, where the aleatory variable has an unknown distribution, in addition to the uncertainty on the epistemic variables. Moreover, we also take a generalized view of RO or DRO, in that we attempt to construct an eligibility set of epistemic values instead of finding a robust decision via a minimax optimization.

Theorem 2 focuses on the situation where the uncertainty set is constructed and calibrated from data, which is known as data-driven RO or DRO (Bertsimas et al. (2018a); Hong et al. (2017)). If such an uncertainty set has the property of being a confidence region for the uncertain parameters or distributions, then by solving RO or DRO, the confidence guarantee can be translated to the resulting decision, or the eligibility set in our case. Here we have taken a nonparametric and frequentist approach, as opposed to other potential Bayesian methods.

In implementation we choose the confidence level so that the eligibility set has the interpretation of approximating a confidence set for the true epistemic value. In the above developments, the ball around the empirical distribution can in fact be replaced with a more general set calibrated from the data. Nonetheless, the distance-based set (or "ball") surrounding the empirical distribution is intuitive to understand, and our specific choice of the set below falls into such a representation.

To use Eq. (1), there are two immediate questions:

  1. What discrepancy should and can we use, and how do we calibrate the threshold?

  2. How do we determine whether, for a given epistemic value, there exists an output distribution satisfying the discrepancy constraint?

For the first question, we first point out that in theory many choices of discrepancy could be used (basically, any choice that satisfies the confidence property in Theorem 2). However, a poor choice would lead to a more conservative result, i.e., a larger eligibility set, than others. A natural choice should capture the discrepancy between distributions efficiently. Moreover, the choice should also account for the difficulty of calibrating the threshold so that the assumption in Theorem 2 is satisfied, as well as for the computational tractability of solving the eligibility determination problem in Eq. (1).

Based on the above considerations, we construct and calibrate the discrepancy as follows. First, we "summarize" each output time history into a lower-dimensional representation through a summary function applied to the raw data; we call the resulting values the "summaries" of the output. The summary function attempts to capture important characteristics of the raw data (we will see later that we use the positions and values of the peaks extracted from Fourier analysis). Also, the low dimensionality of the summaries is important for calibrating the threshold well.

Next, we define

(3)

where the first quantity, defined through the indicator function, is the empirical distribution function of the data projected onto a given summary dimension, and the second is the distribution function of the same summary of the simulation model output. We then choose the threshold as a suitable quantile of the Kolmogorov-Smirnov (KS) statistic; namely, the threshold multiplied by the square root of the data size equals the corresponding quantile of the supremum of the absolute value of a standard Brownian bridge (the asymptotic distribution of the scaled KS statistic).

To understand Eq. (3), note that an output distribution satisfies the discrepancy constraint if and only if it satisfies

(4)

Here, each term is the KS statistic for a goodness-of-fit test against the candidate summary distribution, using the data projected onto the corresponding summary. Since we have as many tests as summaries, we use a Bonferroni correction and deduce that

where the guarantee is stated with respect to the true distribution function of each summary. Thus, the (asymptotic version of the) assumption in Theorem 2 holds. Note that here the quality of the summaries does not affect the statistical correctness of our method (in terms of overfitting), but it crucially affects the resulting conservativeness (in the sense of getting a larger eligibility set). Moreover, in choosing the number of summaries, there is a tradeoff between the conservativeness coming from representativeness and that coming from simultaneous estimation. On one end, using more summaries means imposing more knowledge on the output distribution, which translates into a smaller feasible set of distributions and ultimately a smaller eligibility set. This relation, however, holds only if there is no statistical noise coming from the data. With a finite data size, more summaries also mean that constructing the feasible set requires more simultaneous estimations in calibrating its size, which is manifested in the Bonferroni correction whose adjustment grows with each additional summary. In our implementation (see Section 3), we find that using 12 summaries balances well this representativeness versus simultaneous estimation error tradeoff.
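
As an illustration of this calibration step, the following sketch computes a Bonferroni-adjusted asymptotic KS threshold using SciPy; the function name and the example sizes are ours, and the exact adjustment used in the paper may differ in detail.

import numpy as np
from scipy.stats import kstwobign

def ks_threshold(n_data, num_summaries, alpha=0.05):
    # kstwobign is the limiting distribution of sqrt(n) * sup_x |F_n(x) - F(x)|,
    # i.e., the supremum of the absolute value of a standard Brownian bridge.
    # Dividing its (1 - alpha/num_summaries)-quantile by sqrt(n) gives a
    # Bonferroni-adjusted threshold for the simultaneous KS constraints in Eq. (3).
    q = kstwobign.ppf(1.0 - alpha / num_summaries)
    return q / np.sqrt(n_data)

eta = ks_threshold(n_data=100, num_summaries=12, alpha=0.05)  # illustrative sizes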

Now we address the second question: how can we decide, for a given epistemic value, whether an output distribution exists that satisfies the discrepancy constraint? We first rephrase the representation with a change of measure. Consider a "baseline" probability distribution on the aleatory space that is chosen by us in advance; a reasonable choice, for instance, is the uniform distribution over the support of the aleatory variable. Then we can write the candidate summary distribution functions as

(5)

where the weight function is the Radon-Nikodym derivative of the aleatory distribution with respect to the baseline, and we have used the change-of-measure representation of the expectation. Here we have assumed that the baseline is suitably chosen such that absolute continuity with respect to it holds. Eq. (5) turns the determination of the existence of an eligible output distribution into the existence of an eligible Radon-Nikodym derivative.

The next step is to utilize Monte Carlo simulation to approximate Eq. (5). More specifically, given an epistemic value, we run simulation replications under the baseline distribution to generate sampled outputs. Then Eq. (5) can be approximated by

(6)

where the weights represent the (unknown) sampled likelihood ratios from the viewpoint of importance sampling (Blanchet and Lam (2012); Asmussen and Glynn (2007), Chapter 5). Our task is to find a set of weights such that Eq. (6) holds. These weights should approximately satisfy the properties of a Radon-Nikodym derivative, namely positivity and integration to one. Thus, we seek weights such that

(7)
(8)

where Eq. (8) enforces the weights to lie in the probability simplex. If the simulation size is much larger than the data size, then the existence of weights satisfying Eq. (7) and Eq. (8) indicates that the considered epistemic value is eligible. To summarize, we have the following: suppose the true aleatory distribution is absolutely continuous with respect to the baseline with an essentially bounded likelihood ratio, and suppose that, for each sampled epistemic value, we generate simulation replications by drawing aleatory samples i.i.d. from the baseline. Then the set of sampled epistemic values admitting feasible weights will satisfy the desired confidence guarantee.

Note that in Theorem 2 the weights represent the unknown sampled likelihood ratios such that, together with the outputs generated from the baseline, the weighted empirical distribution approximates the unknown true summary distribution functions. To use the above and elicit the guarantee in Theorem 2, we still need some steps to arrive at a feasible numerical implementation. First, we need to discretize or sufficiently sample the epistemic space, since checking the existence of eligible weights for every epistemic value is computationally infeasible. In our implementation we draw epistemic points uniformly over the epistemic space and then piece together the geometry of the eligibility set from the eligible points. Second, the current representation of the KS constraint Eq. (7) involves entire distribution functions. We can write Eq. (7) as a finite number of linear constraints, given by

(9)

where the constraints are indexed by the summaries and the data points (evaluated at the summary values of the data points), and the bounds involve the right and left limits of the corresponding empirical distribution functions at those values.

Thus, putting everything together, we solve, for each sampled epistemic point, the feasibility problem: find weights such that Eq. (9) and Eq. (8) hold. If feasible weights exist, then that point is eligible. The set of eligible sampled points is an approximation of the eligibility set; note that it is a "sampled" subset. In general, without running the simulation at the other points of the epistemic space, there is no guarantee whether those other points are eligible or not. However, if the output distribution is continuous in the epistemic parameters in some suitable sense, then it is reasonable to believe that the neighborhood of an eligible point is also eligible (and vice versa). In this case, we can "smooth" the discrete set of eligible points if needed (e.g., by clustering them and taking the convex hull of each cluster, as sketched below). Note that the feasibility problem above is linear in the decision variables.
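
The smoothing step just mentioned can be carried out in many ways; the following is one possible sketch (our choice, not prescribed by the paper) that clusters the eligible sampled points and takes the convex hull of each cluster.

import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import KMeans

def smooth_eligible_points(eligible_e, n_clusters=2):
    # eligible_e: (N, d) array of eligible sampled epistemic points.
    # The number of clusters is a modeling choice.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(eligible_e)
    hulls = []
    for c in range(n_clusters):
        pts = eligible_e[labels == c]
        if len(pts) > eligible_e.shape[1]:  # need at least d + 1 points for a hull
            hulls.append(ConvexHull(pts))
    return hulls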

Lastly, we offer an equivalent approach to the above procedure that allows further flexibility in choosing the threshold, which currently is set as the Bonferroni-adjusted KS critical value. This equivalent approach leaves the choice of threshold open and can determine the set of eligible points as a function of the threshold, thus giving some room to improve conservativeness should the resulting approximate eligibility set turn out to be too loose according to other expert opinion. Here, we solve, for each sampled epistemic point, the optimization problem

(10)

where the decision variables are the weights and an auxiliary slack variable. If the optimal value is no larger than the threshold, then the point is eligible (this can be seen by checking the equivalence to the feasibility problem via the monotonicity of the feasible region for the weights in Eq. (10) as the slack increases). The rest then follows as above, with the eligible points approximating the eligibility set. Like before, Eq. (10) is an LP. Moreover, the optimal value captures in a sense the "degree of eligibility" of an epistemic point, and allows convenient visualization by plotting it against the point to assess the geometry of the eligibility set. For these reasons we prefer Eq. (10) over the feasibility problem above. This gives the full procedure in Algorithm 1.
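
The following is a minimal sketch of this LP using scipy.optimize.linprog. The constraint bookkeeping assumes the form of Eq. (9) described above (weighted summary CDF sandwiched between the left and right limits of the empirical CDF, up to the slack), ignores ties among data summaries for simplicity, and uses our own function and variable names.

import numpy as np
from scipy.optimize import linprog

def degree_of_eligibility(sim_summaries, data_summaries):
    # sim_summaries : (m, K) summaries of the m simulated outputs at a fixed
    #                 epistemic point; data_summaries: (n, K) summaries of the data.
    # Returns the optimal value s* of Eq. (10) and the optimizing weights w.
    m, K = sim_summaries.shape
    n = data_summaries.shape[0]
    A_ub, b_ub = [], []
    for k in range(K):
        xs = np.sort(data_summaries[:, k])
        F_right = np.arange(1, n + 1) / n   # empirical CDF right limits
        F_left = np.arange(0, n) / n        # empirical CDF left limits
        for j in range(n):
            ind = (sim_summaries[:, k] <= xs[j]).astype(float)
            A_ub.append(np.concatenate([ind, [-1.0]]))    # sum_i w_i*ind_i - s <= F_right
            b_ub.append(F_right[j])
            A_ub.append(np.concatenate([-ind, [-1.0]]))   # -sum_i w_i*ind_i - s <= -F_left
            b_ub.append(-F_left[j])
    A_eq = [np.concatenate([np.ones(m), [0.0]])]          # weights sum to one
    b_eq = [1.0]
    c = np.concatenate([np.zeros(m), [1.0]])              # minimize the slack s
    bounds = [(0, None)] * m + [(0, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq),
                  b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun, res.x[:m]

A sampled epistemic point is then declared eligible when the returned optimal value does not exceed the calibrated threshold.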

Finally, we also present how to find eligible aleatory distributions for an eligible epistemic point. The set of eligible distributions is approximated by the weights that satisfy Eq. (9) and Eq. (8), namely

(11)

where each weight is the probability placed on the corresponding sampled aleatory point. From this, one could also obtain approximate bounds for quantities related to the aleatory distribution. For instance, to get approximate bounds for the mean of the aleatory variable, we can maximize and minimize the weighted average of the sampled aleatory values subject to constraint (11), as sketched below.
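
A sketch of these mean bounds, reusing the constraint arrays built in the LP sketch above with the slack variable now capped at the calibrated threshold eta; the helper is ours and assumes one coordinate of the sampled aleatory points is passed in as a_samples.

import numpy as np
from scipy.optimize import linprog

def mean_bounds(a_samples, A_ub, b_ub, A_eq, b_eq, eta):
    # a_samples: (m,) values of one aleatory coordinate drawn from the baseline,
    # so its mean under the eligible distributions is approximated by sum_i w_i a_i.
    m = len(a_samples)
    bounds = [(0, None)] * m + [(0, eta)]        # weights >= 0, slack capped at eta
    c = np.concatenate([np.asarray(a_samples, dtype=float), [0.0]])  # acts on weights only
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=bounds, method="highs")
    return lo.fun, -hi.fun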

To close this section, we discuss some literature related to our methodology that has not yet been mentioned. In our development, we have constructed an uncertainty set for the unknown distribution via a confidence region associated with the KS goodness-of-fit test. This uncertainty set has been proposed in Bertsimas et al. (2018b), and other distance-based uncertainty sets, including phi-divergence (Ben-Tal et al. (2013)) and Wasserstein distance (Esfahani and Kuhn (2018)), have also been used. We use a simultaneous group of KS statistics with Bonferroni correction, motivated by the tractability of the resulting integration with the importance weighting. The closest work to our framework is the stochastic simulation inverse calibration problem studied in Goeva et al. (2019), but they consider a single-dimensional output and parameter to calibrate the input distributions, in contrast to our "summary" approach via Fourier analysis and the multi-dimensional settings we face. Finally, we point out that the use of simulation and importance sampling in robust optimization has also been studied in risk quantification in operations research and mathematical finance (e.g., Glasserman and Xu (2014); Ghosh and Lam (2019); Lam (2016)).

In the remainder of this paper, we will illustrate the use of our methodology and report briefly our numerical results for the UQ Challenge.

3 Summarizing Discrete-Time Histories using Fourier Transform

By observing plots of the output time histories, we judge that these time series are highly seasonal. Naturally, we choose the Fourier transform to summarize them, writing each history as a superposition of sinusoids via the discrete Fourier transform. First we apply the Fourier transform to each observed output and compute the transform coefficients. Fig. 1 shows the real and imaginary parts of these coefficients against the corresponding frequencies.

Figure 1: The real and imaginary parts of the Fourier coefficients against the corresponding frequencies ((a) real part, (b) imaginary part)

For the real part, we see that there is a large positive peak, a large negative peak, a small positive peak and a small negative peak. After testing, we confirm that for every output, the large peaks lie in the first 14 terms (from 0 Hz to 1.59 Hz), while the small peaks lie between the 15th term and the 50th term (from 1.71 Hz to 5.98 Hz). For the imaginary part, we see that there is a large negative peak and a small positive peak. The large peak is also located in the first 14 terms and the small peak between the 15th and the 50th.

Therefore, we use the following method to summarize each output (i.e., to construct the summary function): first, we apply the Fourier transform to compute the coefficients and the corresponding frequencies; second, we compute the real and imaginary parts of the coefficients; third, for the real part, we find the maximum and minimum values over the low-frequency band (the first 14 terms) and over the higher-frequency band (the 15th to 50th terms), as well as their corresponding frequencies; fourth, for the imaginary part, we find the minimum value over the low-frequency band and the maximum value over the higher-frequency band, as well as their corresponding frequencies. We then use these 12 parameters as the summaries of each output.
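
A sketch of this 12-dimensional summary function using NumPy's real FFT is given below. The band edges (roughly 0-1.6 Hz and 1.7-6 Hz) mirror the peak locations described above, and the sampling interval dt is left as an input; treat the details as our assumptions rather than the exact implementation.

import numpy as np

def summarize(y, dt):
    # y: one output time history; dt: sampling interval in seconds.
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=dt)
    band1 = freqs <= 1.6                       # band containing the large peaks
    band2 = (freqs > 1.6) & (freqs <= 6.0)     # band containing the small peaks

    feats = []
    for band in (band1, band2):                # real part: max/min value + frequency per band
        re, f = np.real(Y[band]), freqs[band]
        feats += [re.max(), f[re.argmax()], re.min(), f[re.argmin()]]
    im1, f1 = np.imag(Y[band1]), freqs[band1]  # imaginary part: min in band 1, max in band 2
    im2, f2 = np.imag(Y[band2]), freqs[band2]
    feats += [im1.min(), f1[im1.argmin()], im2.max(), f2[im2.argmax()]]
    return np.array(feats)                     # the 12 summaries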

To illustrate how well these summaries fit the data, Fig. 2 shows the comparison for a representative output time history; the fit quality for the other time series is similar. Though the curves may not be extremely close to each other, the fitted curves do resemble the original ones. Note that it is entirely possible to improve the fit by keeping more frequencies, even if they are not as significant as the main peaks. On the other hand, as discussed in Section 2, a larger number of summaries represents more knowledge of the output distribution (better fitting) but also leads to more simultaneous estimation error through the Bonferroni correction needed in calibrating the distributional set. To balance the conservativeness coming from representativeness against that coming from simultaneous estimation, we use the 12-parameter summaries described above.


Figure 2: Fitting with the 12 parameters

4 Uncertainty Reduction (Problem B)

4.1 Ranking Epistemic Parameters (B.1 and B.2)

Now we implement Algorithm 1 with the summary function defined in the previous section, whose dimension is 12. We choose the significance level to be 0.05. Thus, following the algorithm, for each sampled epistemic point we compute its degree of eligibility and then compare it with the threshold.

In Fig. 3, we plot the degrees of eligibility against each dimension of the epistemic parameters. The red horizontal lines in the graphs correspond to the threshold, so the dots below the red lines constitute the eligible points. We rank the epistemic parameters according to these graphs, namely we rank higher the parameter whose range can potentially be reduced the most. Note that this ranking scheme could be formalized using more rigorous metrics related to the expected number of eligible points after range shrinkage, but since there are only four dimensions, using the graphs directly seems sufficient for our purpose here.
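
Plots of this kind can be produced along the following lines (a sketch; the coordinate labels are placeholders, not the paper's notation).

import matplotlib.pyplot as plt

def plot_eligibility(e_samples, s_values, eta, names=("e1", "e2", "e3", "e4")):
    # e_samples: (N, 4) sampled epistemic points; s_values: their degrees of
    # eligibility; eta: the threshold drawn as the red horizontal line.
    fig, axes = plt.subplots(1, len(names), figsize=(4 * len(names), 3))
    for d, ax in enumerate(axes):
        ax.scatter(e_samples[:, d], s_values, s=8)
        ax.axhline(eta, color="red")           # points below the line are eligible
        ax.set_xlabel(names[d])
        ax.set_ylabel("degree of eligibility")
    plt.tight_layout()
    plt.show()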

We find that two of the epistemic coordinates of the eligible points broadly range over the whole interval from 0 to 2, which implies that reducing the ranges of these two dimensions could hardly reduce our uncertainty. By contrast, the other two coordinates of the eligible points are concentrated in the lower part of their ranges. Our ranking of the epistemic parameters according to their ability to improve predictive ability follows accordingly.

Chances are that the true values of these two parameters are relatively small. In order to further pinpoint them, we choose to make two uncertainty reductions: increasing the lower limits of the bounding intervals of these two parameters.

Figure 3: The degree of eligibility against each epistemic variable (panels (a)-(d))

4.2 Impact of the Data Size (A.2)

To investigate the impact of the data size, for several different sizes we randomly subsample that many outputs without replacement and take them as the new data set. By repeatedly implementing Algorithm 1, we find that the larger the data size, the smaller the proportion of eligible points. It is intuitive that as the data size grows, the epistemic parameters can be better pinpointed. Moreover, for all but one epistemic variable, the range of the eligible points visibly shrinks as the data size increases, which further confirms that the remaining variable is the least important one.

4.3 Updated Parameter Ranking (B.3)

After the epistemic space is reduced, we repeat the process in Section 4.1, but now the epistemic points are generated uniformly from the reduced space. From the associated scatter plots (not shown here due to space limits), we obtain an updated ranking of the epistemic parameters.

5 Reliability of Baseline Design (Problem C)

5.1 Failure Probabilities and Severity (C.1, C.2 and C.5)

Combining the refined epistemic range provided by the host with our Algorithm 1, we construct an updated eligibility set. To estimate the range of the failure probability, we run simulations to respectively solve

(12)

where the feasible set of weights is the one in Eq. (11). These optimizations give the range of the failure probability for each individual requirement; we use the same method to approximate the failure probability of violating any requirement. Note that in our implementation the eligibility set in the formulations above is represented by discrete sampled points. As discussed previously, under additional smoothness assumptions, we could "smooth" these points to obtain a continuum. Nonetheless, under sufficient sampling, the discretized set should be a good enough approximation in the sense that the optimal values from the "discretized" problems are close to those using the continuum.
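
A sketch of this computation: for each eligible epistemic point, treat the simulated 0/1 failure indicators as the objective coefficients on the weights and solve the two LPs; the overall range then comes from the extremes over the eligible points. The dictionaries and helper names are ours, not the paper's.

import numpy as np
from scipy.optimize import linprog

def failure_prob_range(fail_indicators, constraints, eta):
    # fail_indicators: dict mapping each eligible epistemic index to an (m,) 0/1
    #   array (1 if the corresponding simulated aleatory sample violates the requirement).
    # constraints: dict mapping the same indices to (A_ub, b_ub, A_eq, b_eq) built
    #   as in the LP sketch for Eq. (10); eta: calibrated KS threshold.
    lows, highs = [], []
    for idx, ind in fail_indicators.items():
        A_ub, b_ub, A_eq, b_eq = constraints[idx]
        m = len(ind)
        bounds = [(0, None)] * m + [(0, eta)]
        c = np.concatenate([np.asarray(ind, dtype=float), [0.0]])
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                     bounds=bounds, method="highs")
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                     bounds=bounds, method="highs")
        lows.append(lo.fun)
        highs.append(-hi.fun)
    return min(lows), max(highs)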

Using the above method, we obtain the approximate ranges of the failure probabilities of interest. Though these ranges seem quite wide, they still provide useful information to be utilized next.

To evaluate the severity of each individual requirement violation, we similarly simulate the corresponding quantities. The results for the three requirements are respectively 0.1464, 0.0493 and 3.5989. Clearly the requirement with severity 3.5989 is the most severely violated, while the one with severity 0.0493 is the least.

5.2 Rank for Uncertainties (C.3)

Our analysis of the ranking of epistemic uncertainties is based on the failure probability range obtained above. In our computation, we obtain, for each eligible epistemic point, the minimum and maximum failure probabilities over the eligible weights; for simplicity, we refer to these two values as the best-case and worst-case failure probabilities of that point.

Our approach is to scrutinize the plots of these best-case and worst-case failure probabilities against each epistemic variable (not shown here due to space limits). For the best-case value, a large value is notable, since it means that any distribution consistent with the original data is going to fail with large probability; therefore the ideal reduction is to avoid the region of epistemic values where all best-case values are large. For the worst-case value, its largest value over a region gives the maximum failure probability one can have, so we pay attention to the epistemic variables that could potentially reduce this "worst-case" failure probability. Based on these considerations, we obtain the ranking of the epistemic uncertainties.

6 Reliability-Based Design (Problem D)

To find a reliability-optimal design point, we minimize

(13)

Here is the reason why we choose this function as the objective. For an eligible epistemic point, if the best-case (minimum over eligible weights) failure probability is large, then the true probability with which the system fails must be even larger than this "best-case" estimate, which implies that this point has a considerable failure likelihood. The objective above thus aims to find a design point that minimizes this best-case estimate, taken at the worst case among all eligible epistemic points. Arguably, one could use other criteria, such as minimizing the worst-case failure probability, but this could make our procedure more conservative.
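
A sketch of evaluating this objective for a candidate design theta, under hypothetical helpers simulate_fail (returning the 0/1 failure indicators of the simulated aleatory samples at a given design and epistemic point) and build_constraints (returning the LP constraints of Eq. (9) and Eq. (8) at that epistemic point):

import numpy as np
from scipy.optimize import linprog

def objective(theta, eligible_e, simulate_fail, build_constraints, eta):
    # Worst case over eligible epistemic points of the best-case (minimum over
    # eligible weights) failure probability at design theta, as in Eq. (13).
    worst = 0.0
    for e in eligible_e:
        ind = np.asarray(simulate_fail(theta, e), dtype=float)
        A_ub, b_ub, A_eq, b_eq = build_constraints(e)
        m = len(ind)
        c = np.concatenate([ind, [0.0]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * m + [(0, eta)], method="highs")
        worst = max(worst, res.fun)
    return worst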

The optimization problem (13) is of a "black-box" nature since the objective is only observed through simulation, and the problem is easily non-convex. Our approach is to use gradient descent to guide us towards a better design, with the goal of finding a reasonably good design point (instead of insisting on full optimality, which could be difficult to achieve in this problem). Note that we need to simulate anew whenever we land at a new design point during the iterations, and hence our approach takes the form of a stochastic gradient descent or stochastic approximation. Moreover, the gradient cannot be estimated in an unbiased fashion as we only have black-box function evaluations, and thus we need to resort to finite differences. This results in a zeroth-order, or so-called Kiefer-Wolfowitz (KW), algorithm. As we have a nine-dimensional design variable, we choose to update the design via coordinate descent, namely at each iteration we choose one dimension and run a central finite difference along that dimension, followed by a movement of that coordinate guided by this gradient estimate with a suitable step size. The updates cycle over the dimensions in a round-robin fashion. The perturbation size in the finite difference is chosen of an order that appears to perform well empirically (though theoretically other scalings could be better).

Input: The baseline design point. The initial step size. The initial perturbation size. The maximum number of iterations. The objective function.

Procedure:

Set the iteration counter to zero and initialize the normalized design at the baseline.
while the iteration counter is below the maximum do
     Update the step size and the perturbation size according to their decaying schedules.
     for each coordinate from 1 to 9 do
          Perturb the current coordinate upward by the perturbation size and evaluate the objective.
          Perturb the current coordinate downward by the perturbation size and evaluate the objective.
          Form the central finite-difference estimate of the partial derivative.
          Move the coordinate in the negative direction of this estimate, scaled by the step size.
     end for
     Increment the iteration counter.
end while
Output the final design, mapped back to the original scale via the Hadamard (componentwise) product with the normalization vector.
Algorithm 2 KW algorithm to find the new design point
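
A compact Python sketch of the coordinate-wise KW iteration in normalized design space follows. The decay schedules for the step size and the perturbation size are standard textbook choices, not necessarily those used in the paper, and objective is any single-argument callable on the normalized design (for example, the objective sketched in the previous section with its remaining arguments fixed).

import numpy as np

def kw_coordinate_descent(x0, objective, n_iter=50, a0=0.1, c0=0.05):
    # x0: normalized baseline design (9-dimensional); returns the final normalized design.
    x = np.array(x0, dtype=float)
    for k in range(n_iter):
        a_k = a0 / (k + 1)                  # step-size schedule
        c_k = c0 / (k + 1) ** 0.25          # finite-difference perturbation schedule
        for i in range(len(x)):             # round-robin over the coordinates
            e_i = np.zeros_like(x)
            e_i[i] = c_k
            grad_i = (objective(x + e_i) - objective(x - e_i)) / (2.0 * c_k)
            x[i] -= a_k * grad_i            # central-difference gradient step
    return x

The final normalized design is mapped back to the original scale via the componentwise (Hadamard) product with the normalization vector, as in Algorithm 2.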

Algorithm 2 shows the details of our optimization procedure. Considering that the components of the design point are of very different magnitudes, we first normalize them to ease this difference. The algorithm iterates on the position of the normalized design, initialized at a nine-dimensional vector representing the normalized baseline design point, and we set the initial step size and perturbation size to fixed constants.

After running the algorithm, we arrive at a new design point. Compared with the baseline design, the objective function decreases from 0.3656 to 0.2732. Note that this means that the best-case estimate of the failure probability, taken at the worst case over all eligible epistemic points, is 0.2732.

For the new design, we recompute the ranges of the failure probabilities defined in Section 5.1. We observe from the plots of the best-case and worst-case failure probabilities that one epistemic variable exhibits significantly different patterns at high values in both plots. According to the trends shown in the plots, we rank the epistemic variables accordingly.

7 Design Tuning (Problem E)

With the additional data sequences provided for this problem, we may incorporate the new information to update our model as before, and we accordingly refine the eligibility set. The final design is obtained by rerunning Algorithm 2 with this updated information.

References

  • S. Asmussen and P. W. Glynn (2007) Stochastic Simulation: Algorithms and Analysis. Vol. 57, Springer Science & Business Media.
  • A. Ben-Tal, D. Den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen (2013) Robust solutions of optimization problems affected by uncertain probabilities. Management Science 59 (2), pp. 341–357.
  • A. Ben-Tal and A. Nemirovski (2002) Robust optimization–methodology and applications. Mathematical Programming 92 (3), pp. 453–480.
  • D. Bertsimas, D. B. Brown, and C. Caramanis (2011) Theory and applications of robust optimization. SIAM Review 53 (3), pp. 464–501.
  • D. Bertsimas, V. Gupta, and N. Kallus (2018a) Data-driven robust optimization. Mathematical Programming 167 (2), pp. 235–292.
  • D. Bertsimas, V. Gupta, and N. Kallus (2018b) Robust sample average approximation. Mathematical Programming 171 (1-2), pp. 217–282.
  • J. Blanchet and H. Lam (2012) State-dependent importance sampling for rare-event simulation: an overview and recent advances. Surveys in Operations Research and Management Science 17 (1), pp. 38–59.
  • L. Crespo and S. Kenny (2020) The NASA Langley challenge on optimization under uncertainty. ESREL.
  • E. Delage and Y. Ye (2010) Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research 58 (3), pp. 595–612.
  • P. M. Esfahani and D. Kuhn (2018) Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations. Mathematical Programming 171 (1-2), pp. 115–166.
  • S. Ghosh and H. Lam (2019) Robust analysis in stochastic simulation: computation and performance guarantees. Operations Research 67 (1), pp. 232–249.
  • P. Glasserman and X. Xu (2014) Robust risk measurement and model risk. Quantitative Finance 14 (1), pp. 29–58.
  • A. Goeva, H. Lam, H. Qian, and B. Zhang (2019) Optimization-based calibration of simulation input models. Operations Research 67 (5), pp. 1362–1382.
  • L. J. Hong, Z. Huang, and H. Lam (2017) Learning-based robust optimization: procedures and statistical guarantees. arXiv preprint arXiv:1704.04342.
  • H. Lam (2016) Robust sensitivity analysis for stochastic systems. Mathematics of Operations Research 41 (4), pp. 1248–1275.
  • W. Wiesemann, D. Kuhn, and M. Sim (2014) Distributionally robust convex optimization. Operations Research 62 (6), pp. 1358–1376.