1 Nomenclature
$\mathcal{D}$ =  Data set of inputs and observed function values at those inputs 
$\mathbb{E}$ =  Expected value of a random variable 
$EI$ =  Expected Improvement 
$f$ =  Objective function of interest 
$\mathcal{GP}$ =  Gaussian process 
$K$ =  Matrix of pairwise kernel function evaluations 
$k$ =  Kernel function of a process 
$m$ =  Mean function of a process 
$\mathcal{N}$ =  Multivariate Gaussian distribution 
MVT =  Multivariate Student's-T distribution 
$p(y \mid x)$ =  Marginal probability of $y$ at an input $x$ 
$p(y \mid x, \mathcal{D})$ =  Marginal probability of $y$ at an input $x$ given data $\mathcal{D}$ 
$\mathcal{STP}$ =  Student's-T process 
$\mathcal{T}$ =  Multivariate Student's-T distribution 
$x$ =  Input to the objective function 
$\hat{x}$ =  Observed input to the objective function (with corresponding $\hat{y}$) 
$y$ =  Output from the objective 
$y^*$ =  Global optimum of a function 
$y_{best}$ =  Best current known function value 
$\hat{y}$ =  Observed output from the objective 
$\mu$ =  Mean parameter of the distribution 
$\nu$ =  Degrees of freedom parameter for the Student's-T distribution/process 
$\Sigma$ =  Shape parameter of the distribution 
$\sigma^2$ =  Marginal shape parameter at a given input 
$\Phi$ =  Cumulative distribution function of a standard Gaussian distribution 
$\Phi_T$ =  Cumulative distribution function of a standard Student's-T distribution 
$\phi$ =  Probability density function of a standard Gaussian distribution 
$\phi_T$ =  Probability density function of a standard Student's-T distribution 
$\Delta$ =  Constraint violation 
2 Introduction
One major challenge in aerospace design optimization is the computational expense of determining the quantity of interest. Even an Euler flow solution around a full aircraft geometry is still a nontrivial endeavor, let alone computing the aerodynamic forces with higher-fidelity governing equations like RANS and DES. Nonetheless, these higher-fidelity simulations are being increasingly relied upon by the design community, a trend which will only strengthen as researchers work to reduce the current deficiencies in computational fluid dynamics (CFD), such as turbulent transition, separation, and engine stall analysis [1]. There is accordingly a premium on sampling (data-generation) efficiency when designing optimization algorithms for objective functions that are such computationally expensive simulations.
Bayesian optimization is a class of techniques with several advantages in finding global optima of objective functions that are computationally expensive to sample (e.g., high-fidelity simulators). Perhaps their primary advantage is that they need relatively few samples to find the global minima of a function, in contrast to local hill-descending methods such as the Nelder-Mead simplex [2] or quasi-Newton methods [3] that only find local minima, and in contrast to population- or sampling-based global methods such as genetic algorithms [4] or simulated annealing [5] that typically require huge sample sets (e.g., hundreds or thousands of simulations in an aerospace design context). Bayesian optimization techniques work by defining a relative probability of different function behaviors in the form of a probabilistic prior over the space of functions. As the objective function is repeatedly evaluated, this prior distribution can be updated using Bayes' rule to obtain an updated belief, i.e., a posterior, about the function behavior at unobserved inputs. This Bayesian update is often computationally expensive relative to other classes of optimization algorithms, but this added expense is often negligible relative to (for example) the sampling cost of evaluating a flow simulation.

The most common prior used in Bayesian optimization is a Gaussian process (GP). A Gaussian process assumes that the marginal joint distribution of function values at any (finite) set of input locations is a multivariate Gaussian distribution (MVG). Gaussian processes have several desirable mathematical properties. In particular, the Bayesian update step is analytic, as is finding the marginal distributions for the function behavior at unknown locations. Gaussian processes have been widely used in global optimization, from the important EGO algorithm
[6], to supersonic jet optimization [7], to multi-fidelity modeling and optimization [8].

In spite of their success, GPs have several known shortcomings. First, the Gaussian distribution is not a "heavy-tailed" distribution, and so the Bayesian posterior is forced to assign low probability to extreme outliers, regardless of the data. Second, the posterior variance of a Gaussian process does not depend on the returned objective values, but only on the input evaluation locations. This implies, for instance, that the posterior variance is not higher if the objective values in the sample set are all very different from what was expected under the Gaussian prior. One way to deal with this is to define a hyperprior over GP parameters (such as the observation noise), but then evaluating the posterior is often not a simple update equation, instead requiring approximate inference algorithms such as Markov chain Monte Carlo.
In this paper, we argue for the use of a different probabilistic prior: a Student's-T process (STP). Student's-T processes assume that the function values at any (finite) set of input locations are jointly distributed according to a multivariate Student's-T distribution (MVT), unlike the MVG of a Gaussian process. Like a GP, an STP has an analytic formula for updating the prior with new samples, and it is easy to find the marginal distribution for unknown locations. Additionally, the Student's-T distribution includes an extra degrees of freedom parameter controlling the kurtosis of the distribution. This means outliers can be much more likely under an STP than under a GP. In addition, as will be shown, the posterior variance is increased if observed values vary by more than expected under the corresponding Gaussian process, and decreased if they vary by less than expected. As the degrees of freedom parameter approaches infinity the STP converges to a GP, so STPs are a generalization of GPs. These properties make STPs well-suited for aerospace optimization problems. Aerospace design problems often feature smooth regions punctuated by (near) discontinuities, for example the transition to stall, or the failure to meet constraints. Such discontinuities would be considered extremely unlikely under a Gaussian prior, but not under a Student's-T prior.
The paper is organized as follows. We begin by reviewing Gaussian processes. We then introduce the Student's-T process, comparing and contrasting it with GPs. We then derive formulae useful for Bayesian optimization, such as the expected improvement and the marginal likelihood. Finally, we compare the performance of GPs with STPs on benchmark optimization problems and on an aerostructural design optimization problem. We find that Student's-T processes significantly outperform their Gaussian counterparts. In the future, this work can be extended to more complex design examples, for example with function gradients and constraints, and to other Bayesian search processes.
3 Gaussian Process Review
In this section we briefly review Gaussian processes in a way that facilitates comparison with Student's-T processes, which we describe in the following section. A more complete description of Gaussian processes can be found in [9].
A GP is a "collection of random variables, any finite number of which have a joint Gaussian distribution" [9]. A GP gives a prior over the space of possible objective functions, parameterized by two functions. The first is a mean function, $m(x)$, that sets the prior expected value of the objective function at every input location $x$. The second is a kernel function, $k(x, x')$, that sets the covariance between the values of the objective function at any two input locations $x$ and $x'$.
We will write the GP prior distribution over objective functions specified by such a set of parameters as

$f \sim \mathcal{GP}(m, k)$ (1)

At a single input location $x$, the prior distribution of the objective function is described by a (univariate) Gaussian distribution

$p(y \mid x) = \mathcal{N}(\mu, \sigma^2)$ (2)

where $\mu = m(x)$ and $\sigma^2 = k(x, x)$. At multiple input locations $X = (x_1, \ldots, x_n)$, the joint marginal prior distribution is described by a multivariate Gaussian

$p(\mathbf{y} \mid X) = \mathcal{N}(\boldsymbol{\mu}, K)$ (3)

where $\mu_i = m(x_i)$ and $K_{ij} = k(x_i, x_j)$.
It is often the case that $m(x)$ is assumed to be 0, and this will be assumed for the remainder of the work. The kernel function is typically chosen so that the covariance decreases as $x$ and $x'$ become farther apart. A common choice is the "isotropic squared exponential" (SE) function

$k(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\ell^2}\right)$ (4)

where $\ell$ is a bandwidth hyperparameter that sets the correlation length.
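The SE kernel takes only a few lines to implement. The sketch below assumes the common $\exp(-\|x-x'\|^2 / 2\ell^2)$ normalization; other conventions differ only in how the bandwidth enters:

```python
import numpy as np

def se_kernel(X1, X2, ell=1.0):
    """Isotropic squared-exponential kernel matrix between two input sets.

    X1: (n, d) array, X2: (m, d) array, ell: bandwidth hyperparameter.
    Returns the (n, m) matrix with entries exp(-||x - x'||^2 / (2 ell^2)).
    """
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * ell ** 2))
```

As required of a covariance, the kernel is 1 at zero separation and decays smoothly as the two inputs move apart.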
As in standard Bayesian analysis, this prior can be updated every time the objective function is evaluated at a given $\hat{x}$ to produce some value $\hat{y}$. An important property of the Gaussian distribution is that it is closed under such conditioning on new data, i.e., the posterior distribution after observing a set of samples is still a Gaussian distribution in the other variables. Specifically, it can be shown that given a set of inputs $\hat{X}$ and outputs $\hat{\mathbf{y}}$, the distribution of the values $\mathbf{y}$ of the objective function at unobserved locations $X$ is given by

$p(\mathbf{y} \mid X, \hat{X}, \hat{\mathbf{y}}) = \mathcal{N}(\boldsymbol{\mu}', K')$ (5)

$\boldsymbol{\mu}' = K_*^T K^{-1} \hat{\mathbf{y}}$ (6)

$K' = K_{**} - K_*^T K^{-1} K_*$ (7)

where $K$ is the covariance matrix defined by the kernel function between the observed locations in $\hat{X}$, $K_*$ is the covariance between the observed locations and the unobserved locations, and $K_{**}$ is the covariance among the unobserved locations. It is notable that the posterior covariance matrix (7) depends only on the observed locations $\hat{X}$, and not on the function values $\hat{\mathbf{y}}$ themselves.
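The update above can be sketched directly for a zero-mean GP; a Cholesky factorization handles the $K^{-1}$ terms, and the small jitter constant is an illustrative numerical-stability choice, not from the paper:

```python
import numpy as np

def gp_posterior(X_obs, y_obs, X_new, kernel):
    """Zero-mean GP posterior at X_new given observations (X_obs, y_obs).

    Implements mu' = K_*^T K^{-1} y and K' = K_** - K_*^T K^{-1} K_*,
    where `kernel(A, B)` returns the matrix of pairwise kernel evaluations.
    """
    K = kernel(X_obs, X_obs)        # covariance among observed locations
    K_star = kernel(X_obs, X_new)   # observed vs. unobserved locations
    K_ss = kernel(X_new, X_new)     # covariance among unobserved locations
    # small jitter keeps the Cholesky factorization numerically stable
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(X_obs)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))  # K^{-1} y
    v = np.linalg.solve(L, K_star)
    return K_star.T @ alpha, K_ss - v.T @ v
```

At the observed locations themselves this noise-free posterior interpolates the data, and the posterior variance there collapses to (numerically) zero.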
GPs are often used for function minimization, where the goal is to find an input location which, if sampled, is likely to have a relatively low value of the objective function. There are many strategies for choosing the next input to sample, such as entropy search [10] or the knowledge gradient [11]. The most common strategy, and one of the simplest, is to select the input with the highest expected improvement beyond the current best known value. This means that given a current best function value $y_{best}$, the next $x$ to sample is

$x_{next} = \arg\max_x \mathbb{E}\left[\mathbb{1}(y < y_{best})(y_{best} - y)\right]$ (8)

where $p(y \mid x)$ is computed according to (5), and $\mathbb{1}$ is the indicator function that returns 1 if its argument is true and 0 otherwise. The expected improvement under a GP has an analytic form

$EI(x) = (y_{best} - \mu)\,\Phi(z) + \sigma\,\phi(z)$ (9)

where $z = (y_{best} - \mu)/\sigma$, and $\Phi$ and $\phi$ are the cumulative distribution function (CDF) and probability density function (PDF) of a standard Gaussian distribution, respectively. This analytic expression for the expected improvement makes it relatively efficient to solve the optimization problem in (8).
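The Gaussian EI needs only the standard-normal CDF and PDF, so it can be evaluated with the error function alone; a minimal sketch for minimization:

```python
import math

def gaussian_ei(mu, sigma, y_best):
    """Expected improvement for a Gaussian marginal N(mu, sigma^2):
    EI = (y_best - mu) * Phi(z) + sigma * phi(z), with z = (y_best - mu)/sigma."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)  # degenerate case: no predictive uncertainty
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard Gaussian CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard Gaussian PDF
    return (y_best - mu) * Phi + sigma * phi
```

Note that EI is always nonnegative and grows with the predictive uncertainty, which is what drives the exploration behavior of the acquisition function.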
4 Student's-T Processes
Student's-T processes are an alternate prior for Bayesian optimization. Just as GPs have marginal distributions described by the multivariate Gaussian distribution, STPs have marginal distributions described by the multivariate Student's-T distribution [12]. Student's-T processes receive mention in [9] and have been used occasionally in modeling [13, 14]. However, a renewal of interest began with Shah et al. [15], in which the authors derive the Student's-T process from a Wishart prior and show that STPs are the most general elliptical process with an analytic density (loosely, elliptical distributions are a class of distributions that are unimodal and whose likelihood decreases with distance from the mode). The process has been further explored for Bayesian optimization [16].
The multivariate Student's-T distribution is a generalization of the multivariate Gaussian distribution with an additional parameter, $\nu$, describing the degrees of freedom of the distribution. The probability density is given by:

$p(\mathbf{y}) = \frac{\Gamma\left(\frac{\nu + d}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)(\nu\pi)^{d/2}\,|\Sigma|^{1/2}}\left(1 + \frac{1}{\nu}(\mathbf{y} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{y} - \boldsymbol{\mu})\right)^{-\frac{\nu + d}{2}}$ (10)

where $d$ is the dimension of the distribution, $\boldsymbol{\mu}$ is a location parameter, $\Sigma$ is a symmetric positive definite shape parameter, and $\nu$ is the degrees of freedom. As in a Gaussian distribution, $\boldsymbol{\mu}$ is the mean (and mode) of the distribution. The shape parameter $\Sigma$ is not the covariance matrix of the distribution, but is related to it by

$\mathrm{cov}(\mathbf{y}) = \frac{\nu}{\nu - 2}\,\Sigma, \qquad \nu > 2$ (11)

As the degrees of freedom increases, i.e., $\nu \to \infty$, the multivariate Student's-T distribution converges to a multivariate Gaussian distribution with the same mean and shape parameter.
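The heavier tails and the Gaussian limit are easy to check numerically in the univariate ($d = 1$) case of the density above; the log-gamma form below is a numerical choice that avoids overflow for large $\nu$:

```python
import math

def t_pdf(y, nu, mu=0.0, sigma2=1.0):
    """Univariate Student's-T density (the d = 1 case of the multivariate form),
    computed in log space so that large degrees of freedom do not overflow."""
    z2 = (y - mu) ** 2 / sigma2
    log_norm = (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
                - 0.5 * math.log(nu * math.pi) - 0.5 * math.log(sigma2))
    return math.exp(log_norm - ((nu + 1.0) / 2.0) * math.log1p(z2 / nu))

def normal_pdf(y, mu=0.0, sigma2=1.0):
    """Gaussian density, the nu -> infinity limit of t_pdf."""
    return math.exp(-(y - mu) ** 2 / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
```

With $\nu = 3$, an outlier four scale units from the mean is far more probable than under the Gaussian, while for very large $\nu$ the two densities agree closely.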
Like the GP, the Student's-T process is parameterized by a mean function and a kernel function. However, it has one additional parameter, the degrees of freedom $\nu$. We will write the STP prior distribution over objective functions specified by such a set of parameters as

$f \sim \mathcal{STP}(\nu, m, k)$ (12)

As in a GP, the mean function defines the prior expected value at each location, and the kernel function sets the covariance between values of the objective function at any two locations $x$ and $x'$. The joint distribution for a finite subset of locations $X$ is

$p(\mathbf{y} \mid X) = \mathcal{T}(\nu, \boldsymbol{\mu}, K)$ (13)

where $\mu_i = m(x_i)$ is the vector of means and $K_{ij} = k(x_i, x_j)$ is the matrix of pairwise kernel evaluations (remembering the covariance is not the same as the shape parameter).

The multivariate Student's-T distribution, and by extension the Student's-T process, is closed under conditioning. Specifically, it can be shown [15, 17] that given a set of samples $(\hat{X}, \hat{\mathbf{y}})$, the posterior distribution is given by
$p(\mathbf{y} \mid X, \hat{X}, \hat{\mathbf{y}}) = \mathcal{T}(\nu', \boldsymbol{\mu}', K')$ (14)

$\boldsymbol{\mu}' = K_*^T K^{-1} \hat{\mathbf{y}}$ (15)

$K' = \frac{\nu + \beta - 2}{\nu + n - 2}\left(K_{**} - K_*^T K^{-1} K_*\right)$ (16)

$\nu' = \nu + n$ (17)

where $n$ is the number of observed samples and $\beta = \hat{\mathbf{y}}^T K^{-1} \hat{\mathbf{y}}$.
Comparing (6) and (15), we see that the posterior mean for an STP is identical to that of a GP (for the same kernel function). The posterior covariance, however, differs between the two processes. The rightmost term in (16) matches its Gaussian counterpart (7), but the leading term, which involves $\beta = \hat{\mathbf{y}}^T K^{-1} \hat{\mathbf{y}}$ and the number of samples $n$, has no equivalent in a GP. This term has an explicit dependence on $\hat{\mathbf{y}}$, the function outputs, and scales the posterior covariance. If $\beta$ is greater than $n$, then the posterior covariance is larger than its Gaussian counterpart, while if it is less, the posterior covariance is lower.
When do these conditions hold? It can be shown that the squared Mahalanobis distance of Gaussian-generated samples is distributed according to a $\chi^2_n$ distribution, which has mean $n$ [18]. This means $\mathbb{E}[\beta] = n$ if the output values are actually generated from a Gaussian process, i.e., $\hat{\mathbf{y}} \sim \mathcal{N}(0, K)$. Similarly, this implies that if the observed values vary from each other by about as much as one would expect under a GP, then the posterior covariance under an STP is roughly identical to the covariance of the equivalent GP. On the other hand, if the values vary by significantly more or less than expected, then the posterior uncertainty under the STP is significantly higher or lower. Note that as $\nu \to \infty$, the leading scaling term approaches 1 and the difference between $\beta$ and $n$ matters less, so the difference between the STP and GP predictions is most prominent for small values of $\nu$. In the small-$\nu$ regime, however, an STP will be more adaptive to the observed samples at the cost of the regularization provided by the Gaussian assumption.
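This Mahalanobis-distance fact is easy to check by simulation: for draws $\hat{y} \sim \mathcal{N}(0, K)$, the sample mean of $\beta = \hat{y}^T K^{-1} \hat{y}$ should be close to $n$. The kernel matrix below is an arbitrary symmetric positive definite stand-in, not one from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)              # arbitrary symmetric positive definite matrix
L = np.linalg.cholesky(K)
Y = L @ rng.standard_normal((n, 20000))  # 20000 draws from N(0, K), one per column
betas = np.einsum('ij,ij->j', Y, np.linalg.solve(K, Y))  # beta = y^T K^{-1} y per draw
print(np.mean(betas))  # close to n = 5
```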
The benefits of using an STP do not come with a significantly higher computational cost. The only major computational difference between the two update rules is the additional scaling term in (16). However, for both processes, the dominant cost is computing the Cholesky decomposition of $K$ to evaluate the terms involving $K^{-1}$, and this decomposition can be computed once and cached. In fact, the expression $K^{-1}\hat{\mathbf{y}}$ occurs in both (15) and (16), so if the mean prediction has already been computed, computing the additional scaling factor of the STP only requires a dot product, which is $O(n)$.
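The STP update (14)-(17) and its cost profile can be sketched as follows for a zero-mean process; note that the Cholesky factor is computed once and the extra scaling costs only a dot product:

```python
import numpy as np

def stp_posterior(X_obs, y_obs, X_new, kernel, nu):
    """Zero-mean Student's-T process posterior at X_new.

    mu' = K_*^T K^{-1} y                                   (same as the GP)
    K'  = (nu + beta - 2)/(nu + n - 2) * (K_** - K_*^T K^{-1} K_*)
    nu' = nu + n,   with beta = y^T K^{-1} y.
    """
    n = len(X_obs)
    K = kernel(X_obs, X_obs)
    K_star = kernel(X_obs, X_new)
    K_ss = kernel(X_new, X_new)
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n))  # factorized once, reused below
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))  # K^{-1} y
    beta = float(y_obs @ alpha)  # the O(n) dot product discussed above
    v = np.linalg.solve(L, K_star)
    mu = K_star.T @ alpha
    cov = (nu + beta - 2.0) / (nu + n - 2.0) * (K_ss - v.T @ v)
    return mu, cov, nu + n
```

The only lines that differ from the corresponding GP update are the computation of `beta` and the scalar multiplying the covariance.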
4.1 Expected Improvement
Under an STP, the marginal distribution for the output at a single (unobserved) input $x$ is a univariate Student's-T distribution, as discussed above:

$p(y \mid x, \hat{X}, \hat{\mathbf{y}}) = \mathcal{T}\left(\nu', \mu'(x), \sigma'^2(x)\right)$ (18)
Below we show that the Student's-T distribution also admits an analytic expression for the expected improvement over a given $y_{best}$. This implies that selecting the next design location using expected improvement is just as easy under an STP as under a GP.
To begin, for simplicity, define

$\mu = \mu'(x), \qquad \sigma = \sigma'(x), \qquad \nu = \nu'$ (19)

The expected improvement equation becomes

$EI(x) = \int_{-\infty}^{y_{best}} (y_{best} - y)\, p(y \mid x, \hat{X}, \hat{\mathbf{y}})\, dy$ (20)

First, substitute $u = (y - \mu)/\sigma$, $z = (y_{best} - \mu)/\sigma$.

$EI(x) = \sigma \int_{-\infty}^{z} (z - u)\, \phi_T(u)\, du$ (21)

$= \sigma z \int_{-\infty}^{z} \phi_T(u)\, du - \sigma \int_{-\infty}^{z} u\, \phi_T(u)\, du$ (22)

The integrand of the first integral on the RHS is the standard Student's-T distribution (i.e., a Student's-T distribution with $\mu = 0$ and $\sigma = 1$). So this integral is just the CDF of the standard T distribution, $\Phi_T(z)$. Next, write $\phi_T(u) = c_\nu\left(1 + u^2/\nu\right)^{-\frac{\nu + 1}{2}}$ with $c_\nu = \Gamma\left(\frac{\nu + 1}{2}\right) / \left(\Gamma\left(\frac{\nu}{2}\right)\sqrt{\nu\pi}\right)$, and in the second integral on the RHS make the substitution $w = 1 + u^2/\nu$, so $dw = (2u/\nu)\, du$.

$EI(x) = \sigma z\, \Phi_T(z) + \sigma c_\nu \frac{\nu}{\nu - 1}\left[\left(1 + \frac{u^2}{\nu}\right)^{-\frac{\nu - 1}{2}}\right]_{u = -\infty}^{u = z}$ (23)

$= \sigma z\, \Phi_T(z) + \sigma c_\nu \frac{\nu}{\nu - 1}\left(1 + \frac{z^2}{\nu}\right)^{-\frac{\nu - 1}{2}}$ (24)

$= \sigma z\, \Phi_T(z) + \sigma\, \frac{\nu + z^2}{\nu - 1}\, \phi_T(z)$ (25)

$EI(x) = (y_{best} - \mu)\, \Phi_T(z) + \sigma\, \frac{\nu + z^2}{\nu - 1}\, \phi_T(z)$ (26)
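The final expression (26) is straightforward to evaluate; a minimal sketch using SciPy's Student's-T distribution for $\Phi_T$ and $\phi_T$ (a convenience choice, not an implementation from the paper):

```python
from scipy.stats import t as student_t

def student_t_ei(mu, sigma, nu, y_best):
    """Expected improvement (for minimization) when the marginal for y is a
    Student's-T with mean mu, scale sigma, and nu > 1 degrees of freedom:
    EI = (y_best - mu) * Phi_T(z) + sigma * (nu + z^2)/(nu - 1) * phi_T(z)."""
    z = (y_best - mu) / sigma
    return ((y_best - mu) * student_t.cdf(z, df=nu)
            + sigma * (nu + z * z) / (nu - 1.0) * student_t.pdf(z, df=nu))
```

As $\nu \to \infty$ this recovers the Gaussian expected improvement with the same mean and scale.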
The formula (26) is the expected improvement over $y_{best}$ for a Student's-T distributed random variable, where again $z = (y_{best} - \mu)/\sigma$, and $\Phi_T$ and $\phi_T$ are the CDF and PDF of the standard Student's-T distribution, respectively (each with $\nu$ degrees of freedom). As expected, this converges to (9) as $\nu \to \infty$. When $\nu$ is small, however, the expected improvement can be significantly different from the Gaussian case, even when both distributions have the same mean and standard deviation. A smaller $\nu$ increases the likelihood of outliers, and in comparison to EI under a GP, more importance is placed on having a large uncertainty than on having a promising mean. The best value of $\nu$ will depend on how the objective function is likely to behave. If large deviations and outliers are actually expected, then small values of $\nu$ will correctly encourage exploration near existing samples, while if these deviations are unlikely, then a GP (or a larger value of $\nu$) will be better suited for modeling, as it will encourage more exploration.

4.2 Illustrative Comparison
We illustrate the difference between the GP and STP priors. First, we compare draws from the prior distribution (before the function has been sampled anywhere). We set both processes to have a zero mean function ($m(x) = 0$) and to use the isotropic squared exponential kernel function (4). The STP is set with a small degrees of freedom parameter. Fig. 1 compares realizations of these two processes. The dark green line is the mean of the process at each (1D) input, and the lighter green lines show the mean plus or minus two standard deviations. The lighter gray lines depict realizations of the particular process, with Fig. 1(a) representing draws from the GP and Fig. 1(b) showing realizations from the STP. Despite the fact that the two processes have the same mean and covariance, outliers are significantly more likely under the STP prior. The draws from the Gaussian process are mostly contained within the standard-deviation error lines, and the most extreme deviations are not much outside them. The STP, on the other hand, has many more outliers, and there are several draws that far exceed the bounds, which would be very unlikely under a Gaussian prior.
We next compare the processes after observing data. We update the GP and STP priors using the same set of input/output data (synthetically created for demonstration purposes) using the formulae in (5) and (14) respectively. The results of this update are shown in Fig. 2, with Fig. 2(a) showing the posterior of the GP and Fig. 2(b) showing the posterior of the STP. We first observe that the STP is more likely to generate large outliers, just as under the prior. The GP only has significant outliers near the edge of the domain, while the STP has significant outliers not only on the edge of the domain, but also within the interior. Second, we see that the posterior uncertainty is generally larger for the STP than under the GP for this set of samples. These effects combine to make it much more likely to see a large outlier close to existing samples under an STP than under a GP. As a result, if expected improvement is used to choose the next input to evaluate, an STP is more likely to choose an input close to an existing evaluation, while a GP is more likely to choose a location far from existing samples.
4.3 Marginal Likelihood
It is often the case in Bayesian optimization that the kernel function contains hyperparameters that need to be set, for example the bandwidth $\ell$ in (4). One common approach to setting those hyperparameters is to maximize the logarithm of the marginal likelihood of the data. The log marginal likelihood of the STP can be found analytically from the data:

$\log p(\hat{\mathbf{y}} \mid \hat{X}) = \log\Gamma\left(\frac{\nu + n}{2}\right) - \log\Gamma\left(\frac{\nu}{2}\right) - \frac{n}{2}\log(\nu\pi) - \frac{1}{2}\log|K| - \frac{\nu + n}{2}\log\left(1 + \frac{\beta}{\nu}\right)$ (27)

where $\log\Gamma$ is the log of the $\Gamma$ function and, again, $\beta = \hat{\mathbf{y}}^T K^{-1} \hat{\mathbf{y}}$. The value of $K$, and so this marginal likelihood, depends on the hyperparameters of the kernel function. So under this maximum marginal likelihood approach, those hyperparameters are chosen to maximize (27). This procedure could also be used to set $\nu$, though care should be taken to constrain $\nu$. The Student's-T distribution only has finite variance when $\nu > 2$, and finite kurtosis when $\nu > 4$. While outliers are likely in aerospace problems, typically our uncertainty is not so large as to have infinite kurtosis, and this knowledge should be encoded into (constraints on) the prior.
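In log form, (27) can be computed from a single Cholesky factorization of $K$; a sketch for a zero-mean STP (the jitter constant is an illustrative stability choice):

```python
import math
import numpy as np

def stp_log_marginal_likelihood(y, K, nu):
    """Log marginal likelihood of observations y under a zero-mean multivariate
    Student's-T with shape matrix K and degrees of freedom nu."""
    n = len(y)
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    beta = float(y @ alpha)                     # y^T K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))  # log|K| from the Cholesky factor
    return (math.lgamma((nu + n) / 2.0) - math.lgamma(nu / 2.0)
            - 0.5 * n * math.log(nu * math.pi) - 0.5 * log_det
            - 0.5 * (nu + n) * math.log(1.0 + beta / nu))
```

In a hyperparameter search, this quantity is recomputed for each candidate bandwidth (and, optionally, each candidate $\nu$), and the maximizer is kept.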
5 Numerical Experiments
We now compare Student's-T processes with Gaussian processes on problems in Bayesian optimization. We first compare their performance on synthetic benchmark functions commonly used in optimization, and then we compare them on an aerostructural optimization testbed problem. Rather than set $\nu$ via marginal likelihood (as discussed above), we instead use two different STPs, each with a fixed value of $\nu$, to better compare the processes.
5.1 Synthetic functions
The first synthetic function is the Rosenbrock function [19], given by

$f(\mathbf{x}) = \sum_{i=1}^{d-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2\right]$ (28)

The Rosenbrock function has a single local minimum of $0$ at $\mathbf{x} = (1, \ldots, 1)$. The second test function is the six-hump camel function [19], given by

$f(x_1, x_2) = \left(4 - 2.1 x_1^2 + \frac{x_1^4}{3}\right)x_1^2 + x_1 x_2 + \left(-4 + 4 x_2^2\right)x_2^2$ (29)

The six-hump camel function has six local minima. Two of these minima are global minima, which have a value of approximately $-1.0316$ at $(x_1, x_2) = (0.0898, -0.7126)$ and $(-0.0898, 0.7126)$.
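Both benchmarks are standard test functions [19] and take only a few lines to implement:

```python
def rosenbrock(x):
    """d-dimensional Rosenbrock function; its single minimum is 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def six_hump_camel(x1, x2):
    """Six-hump camel function; its two global minima of about -1.0316 sit at
    (0.0898, -0.7126) and (-0.0898, 0.7126)."""
    return ((4.0 - 2.1 * x1 ** 2 + x1 ** 4 / 3.0) * x1 ** 2
            + x1 * x2 + (-4.0 + 4.0 * x2 ** 2) * x2 ** 2)
```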
We compare the performance of a Gaussian process, a Student's-T process with a small value of $\nu$ (though still with finite kurtosis), and a Student's-T process with a medium value of $\nu$. All processes were set to have a mean function of 0, and to use the isotropic squared exponential kernel function. The Bayesian optimization procedure was carried out identically for all of the processes. First, an initial set of 20 input locations was generated using Latin hypercube sampling, and the objective function was evaluated at each location. The input and output data are scaled to have a mean of 0 and a variance of 1 in each dimension. Using these data, the best value of the kernel bandwidth parameter is found using a two-step grid search. Specifically, the marginal likelihood of the data as a function of the log of the bandwidth parameter is computed at evenly spaced locations over a fixed interval. The maximum of this initial grid search is used to center a second, refined grid search, again using evenly spaced values. Then, the optimization procedure is run. At each step, the next input evaluated is chosen as the one with the greatest expected improvement. This location is found by first searching a regular grid over the input space, and then using the best grid location as the initial point for a local optimization. Every several steps, the evaluated inputs and outputs are renormalized, and a new optimal bandwidth is found using the previously described procedure. The optimization is run until either the budget of function evaluations is exhausted, or the global optimum is found to within a small tolerance. The entire optimization procedure was repeated many times (with different initial samples) to find the average performance of the processes on the particular optimization problem. Note that for each individual optimization run, all processes begin with the same initial set of function evaluations.
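The two-step bandwidth search described above can be sketched as follows. The grid bounds and grid sizes are illustrative placeholders; only the structure (a coarse search in the log of the bandwidth, then a refined search around the coarse maximum) is taken from the text:

```python
import numpy as np

def two_step_bandwidth_search(log_likelihood, lo=-2.0, hi=2.0, n_grid=21):
    """Maximize `log_likelihood(ell)` over the bandwidth ell by a coarse grid
    search in log10(ell), followed by a finer grid around the coarse maximum."""
    grid = np.linspace(lo, hi, n_grid)
    scores = [log_likelihood(10.0 ** g) for g in grid]
    i = int(np.argmax(scores))
    # second, refined grid spanning the neighbors of the coarse maximum
    lo2, hi2 = grid[max(i - 1, 0)], grid[min(i + 1, n_grid - 1)]
    fine = np.linspace(lo2, hi2, n_grid)
    fine_scores = [log_likelihood(10.0 ** g) for g in fine]
    return 10.0 ** fine[int(np.argmax(fine_scores))]
```

In practice, `log_likelihood` would be the STP (or GP) log marginal likelihood of the current data as a function of the kernel bandwidth.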
Fig. 3 compares the performance on the Rosenbrock test case for the three processes. The vertical axis shows the log of the difference between the current best found value and the global optimum, with the difference capped at a small minimum value, and the horizontal axis shows the optimization step; the curves show how the average of this value changes as the optimization progresses. The shaded region shows the inner quartiles of performance over the runs, and the solid line depicts the median performance. Similarly, Fig. 4 shows the performance on the six-hump camel test case. It can be seen that while all processes find a significantly better value than the initial samples, both of the STPs significantly outperform the GP. The STPs both find a better final value, and home in on the optimum earlier in the optimization. Note also that the STPs are significantly more robust. Nearly all of the STP runs find the global minimum within the allotted budget, while in many cases the GP fails to find the best input. This is especially true for the six-hump camel function, which contains many local minima, exactly the case where Bayesian optimization is most appropriate. One of the two STP settings seems to modestly outperform the other, though the differences between them are minor compared with the differences from the GP.
5.2 Aerostructural optimization
The last test case is to find the optimal wing design for a coupled aerostructural problem, shown in Fig. 5. The coupled solver is the OpenAeroStruct package [20], which is implemented on top of the OpenMDAO framework [21]. This test case has a 7-dimensional input, a univariate objective, and two nonlinear constraints. The input parameters are a) the angle of attack of the wing, b) three wing thickness parameters, and c) three wing twist parameters, each constrained to lie within a bounded range. The objective is to minimize the fuel burn of a simulated mission, and the wing twist and thickness are constrained so that a) the design is physically realizable (the structure does not intersect with itself) and b) the aerodynamic forces do not cause the wing to break. We transform this constrained optimization problem into the following unconstrained problem for demonstration purposes. If the constraints are not violated, the objective is simply taken as the fuel burn. If the constraints are violated, then the objective is taken to be a large fixed penalty plus the constraint violation $\Delta$, where the penalty was chosen to be larger than the fuel burn of any valid design. (For some input conditions, the design constraints are satisfied but a negative fuel burn is returned. This is clearly nonphysical, and in this case the objective value is taken to be a similarly large penalty value. There are a small number of inputs for which the simulation crashes, and in this case the objective is also taken to be a large penalty value.) The optimization is performed almost identically as described above for the synthetic functions, except that in a 7-dimensional input space a full grid search for the best EI is too computationally intensive. Instead, candidate locations are generated from Latin hypercube sampling, and the best of those locations is used as the initial point for the local optimization step.
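The constrained-to-unconstrained transformation can be sketched as below. The penalty constant and the exact handling of nonphysical or crashed evaluations are hypothetical placeholders; the text only specifies that the penalty exceeds the fuel burn of any valid design:

```python
def penalized_objective(fuel_burn, violation, penalty=1.0e4):
    """Unconstrained demonstration objective: fuel burn when feasible, otherwise
    a large base penalty plus the constraint violation. The penalty constant is
    a placeholder, as is the treatment of nonphysical/crashed evaluations."""
    if fuel_burn is None or fuel_burn < 0.0:   # crashed or nonphysical simulation
        return penalty
    if violation > 0.0:                        # infeasible design
        return penalty + violation
    return fuel_burn
```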
Fig. 6 compares the performance of the GP against the two STPs for the aerostructural design problem. Here, the plot shows the log of the current best value (rather than the normalized best value), since the global optimum is unknown. While the difference is not as stark as in the analytic cases, it is still clear that the STPs outperform the GP on this test case. The median performance is consistently better as the optimization progresses, and as the optimization continues, the worse-performing quartile of the STP runs comes close to outperforming the better-performing quartile of the GP runs. The difference in performance between the two STPs is insignificant for this problem.
6 Conclusion
In this paper we began by presenting the Student's-T process, a prior over functions based on the multivariate Student's-T distribution. The STP has similar desirable properties to those of a Gaussian process, in that it has a simple expression for marginal distributions, and it has an analytic Bayesian update rule when new samples are observed. The STP also has some significant advantages over a Gaussian process. First, outliers are much more likely under an STP with a small value of the degrees of freedom parameter. Second, the posterior covariance adjusts depending on the actual function values observed (and not just their locations), increasing if the samples vary by more than expected, or decreasing if they vary by less than expected. We then presented an analytic expression for the expected improvement under a Student's-T distribution, and showed how to set kernel hyperparameters using the marginal likelihood. Finally, we presented numerical simulation results that show STPs outperform GPs on several synthetic benchmarks as well as an aerostructural design optimization problem.
It is known that STPs cannot be better than GPs for every possible design problem [22]. However, the STP prior appears to be more naturally suited to problems in aerospace optimization, which often feature large outliers and benefit from output-dependent posterior variance. With these advantages, and no obvious disadvantages, it seems natural to "upgrade" from Gaussian processes to Student's-T processes in many aerospace design applications. Future work remains to extend Student's-T processes to other important settings from Bayesian optimization, such as constrained optimization and multi-fidelity optimization, and also to other search strategies, such as the Knowledge Gradient algorithm.
7 Acknowledgements
This work was made possible through the support of the AFOSR MURI on multi-information sources of multi-physics systems under Award Number FA9550-15-1-0038. We would also like to thank the Santa Fe Institute for support of this research.
References
 Slotnick et al. [2014] Slotnick, J., Khodadoust, A., Alonso, J., Darmofal, D., Gropp, W., Lurie, E., and Mavriplis, D., “CFD vision 2030 study: a path to revolutionary computational aerosciences,” 2014.
 Nelder and Mead [1965] Nelder, J. A., and Mead, R., “A simplex method for function minimization,” The computer journal, Vol. 7, No. 4, 1965, pp. 308–313.
 Nocedal and Wright [2006] Nocedal, J., and Wright, S., Numerical optimization, Springer Science & Business Media, 2006.
 Deb et al. [2000] Deb, K., Agrawal, S., Pratap, A., and Meyarivan, T., "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," International Conference on Parallel Problem Solving From Nature, Springer, 2000, pp. 849–858.
 Van Laarhoven and Aarts [1987] Van Laarhoven, P. J., and Aarts, E. H., “Simulated annealing,” Simulated Annealing: Theory and Applications, Springer, 1987, pp. 7–15.
 Jones et al. [1998] Jones, D. R., Schonlau, M., and Welch, W. J., "Efficient global optimization of expensive black-box functions," Journal of Global Optimization, Vol. 13, No. 4, 1998, pp. 455–492.
 Lukaczyk et al. [2013] Lukaczyk, T., Taylor, T., Palacios, F., and Alonso, J., “Managing gradient inaccuracies while enhancing optimal shape design methods,” 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, 2013, p. 1042.
 Lam et al. [2015] Lam, R., Allaire, D. L., and Willcox, K. E., “Multifidelity optimization using statistical surrogate modeling for nonhierarchical information sources,” 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2015, p. 0143.

 Rasmussen [2006] Rasmussen, C. E., "Gaussian processes for machine learning," 2006.
 Hennig and Schuler [2012] Hennig, P., and Schuler, C. J., "Entropy search for information-efficient global optimization," Journal of Machine Learning Research, Vol. 13, No. Jun, 2012, pp. 1809–1837.
 Frazier et al. [2009] Frazier, P., Powell, W., and Dayanik, S., “The knowledgegradient policy for correlated normal beliefs,” INFORMS journal on Computing, Vol. 21, No. 4, 2009, pp. 599–613.
 Genz and Bretz [2009] Genz, A., and Bretz, F., Computation of multivariate normal and t probabilities, Vol. 195, Springer Science & Business Media, 2009.
 Yu et al. [2007] Yu, S., Tresp, V., and Yu, K., "Robust multi-task learning with t-processes," Proceedings of the 24th International Conference on Machine Learning, ACM, 2007, pp. 1103–1110.
 Archambeau and Bach [2011] Archambeau, C., and Bach, F., “Multiple Gaussian process models,” arXiv preprint arXiv:1110.5238, 2011.
 Shah et al. [2014] Shah, A., Wilson, A. G., and Ghahramani, Z., "Student-t Processes as Alternatives to Gaussian Processes," AISTATS, 2014, pp. 877–885.
 Shah et al. [2013] Shah, A., Wilson, A. G., and Ghahramani, Z., "Bayesian Optimization using Student-t Processes," NIPS Workshop on Bayesian Optimisation, 2013.
 Roth [2012] Roth, M., On the multivariate t distribution, Linköping University Electronic Press, 2012.
 Slotani [1964] Slotani, M., “Tolerance regions for a multivariate normal population,” Annals of the Institute of Statistical Mathematics, Vol. 16, No. 1, 1964, pp. 135–153.
 Molga and Smutnicki [2005] Molga, M., and Smutnicki, C., “Test functions for optimization needs,” Test functions for optimization needs, 2005, p. 101.
 Jasa et al. [2018] Jasa, J. P., Hwang, J. T., and Martins, J. R. R. A., “Opensource coupled aerostructural optimization using Python,” Structural and Multidisciplinary Optimization, 2018. (Submitted). Code retrieved from http://github.com/mdolab/OpenAeroStruct , last commit Nov 8, 2017.
 Gray et al. [2010] Gray, J., Moore, K. T., and Naylor, B. A., “OpenMDAO: An open source framework for multidisciplinary analysis and optimization,” AIAA/ISSMO Multidisciplinary Analysis Optimization Conference Proceedings, Vol. 5, 2010.

 Wolpert and Macready [1997] Wolpert, D. H., and Macready, W. G., "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1, 1997, pp. 67–82.