Model uncertainty is a significant source of epistemic uncertainty that affects the prediction of a multidisciplinary system. In order to achieve a reliable design, it is critical to ensure that the disciplinary/subsystem simulation models are trustworthy, so that the aggregated uncertainty of system quantities of interest (QOIs) is acceptable. Reduction of model uncertainty can be achieved by gathering additional experimental and simulation data; however, resource allocation for multidisciplinary design optimization (MDO) and analysis remains a challenging task due to the complex structure of the system, which involves decisions about where (sampling locations), what (disciplinary responses), and which type (simulations versus experiments) to allocate more resources. Instead of trying to concurrently make the above decisions, which would be generally intractable, we develop a novel approach in this paper to break the decision making into a sequential procedure. First, a multidisciplinary uncertainty analysis (MUA) is developed to identify the input settings with unacceptable amounts of uncertainty with respect to the system QOIs. Next, a multidisciplinary statistical sensitivity analysis (MSSA) is developed to investigate the relative contributions of (functional) disciplinary responses to the uncertainty of system QOIs. The input settings and critical responses to allocate resources are selected based on the results from MUA and MSSA, with the aid of a new correlation analysis derived from spatial-random-process (SRP) modeling concepts, ensuring the sparsity of the selected inputs. Finally, an enhanced preposterior analysis predicts the effectiveness of allocating experimental and/or computational resources to answer the question of which type of resource to allocate. The proposed method is applied to a benchmark electronic packaging problem to demonstrate how epistemic model uncertainty is gradually reduced via resource allocation for data gathering.

Introduction

With the increasing accuracy and complexity of simulation models, the cost of running high-fidelity computer simulations becomes more and more prohibitive in simulation-based design. Accordingly, emulators (also known as metamodels or surrogate models) have been widely used as a replacement of original simulation models and are especially useful in searching for the optimal design. However, due to the lack of simulation/experimental data, the accuracy of emulators can be very poor, which introduces epistemic model uncertainty that, in contrast to the aleatory uncertainty from natural/physical randomness, can be reduced by collecting additional data. We refer to the process of planning additional experiments/simulations for reducing epistemic model uncertainty as a resource allocation problem. Resource allocation is even more important in MDO, which, unlike traditional design optimization problems, is notoriously complicated due to the fact that it requires analyses in multiple disciplines and/or involves a number of subsystems and components. In particular, resource allocation for MDO requires an optimal scheme to distribute resources to different disciplines and to enhance the prediction of system QOIs. However, this is not an easy problem, because the coupling relationship between different disciplines leads to a nested structure and substantial complexity.

Model uncertainty, as a major source of epistemic uncertainty in multidisciplinary systems, stems from either having a limited knowledge of the true reality and/or using a simplified model that does not satisfactorily represent the reality (represented as “model discrepancy” [1]), or from a lack of computer simulation data (termed “code uncertainty” [1] or “interpolation uncertainty” [2]) when replacing the expensive computer model with a surrogate model (metamodel). Even though the sources of these two types of uncertainty are different, both belong to epistemic model uncertainty due to the lack of data: model discrepancy uncertainty due to the lack of experimental data and interpolation uncertainty due to the lack of computer simulation data. Both sources of epistemic uncertainty can be addressed simultaneously using the Bayesian Gaussian process approach presented by Kennedy and O'Hagan [1]. The epistemic model uncertainty of system QOIs is an aggregation of the uncertainties in the disciplinary subsystems, and the purpose of resource allocation is to reduce the aggregated prediction uncertainty of system QOIs. Nevertheless, it is not straightforward to provide a good resource allocation scheme, due to the number of decisions involved and their dynamic nature. The decisions to make include:

  1. (1)

    Where in the input space of a multidisciplinary system should we allocate additional resources? (That is, what are the input locations where system QOIs are most influenced by epistemic model uncertainty?)

  2. (2)

    To what disciplinary response(s) shall we allocate more resources?

  3. (3)

    Which type of resources shall we allocate? Is adding more simulations sufficient, or should we conduct more physical experiments?

The first question requires uncertainty quantification of disciplinary subsystems and an efficient way to propagate their impact to the system QOIs. Although there has been extensive research on uncertainty propagation for a single discipline [1,3–8], or for a multidisciplinary system but with only aleatory uncertainty (e.g., using Monte Carlo simulation [9–11], Taylor series expansion [12–15], decomposition strategies [16–22], etc.), existing work for multidisciplinary systems with epistemic model uncertainty is scarce. The problem is often oversimplified by either predefining model bias [23–27] or assuming a simple form of interdisciplinary couplings (restricted to feed-forward couplings [28], for example). Furthermore, even after developing a convenient uncertainty propagation scheme, one must still decide where to choose the input sample locations so that they sufficiently cover all regions with large epistemic model uncertainty. Consequently, the second question requires an approach to distinguish the relative contributions (which can vary over the input space) of the different disciplinary simulation models to the uncertainty of system QOIs. Statistical sensitivity analysis (SSA) is commonly used in engineering design to gain knowledge of complex model behavior and to facilitate designers' decision making on where to spend engineering effort [29]. However, SSA of model uncertainty, which is different from SSA of random variables in the more traditional literature, has rarely been studied [30–32]. The difficulty lies in the fact that uncertain models are essentially functional inputs rather than scalars, and hence, their SSA is fundamentally different from the traditional routines. Allaire et al. [30] developed a pure sampling-based approach, whose applicability is practically limited to single-disciplinary systems and hierarchical systems without strong couplings due to its computational cost. SSA for a fully coupled multidisciplinary system has not been studied by other researchers. The third question is perhaps the least studied of the three. Previous works focused on allocating a single type of resource, typically experimental (e.g., Ref. [28]). Indeed, relative to conducting simulations, conducting physical experiments may be a more direct and effective way to learn physical reality, but it is also typically more expensive. For that reason, there exists a tradeoff between reducing uncertainty the most (by conducting experiments) and increasing the computational budget the least (by conducting simulations).

On top of these three individually challenging questions, the complexity of concurrently answering them adds another layer of difficulty. To simplify the problem, the pioneering work by Sankararaman et al. [28] had to assume no coupling relationship between disciplines, and that the model uncertainty is purely due to some unknown model parameters, which makes SSA much easier.

To manage the complexity in the decision-making process, this paper presents a new strategy and mathematical framework for resource allocation in a multidisciplinary system, one that breaks the decision making into a sequential procedure. An MUA method [33,34] we developed earlier is applied to make the first decision (i.e., where). A multidisciplinary statistical sensitivity analysis (MSSA) [31,32], which we developed to handle uncertain functional responses as the stochastic inputs in the sensitivity analysis (the stochastic inputs are scalar variables in the traditional sensitivity analysis literature), is applied to investigate the relative contributions of disciplinary responses to the aggregated uncertainty of system QOIs for the second decision (i.e., what). A type of correlation analysis derived from SRP modeling concepts is also developed as an aid to ensure the sparsity of the selected input settings. Finally, an enhanced preposterior analysis predicts the effectiveness of allocating experimental or computational resources, or a combination of both, to make the last decision (i.e., which). The remainder of the paper is organized as follows. Section 2 reviews the SRP-based model bias correction approach to quantify epistemic model uncertainty and its extension to multidisciplinary systems. Section 3 details the proposed framework for resource allocation, which is applied in Sec. 4 to a benchmark electronic packaging problem to illustrate how the uncertainty of system QOIs is reduced over the input space via a few iterations of resource allocation. Concluding remarks are provided in Sec. 5.

For clarity of the discussions in this paper, we make the following remarks:

  1. (1)

    The objective of this paper is to improve the global modeling capability of a multidisciplinary system, such that the epistemic model uncertainty of system QOIs is acceptable over the input space. A globally accurate model can be useful in many design activities other than direct optimization; for example, the simulation/emulation results of a model can be passed into a classification analysis for designers to understand the potentially feasible/infeasible input settings and to be more focused on the “regions of interest.” This line of research [35,36] is different from the objective-oriented sampling [37–39] which specifically targets a local region where potential optimal design solution lies.

  2. (2)

    When referring to resources, we mean experiments and simulations. We use “experiments” in a broad context: it can be real physical experiments, or extremely high-fidelity simulations that (almost) can represent the reality. “Simulations” in this paper generally refer to results from physics-based models that are less accurate than experiments.

  3. (3)

    Emulators are nonphysics-based models that are built upon data from experiments and simulations. They are much less costly than direct simulations. In this work, an SRP modeling technique is used to create disciplinary emulators.

  4. (4)

    Experiments and simulations are conducted for individual disciplinary responses, but not for the system QOIs (which we consider practically unaffordable). Any full systemwise analysis is conducted using emulators.

  5. (5)

    The framework proposed in this paper is targeted at a multidisciplinary problem, although it can be simplified to work for a single discipline as well.

Review of Model Bias Correction in Multidisciplinary Systems

Model uncertainty is a significant type of epistemic uncertainty in a multidisciplinary system. To quantify model uncertainty, data from the computer simulations are first collected and then compared with the experimental data. Based on the differences in the response values over the two datasets, the original simulation model is “updated” to a more accurate emulator; meanwhile, the model uncertainty is quantified. We refer to this process as model bias correction. Subsequently, an extra experimental dataset is provided to validate the emulator. Model bias correction is conducted iteratively by refining the model and/or collecting additional data, until the emulator is trustworthy. In this section, we briefly review the SRP-based model bias correction technique and its application in multidisciplinary systems.

For a functional response y(x) that depends on a p-dimensional set of input variables x = (x1,…,xp), the following formulation [1,3–8] presents the relationship between the simulations and the experiments:

ye(x) = ym(x) + δ(x) + ε    (1)

where the superscript “e” denotes the experimental response of y(x), and the superscript “m” denotes the simulation model of the same response. δ(x) is the discrepancy function that represents the model bias, and ε ∼ N(0,λ) is a random error accounting for the experimental variability, assumed to be normally distributed with unknown variance λ. The situation where ε does not have zero mean (i.e., a so-called experimental bias or “systematic error”) would be more complicated and is beyond the scope of this paper.

Suppose that a set of M simulation response observations {xim, ym(xim): i = 1,…, M} and a set of N experimental response observations {xie, ye(xie): i = 1,…, N} have been collected. Since xim (i = 1,…, M) and xie (i = 1,…, N) are not necessarily the same input settings, ym(xim) and ye(xie) cannot be directly compared to obtain the discrepancy function; an SRP modeling technique is therefore adopted to bridge the gap and enable the comparison. An SRP can be viewed as a collection of random variables distributed over the input space that collectively have a certain multivariate spatial correlation structure. Specifically, Gaussian process (GP) modeling, which assumes the random variables to follow a multivariate normal distribution, has emerged as one of the most popular metamodeling approaches due to its ability to capture functional nonlinearity, perfectly interpolate deterministic surfaces, etc. [40,41]. In this study, we adopt GP modeling and model the simulation response and the discrepancy function as GPs such that
ym(x) ∼ GP(hm(x)Tβm, Vm(x, x′)),   δ(x) ∼ GP(hδ(x)Tβδ, Vδ(x, x′))    (2)

In the preceding, hm(x) and hδ(x) denote vector-valued functions whose elements are some user-specified regression basis functions (e.g., constant, linear, quadratic, etc.), and βm and βδ are two vectors of regression coefficients associated with hm(x) and hδ(x), respectively. Their products, hm(x)Tβm and hδ(x)Tβδ, construct the prior mean functions of ym(x) and δ(x). σm and σδ are the prior standard deviations, and ωm = [ω1m, ω2m,…, ωpm]T and ωδ = [ω1δ, ω2δ,…, ωpδ]T are the spatial correlation parameters used to capture the nonlinearity of the functions. Vm(x,x′) and Vδ(x,x′), which depend on σm, σδ, ωm, and ωδ, are the covariance functions of ym(x) and δ(x), respectively.

Combining Eqs. (1) and (2) yields that the collected data follow a multivariate normal distribution
d ∼ N(Hβ, Vd)    (3)

where d = {ym(x1m),…, ym(xMm), ye(x1e),…, ye(xNe)}T is the collected response data from simulations and experiments, and β = [(βm)T, (βδ)T]T is the collection of regression coefficients. H is a matrix generated by hm(·) and hδ(·), and Vd is generated by Vm(·,·) and Vδ(·,·) (see Refs. [8] and [41] for the full derivation and discussion).

Let ϕ = {βm, σm, ωm, βδ, σδ, ωδ, λ} denote the collection of unknown parameters of the GP models, referred to as hyperparameters. The key to estimating the discrepancy function and subsequently enabling a better model prediction is to estimate these hyperparameters. They are usually estimated via maximum likelihood estimation, i.e., by maximizing the likelihood p(d|ϕ) with respect to ϕ. Subsequently, we can obtain a prediction of ye(x) at any untested design of interest x. The relationship between ye(x) and its prediction can be written as
ye(x) = ŷe(x) + Z(x)    (4)
where ŷe(x) denotes the updated mean prediction that takes into account the discrepancy function δ(x). Z(x) is a zero-mean GP, i.e., Z(x)|x ∼ N(0, σZ2(x)), which accounts for the uncertainty in the prediction. σZ2(x) is also known as the mean square error of the mean prediction ŷe(x). The merit of using GP models is that we are able to obtain analytical equations for calculating ŷe(x) and σZ2(x). For example,
ŷe(x) = h(x)Tβ̂ + t(x)T Vd^{-1} (d − Hβ̂)    (5)

where

h(x) = [hm(x)T, hδ(x)T]T,  t(x) = [Vm(x, x1m),…, Vm(x, xMm), Vm(x, x1e) + Vδ(x, x1e),…, Vm(x, xNe) + Vδ(x, xNe)]T    (6)

Again, Refs. [8] and [41] provide full details of calculating ŷe(x) and σZ2(x).
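To make the prediction step concrete, the following is a minimal numerical sketch in Python, assuming zero prior means (no regression basis), squared-exponential covariance functions for both the simulation GP and the discrepancy GP, and hyperparameters that have already been fixed; the function names and signatures are illustrative and are not the implementation of Refs. [8] and [41].

import numpy as np

def gauss_cov(X1, X2, sigma2, omega):
    # Squared-exponential covariance: V(x, x') = sigma2 * exp(-sum_k omega_k (x_k - x'_k)^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * omega).sum(axis=2)
    return sigma2 * np.exp(-d2)

def predict_ye(x_star, Xm, ym, Xe, ye, sm2, wm, sd2, wd, lam):
    # Updated mean prediction y_hat_e(x_star) and its MSE sigmaZ2(x_star), combining
    # M simulation observations (Xm, ym) and N experimental observations (Xe, ye).
    d = np.concatenate([ym, ye])
    # Joint prior covariance Vd of the stacked data, following the structure of Eq. (3)
    Vmm = gauss_cov(Xm, Xm, sm2, wm)
    Vme = gauss_cov(Xm, Xe, sm2, wm)
    Vee = gauss_cov(Xe, Xe, sm2, wm) + gauss_cov(Xe, Xe, sd2, wd) + lam * np.eye(len(ye))
    Vd = np.block([[Vmm, Vme], [Vme.T, Vee]])
    # Prior covariances between y_e(x_star) and the data (the simulation part has no delta term)
    t = np.concatenate([
        gauss_cov(x_star[None, :], Xm, sm2, wm).ravel(),
        gauss_cov(x_star[None, :], Xe, sm2, wm).ravel()
        + gauss_cov(x_star[None, :], Xe, sd2, wd).ravel(),
    ])
    alpha = np.linalg.solve(Vd, d)
    y_hat = t @ alpha                                   # updated mean prediction
    sigmaZ2 = sm2 + sd2 - t @ np.linalg.solve(Vd, t)    # MSE of the mean prediction
    return y_hat, sigmaZ2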

Figure 1 illustrates a typical result from GP-based model bias correction, in which there is originally a significant model bias when comparing the simulation data (the triangles) and the experimental data (the circles), especially on the left and right boundaries of the x domain. Using GP modeling, we are able to obtain an updated model (the solid purple curve) that matches the experimental data, while maintaining the general functional trend learned from the simulation data. In addition, GP modeling enables the quantification of interpolation uncertainty at the locations that have not yet been simulated, represented by a prediction interval (PI) (the shaded region) calculated from σZ2(x).

Fig. 1  Illustration of GP-based model bias correction

GP-based model bias correction can be applied to multidisciplinary systems. A notional multidisciplinary system is depicted in Fig. 2. Suppose it involves a total of ND disciplines associated with a collection of individual disciplinary input variables xind = {xi: i = 1,…, ND} and some input variables xs that are shared across at least two disciplines. The disciplines are coupled via linking variables uij (i, j = 1,…, ND), which serve as output from the ith discipline and input to the jth discipline. If both uij and uji are nonempty, the relation between the ith and jth disciplines is referred to as feedback coupling, and otherwise, feed-forward coupling. We denote the collection of all linking variables that are output from and input to the ith discipline as ui. and u.i, respectively, i.e., ui. = {uij: j = 1, …, ND, j ≠ i} and u.i = {uji: j = 1, …, ND, j ≠ i}. The system QOIs ysys depend on individual disciplinary responses yind = {yi: i = 1, …, ND} collected from the responses of ND disciplines, through the system analysis model.

Fig. 2  A notional multidisciplinary system with epistemic model uncertainty
Epistemic model uncertainty widely exists in multidisciplinary systems. For the ith discipline, the simulated response ui. may have uncertainty, which will introduce uncertainty into u.i and further influence ui. itself. Such “convolution” accumulates uncertainty at the disciplinary level and passes it on to the disciplinary outputs yi. Another complication is that the simulation model for yi may also involve epistemic model uncertainty. That is, even if u.i has no uncertainty, simulating the response yi could still involve uncertainty. Based on Eq. (4), we present the predictions of all the linking variables and the disciplinary output variables, respectively, as (for i = 1, 2,…, ND)
ui.e(xi, xs, u.i) = ûi.e(xi, xs, u.i) + Zui.(xi, xs, u.i)    (7)

and

yie(xi, xs, u.i) = ŷie(xi, xs, u.i) + Zyi(xi, xs, u.i)    (8)

Assessing the cumulative uncertainty of system QOIs calls for an uncertainty propagation approach. Our recent development of a GP-based MUA method [33,34] addresses the issues of efficient uncertainty propagation in multidisciplinary systems, which is generically suitable for situations with both aleatory uncertainty (from input variables xind and xs) and epistemic uncertainty (from models). A version of the method that only considers epistemic uncertainty (which is the main focus of this paper) will be provided in Sec. 3.1.

Resource Allocation for Uncertainty Reduction in Multidisciplinary Systems

In this section, we present in detail the proposed resource allocation approach for reducing epistemic model uncertainty in a multidisciplinary system. An overview is provided in Fig. 3. In the preliminary step, model bias correction is conducted based on some existing data to construct emulators for the disciplinary responses (linking the inputs xi, xs, and u.i to the disciplinary responses for each discipline i). In step 1, a space-filling strategy is used to explore the input space {xind, xs} by generating sufficient sample points and identifying the locations with unacceptable amounts of uncertainty with respect to the system QOIs ysys. The MUA method is applied to assess the aggregated epistemic model uncertainty of ysys at each sample point (Sec. 3.1). An MSSA then separates the impact of epistemic model uncertainty from different responses at each sample point (Sec. 3.2). Decisions about resource allocation (where, what, and which) are made in sequence in step 2, where we integrate the information from MUA, MSSA, and a preposterior analysis. Finally, in step 3, as resources are allocated, another round of model bias correction is performed.

Fig. 3  Flowchart of the proposed resource allocation approach
Note that it may require a few iterations of the procedure in Fig. 3 before the epistemic model uncertainty is reduced to a satisfactory level. Since the variance of system QOIs ysys is an indicator of the amount of uncertainty, we specify the following stopping criterion in this paper:
γ(xind, xs) = √Var[ysys(xind, xs)] / E[|ysys|] ≤ α%    (9)

The numerator is the standard deviation of ysys (at any given input setting {xind, xs}), and the denominator is the mean of the absolute value of ysys over the input space. Thus, γ is an intuitive measure of the prediction uncertainty that resembles the “coefficient of variation” in traditional statistics. The user can choose other variance-based stopping criteria, independent of the rest of the procedure presented.
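As a small illustration, the check against the stopping criterion over the generated sample points can be expressed as follows (a sketch; the variable names are illustrative, and var_ysys_at_samples would come from the MUA of Sec. 3.1).

import numpy as np

def gamma_violations(var_ysys_at_samples, mean_abs_ysys, alpha=0.10):
    # Eq. (9): gamma = standard deviation of y_sys at each sample point, normalized by E[|y_sys|]
    gamma = np.sqrt(var_ysys_at_samples) / mean_abs_ysys
    return np.flatnonzero(gamma > alpha)   # indices of sample points that still need resources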

Exploring the Input Space and Evaluating the Impact of Epistemic Model Uncertainty.

In step 1, to find where in the input space the system QOIs are most influenced by epistemic model uncertainty, a straightforward treatment is to apply a space-filling strategy (for example, Monte Carlo uniform sampling, Latin hypercube sampling (LHS), Halton sequence, etc.) and generate Ns samples over the input space: (xind(k), xs(k)) = (x1(k),…, xND(k), xs(k)), k = 1,…, Ns. Next, for each sample point, the GP-based MUA method [33,34] is applied to evaluate the mean and variance of the system QOIs ysys. The merit of the MUA method is that it utilizes the structure of the GP emulators and enables an efficient analytical uncertainty analysis. The following discussion in this subsection applies to every sample input that is generated, and hence, for notational simplicity, we omit the superscript “(k).”

Given a specific input setting (xind, xs), a first-order Taylor expansion of ûi·e(·,·,·) and ŷie(·,·,·) in Eqs. (7) and (8) is applied, which leads to (for i = 1,…, ND)
ui.e ≈ ûi.e(xi, xs, μu.i) + (∂ûi.e/∂u.i)(u.ie − μu.i) + Zui.(xi, xs, μu.i)    (10)

yie ≈ ŷie(xi, xs, μu.i) + (∂ŷie/∂u.i)(u.ie − μu.i) + Zyi(xi, xs, μu.i)    (11)
where μui., μu.i, and μyi denote the means of ui., u.i, and yi, respectively. The derivatives on the right-hand side, while not explicitly written out, are taken at the mean of the variables, i.e., (xind, xs, μu). Therefore, the means and covariance matrices of the linking variables (i.e., all uij's) and the disciplinary outputs (i.e., all yi's) can be estimated by (for i = 1,…, ND)
μui. = ûi.e(xi, xs, μu.i),   Σu ≈ (I − A)^{-1} ΣZu (I − A)^{-T}    (12)

μyi = ŷie(xi, xs, μu.i),   Σy ≈ B Σu B^T + ΣZy    (13)
where Σu, ΣZu, Σy, and ΣZy denote the covariance matrices of ue, Zu, ye, and Zy, respectively. The matrices A and B are
A = ∂ûe/∂u, evaluated at (xind, xs, μu)    (14)

B = ∂ŷe/∂u, evaluated at (xind, xs, μu)    (15)

Since GP-based model bias correction provides analytical formulas to calculate ûi·e(·,·,·) and ŷie(·,·,·) (similar to Eq. (5)), the entries of matrices A and B also have analytical forms, which enables an efficient calculation.

Detailed derivations can be found in Refs. [33] and [34]. After deriving the means and covariance matrices of the disciplinary responses yind, the derivation of the mean and variance of the system QOIs ysys can be treated as a single-disciplinary uncertainty propagation problem and solved using any conventional method. From the variance of ysys, it can be easily discovered which regions have large uncertainty.
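As a concrete illustration of this propagation step, the sketch below computes the means and covariance matrices of the linking variables and disciplinary outputs from generic emulator mean functions and the epistemic covariances of the Z terms, following the linearized relations written above. The Jacobians are approximated by finite differences here for simplicity, whereas the GP emulators provide them analytically; all function names and signatures are assumptions for illustration.

import numpy as np

def propagate_epistemic(u_hat, y_hat, Sigma_Zu, Sigma_Zy, x, u0, tol=1e-8, h=1e-6):
    # u_hat(x, u): emulator means of all linking variables; y_hat(x, u): emulator means of all y_i
    # 1) solve the coupled mean equations mu_u = u_hat(x, mu_u) by fixed-point iteration
    mu_u = np.asarray(u0, dtype=float)
    for _ in range(200):
        mu_new = np.asarray(u_hat(x, mu_u), dtype=float)
        converged = np.max(np.abs(mu_new - mu_u)) < tol
        mu_u = mu_new
        if converged:
            break
    # 2) Jacobians A = d u_hat / d u and B = d y_hat / d u at (x, mu_u), by central differences
    mu_y = np.asarray(y_hat(x, mu_u), dtype=float)
    n = mu_u.size
    A = np.empty((n, n))
    B = np.empty((mu_y.size, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = h
        A[:, j] = (np.asarray(u_hat(x, mu_u + du)) - np.asarray(u_hat(x, mu_u - du))) / (2 * h)
        B[:, j] = (np.asarray(y_hat(x, mu_u + du)) - np.asarray(y_hat(x, mu_u - du))) / (2 * h)
    # 3) linearized covariances of the linking variables and the disciplinary outputs
    S = np.linalg.solve(np.eye(n) - A, np.eye(n))   # (I - A)^{-1}
    Sigma_u = S @ Sigma_Zu @ S.T
    Sigma_y = B @ Sigma_u @ B.T + Sigma_Zy
    return mu_u, Sigma_u, mu_y, Sigma_y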

Separating the Impact of Epistemic Model Uncertainty From Disciplinary Responses.

Apart from assessing the aggregated impact of epistemic model uncertainty on ysys over the Ns sample points, one can actually separate the total impact into the contributions from different responses (uij's and yi's) using SSA. This information is particularly important for resource allocation in a multidisciplinary system, as it helps decision makers understand what the most important factors are that influence the system performance. Moreover, it will be shown in Sec. 3.3 that it also helps in choosing the most appropriate input settings that are most in need of additional resources.

The most widely used measure of variability in SSA is variance, because it is usually interpretable for practitioners. Building on the Sobol’ method [42,43], which is a variance-based Monte Carlo method for aleatory input variability, we developed a more comprehensive MSSA in Refs. [31] and [32] that assesses the impact of both aleatory and epistemic uncertainties. Different from the Sobol’ method, MSSA has the capability of measuring the impact of functional responses that serve as stochastic inputs in the sensitivity indices (the stochastic inputs are scalar variables in the traditional Sobol’ method, although the output is a functional response). In particular, we use local sensitivity analysis in this subsection, which focuses on examining the impact of uncertainties from different sources at a single input setting.

In the Sobol’ method, the variance of a random output variable Y can be decomposed into the contributions of a set of random input variables X1,…, Xp. The main sensitivity index (MSI) and the total sensitivity index (TSI) of a particular input Xi, which measure the main effect and the overall effect of Xi on the output Y, respectively, are given by
MSIi = VarXi(EX∼i(Y|Xi)) / Var(Y)    (16)

TSIi = 1 − VarX∼i(EXi(Y|X∼i)) / Var(Y)    (17)
where X∼i denotes all the input variables except Xi. The subscript in “Var” and “E” denotes the source of uncertainty; for example, VarXi(·) is the variance of the variable in the brackets due to the variation of Xi. Both MSI and TSI are between 0 and 1, with 0 indicating no effect and 1 the most dominant effect. The same idea can be extended to examining the relative contributions of epistemic model uncertainty. If we view any Zl ∈ {Zui., Zyi: i = 1,…, ND} (the random term that quantifies epistemic model uncertainty of a particular response in Eqs. (7) and (8)) in a similar way as Xi is treated in Eqs. (16) and (17), then the MSI and TSI of Zl on the system QOIs ysys can be defined as
MSIZl = VarZl(EZ∼l(ysys|Zl)) / Var(ysys)    (18)

TSIZl = 1 − VarZ∼l(EZl(ysys|Z∼l)) / Var(ysys)    (19)

where Z∼l = {Zui., Zyi: i = 1,…, ND}\Zl. The Sobol’ variance decomposition assumes that the uncertainty sources are uncorrelated, which is reasonable in our case because the epistemic uncertainty of the individual disciplinary models (i.e., Zl's) is quantified separately and not correlated with each other.

It is worth noting that although Eqs. (18) and (19) resemble the standard Sobol’ indices (Eqs. (16) and (17)), the calculation is much more complex due to the fact that the Z's are the stochastic “input variables” in our indices, but they are really stochastic functional responses over the input variables (i.e., random processes rather than random variables). In standard Sobol’ indices, the output is a functional response over the stochastic inputs, but the stochastic inputs are scalar variables and not functional responses themselves. For example, a sampling-based treatment to calculate the MSI of Zl as defined in Eq. (18) is a trilevel procedure: in the outer level, one needs to generate multiple Monte Carlo realizations of Zl (which are, by nature, realizations of a function); in the intermediate level, for each realization of Zl, one needs to generate realizations of Z∼l (realizations of a set of functions); and finally, in the inner level, given the realizations of all the Z's, computations are needed to evaluate EZ∼l(ysys|Zl). Therefore, the overall computation could be extremely expensive. The same applies to calculating the TSI of Zl.
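For contrast with the functional-input case just described, a standard double-loop Monte Carlo estimator of the scalar-input MSI in Eq. (16) can be sketched as follows; the indices in Eqs. (18) and (19) add the extra level of sampling functional realizations of Zl on top of this. The model f and the sampler are placeholders, and independent inputs are assumed.

import numpy as np

def main_sensitivity_index(f, sample_inputs, i, n_outer=200, n_inner=200, seed=0):
    # Double-loop estimate of MSI_i = Var_{X_i}(E_{X~i}(Y|X_i)) / Var(Y) for scalar inputs
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n_outer * n_inner)        # (N, p) samples of all inputs
    var_Y = np.var(f(X))                             # total output variance
    cond_means = np.empty(n_outer)
    for k in range(n_outer):
        xi = sample_inputs(rng, 1)[0, i]             # fix one realization of X_i
        X_in = sample_inputs(rng, n_inner)           # resample the remaining inputs
        X_in[:, i] = xi
        cond_means[k] = f(X_in).mean()               # E_{X~i}(Y | X_i = xi)
    return np.var(cond_means) / var_Y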

The MUA method developed in Sec. 3.1 significantly speeds up the analysis. By applying Eqs. (12) and (13), VarZl(EZ∼l(ysys|Zl)) is directly obtained without having to sample Zl, which eliminates the need for the outer level aforementioned. If necessary, one can further simplify the analysis and obtain rough approximations of the MSI and TSI by approximating EZ∼l(ysys|Zl) and EZl(ysys|Z∼l) with [ysys | Z∼l ≡ 0, Zl] and [ysys | Zl ≡ 0, Z∼l], respectively, i.e., fixing the randomly varying Z terms at their zero means, which then eliminates the need for the intermediate level as well. Such linearization would certainly lead to a less accurate sensitivity assessment, but it is very useful for a fast analysis of a large complex system.

Choosing Input Settings and Corresponding Responses for Resource Allocation.

To decide where to allocate more resources, a simple approach is to choose a few samples (up to the budget) with the largest variance of ysys among the Ns samples generated in Sec. 3.1. However, this is not the best approach, since multiple sample points may happen to fall inside the same region that has a large epistemic model uncertainty, especially when Ns is large. In such case, allocating resource to a single point inside this region should be sufficient for reducing the uncertainty. A good decision-making criterion is needed to detect whether two points with large uncertainty are “sufficiently far away” from each other, so that our final selection of samples would be widely spread over the input space without dense clustering. In this subsection, we propose a form of correlation analysis that utilizes information from the GP-based correlation, which measures the spatial proximity between two input settings, to select the best subset of input settings to allocate more resources.

Consider two sample points, denoted by (xind(k),xs(k)) (k=1,2) without loss of generality, from the Ns samples we generated in Sec. 3.1. We start by calculating their sensitivity indices and detecting the main contributing sources of uncertainty, respectively, at these two input locations. If their major sources of uncertainty are different, i.e., if Var[ysys(xind(k),xs(k))] (k=1,2) are mostly induced by model uncertainty from two different responses, then it is concluded that (xind(1),xs(1)) and (xind(2),xs(2)) are “far away” from each other and reside in different regions. On the other hand, if Var[ysys(xind(k),xs(k))] (k=1,2) turn out to be induced by uncertainty of the same response (for example, some response L(xL, xs, u.L)∈{ui., yi: i = 1,…, ND} associated with model uncertainty term Zl), then the two points may or may not be far away. We need to proceed to evaluate their correlation coefficient in the GP emulator of L, defined as
ρ1,2 = [VmL((xL(1), xs(1), μuL(1)), (xL(2), xs(2), μuL(2))) + VδL((xL(1), xs(1), μuL(1)), (xL(2), xs(2), μuL(2)))] / (σmL2 + σδL2 + λL)    (20)

In the denominator, σmL2 and σδL2 are the prior variances of the simulation model and the discrepancy function of response L, respectively, and λL is its experimental variance; their sum, according to Eq. (2), is the prior variance of the experimental response Le at (xind(k), xs(k)) (k = 1, 2). In the numerator, VmL(·,·) and VδL(·,·) are the covariance functions of the simulation model and the discrepancy function of L, respectively, which can be directly calculated from Eq. (2); their sum is the prior covariance between the two sample points. μuL(k) (k = 1, 2) are the estimated means of the linking variables (as inputs to the response L). Therefore, ρ1,2 is the GP-based correlation coefficient, which is an indicator of how much impact the two input settings have on each other; in other words, how likely it is that they would yield a similar value of the response and/or share a similar amount of uncertainty. It can be seen that this type of correlation coefficient is easy to compute and yet comprehensively combines the information of the absolute spatial distance between the two sample points, the roughness of the response (quantified by ωm and ωδ in Eq. (2)), and the general trend of the response (quantified by βm, βδ, σm, and σδ in Eq. (2)). If ρ1,2 is close to 1 (or larger than a user-defined limit β), the two sample points are considered to be in the same local region, and we conclude that allocating resources to one of them should be sufficient.
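A minimal sketch of evaluating Eq. (20) for two augmented input settings, assuming Gaussian (squared-exponential) correlation functions for both the simulation and discrepancy GPs of response L; the argument names are illustrative.

import numpy as np

def gp_correlation(v1, v2, sm2, wm, sd2, wd, lam):
    # v1, v2: the two augmented inputs (x_L, x_s, mu_uL); wm, wd: roughness parameters
    diff2 = (np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)) ** 2
    cov = sm2 * np.exp(-np.sum(wm * diff2)) + sd2 * np.exp(-np.sum(wd * diff2))
    return cov / (sm2 + sd2 + lam)   # prior covariance divided by prior variance of L_e

If the returned value exceeds the user-defined limit β, the two candidate points are treated as belonging to the same local region.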

The Appendix gives pseudocode for choosing the proper sample points for resource allocation. Furthermore, the developed method allows us to decide, at each chosen sample point, what response to allocate more resources to, by picking the response L that most influences ysys at this point.

Deciding the Type of Resource to Allocate for Each Chosen Sample Point.

Finally, the one remaining task is to decide which type of resource, simulation or experiment, to allocate for each chosen input setting and the corresponding chosen response at this input setting. To address this, in this subsection, we propose an enhanced preposterior analysis approach (see Fig. 4, based upon a similar idea in the earlier work [44,45]), which, prior to allocating the resources, can predict the posterior variance of ysys if the resources are actually gathered. Different from the existing work in the literature that focuses only on a single type of resource (usually experimental), the preposterior analysis proposed in this section provides insight for any combination of simulations and experiments.

Fig. 4  The preposterior analysis

Following the analyses in Sec. 3.3, suppose we have already made the decision to allocate more resources to response L at input locations (xind(k), xs(k), μuL(k)) (k = 1,…, NL), and to response L′ at input locations (xind(k), xs(k), μuL′(k)) (k = NL + 1,…, NL + NL′), etc. The preposterior analysis first generates hypothetical simulation and/or experimental data and then evaluates the influence on the posterior variance of the system QOIs. The detailed procedure is as follows:

  1. (1)

    Suggest a resource allocation plan and generate hypothetical data.

    To assess the impact of conducting experiments at (xind(k), xs(k), μuL(k)) (k = 1,…, NLe) and conducting simulations at (xind(k), xs(k), μuL(k)) (k = NLe + 1,…, NL) for response L, hypothetical experimental and simulation data are generated accordingly at these locations. A preferred way is to generate them simultaneously, considering their correlation. It should be noted that Le (i.e., the experimental response of L) at (xind(k), xs(k), μuL(k)) (k = 1,…, NLe) and Lm (i.e., the simulation response of L) at (xind(k), xs(k), μuL(k)) (k = NLe + 1,…, NL) follow a jointly normal distribution that can be determined from the already estimated hyperparameters of the GP models. The equation for the jointly normal distribution is similar to Eq. (3), i.e., d ∼ N(Hβ, Vd), where d here should be interpreted as the hypothetical data to be generated, and Vd is a covariance matrix whose entries are the covariances between the selected input settings and/or between Le and Lm. Therefore, a set of hypothetical experimental and simulation data can be generated by drawing a sample from this jointly normal distribution.

  2. (2)

    Update the emulators after obtaining the hypothetical data.

    To calculate how the emulators will change after adding the hypothetical data (which we treat as real data in the preposterior analysis), one could do another round of model bias correction using the collection of real data and hypothetical data, although this could be time consuming. Alternatively, by assuming that the estimation of the hyperparameters of the GP models will not significantly change with a few additional data, one can simply obtain a new model prediction by reusing the already estimated values of the hyperparameters in Eq. (5). The symbols now have new meanings; for example, d and Vd in Eq. (5) are now referred to as the collection of real and hypothetical data and their covariance matrix, respectively.

  3. (3)

    Conduct uncertainty propagation using the new model prediction.

    After all the emulators for disciplinary responses are updated using the hypothetical data, uncertainty propagation as presented in Sec. 3.1 can provide information about how much the uncertainty of system QOIs ysys is changed.

  4. (4)

    Calculate the “expected” reduced uncertainty of system QOIs ysys.

    Note that steps 1–3 in this subsection are done under a specific set of hypothetical data. In order to achieve statistically meaningful results, steps 1–3 should be repeated by generating multiple sets of hypothetical data, which leads to an expected (averaged) reduced uncertainty of ysys.

The preposterior analysis can be conducted for any suggested resource allocation plan, and users can select a plan that balances effectiveness and cost. Practically, one can first analyze the expected uncertainty reduction under a plan in which all additional resources are simulations, and then check whether the criterion in Eq. (9) will be satisfied. Those locations where the criterion is not likely to be satisfied after gathering new simulations should be considered for experimental measurements.
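The overall loop of the preposterior analysis can be sketched as follows, with the data generation, emulator updating, and uncertainty propagation abstracted as placeholder callables; in practice they correspond to sampling from the joint normal distribution of step (1), re-using the estimated hyperparameters as in step (2), and the MUA of Sec. 3.1.

import numpy as np

def expected_gamma_after_plan(draw_hypothetical, update_emulators, evaluate_gamma,
                              n_repeats=500, seed=0):
    # Average the reevaluated uncertainty measure gamma of Eq. (9), per steps (1)-(4) above
    rng = np.random.default_rng(seed)
    gammas = []
    for _ in range(n_repeats):
        data_hyp = draw_hypothetical(rng)           # step (1): one set of hypothetical data
        emulators = update_emulators(data_hyp)      # step (2): hypothetically updated emulators
        gammas.append(evaluate_gamma(emulators))    # step (3): propagate to y_sys at the chosen settings
    return np.mean(np.asarray(gammas), axis=0)      # step (4): expected reduced uncertainty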

Case Study

To demonstrate the proposed resource allocation approach, we consider an electronic packaging design problem [46], a benchmark multidisciplinary problem that has been frequently studied in the literature [23,25,47–49]. A circuit consisting of two resistors is mounted on a heat sink, and there are two coupled disciplines, i.e., the electrical and thermal subsystems, as demonstrated in Fig. 5. Eight input variables x1–x8, five linking variables y6, y7, and y11–y13, and a system QOI y1 are involved in this test problem. The numbering of the variables follows that used in the aforementioned works. Table 1 provides the physical meanings of the variables; a detailed problem statement that includes descriptions of all variables and the bounds of the design variables can be found in the referenced literature.

Fig. 5  Electronic packaging system
Table 1  Physical meanings of the variables in the electronic packaging system

x1    Heat sink width (m)
x2    Heat sink length (m)
x3    Fin length (m)
x4    Fin width (m)
x5    Nominal resistance #1 at temperature 20 °C (Ω)
x6    Temperature coefficient of electrical resistance #1 (K−1)
x7    Nominal resistance #2 at temperature 20 °C (Ω)
x8    Temperature coefficient of electrical resistance #2 (K−1)
y1    Negative of watt density (W/m3)
y6    Power dissipation in resistor #1 (W)
y7    Power dissipation in resistor #2 (W)
y11   Component temperature of resistor #1 (°C)
y12   Component temperature of resistor #2 (°C)
y13   Heat sink volume (m3)

The original problem is a design optimization formulation to maximize the watt density (i.e., to minimize y1). To test our proposed approach, we modify the original problem as follows. Instead of finding an optimal solution, we now aim at achieving a good predictive model of y1 over the whole design space of x1–x8. Since the thermal discipline is much more complex than the electrical discipline in that it requires a finite difference strategy to calculate the component temperatures, we assume that the models to evaluate y11 and y12, given {x1, x2, x3, x4, y6, y7}, are inaccurate and require model bias correction. In the first iteration of resource allocation, we collect simulation and experimental data and conduct model bias correction for y11 and y12 (Sec. 4.1), then evaluate the impact of epistemic model uncertainty on the system QOI y1 (Sec. 4.2), and finally apply the proposed resource allocation approach to reduce the aggregated uncertainty on y1 (Sec. 4.3). The results from subsequent iterations are provided in Sec. 4.4.

Model Bias Correction for the Thermal Discipline (First Iteration).

We first collect data comprised of 40 “experimental” observations. Since the original benchmark problem does not provide any experimental data, we generate a set of data for y11 and y12 by adding random noise to the exact model provided in Ref. [46], which we treat as the experimental data. The 40 experimental runs are determined via an LHS over the six-dimensional input space {x1, x2, x3, x4, y6, y7}. We then collect data comprised of 40 simulation observations of y11 and y12 from low-fidelity simulation models that we build, which intentionally have a significant discrepancy δ(x) compared with the exact model in Ref. [46]. The 40 simulation runs are determined via a different LHS, so that their locations are completely different than those for the experimental runs. Note that {y6, y7} are input variables to the thermal discipline and yet essentially linking variables in the whole system; hence, we do not know beforehand their ranges. We assume in this stage that they are both within [0, 8] based on some preliminary system-level simulations (using the current “wrong” simulation models). We will show later that their actual bounds are slightly beyond that.

The data points are shown in Fig. 6, with only the {y6, y7} plotted for illustration purpose. The GP-based model bias correction in Sec. 2.1 is applied to integrate the two sets of data and to subsequently obtain the updated emulators of y11 and y12. An extra validation step is performed with 25 reserved experimental data; a comparison between the updated emulators and the validation data, together with the 95% PIs, is provided in Fig. 7. It can be seen that for both responses, 40 experimental data + 40 simulations are barely sufficient for bias correction: at some validation points, the PIs are large, and a few validation data are even outside the 95% PI. It can be anticipated that such uncertainty will propagate and influence the system QOI y1.

Fig. 6  Design of experiments for model bias correction in the thermal discipline: 40 experiments + 40 simulations sparsely over the six-dimensional space {x1, x2, x3, x4, y6, y7}. The data points are plotted by projecting them onto {y6, y7} for illustration.
Fig. 7  Comparison between the validation data and the updated emulators of (a) y11 and (b) y12 after model bias correction

Selection of Input Settings and Responses for Resources Allocation (First Iteration).

In this step, 2000 LHS samples are generated to explore the input space of x1–x8 (see Fig. 8). The MUA method described in Sec. 3.1 evaluates the aggregated uncertainty of y1 at each sample point. As a reminder, the analysis in this step only relies upon the emulators that we just built in Sec. 4.1, and thus, the computations for 2000 samples are not at all prohibitive.

Fig. 8  Two thousand LHS samples to explore the eight-dimensional design space x1–x8 (projected onto {x1, x2} for illustration), and the final selected samples for resource allocation

The uncertainty acceptance limit α% in Eq. (9) is set to 10%. It turns out that there are only 11 samples (the highlighted blue rectangle and red circles in Fig. 8) out of 2000 that violate the acceptance criterion, which indicates an overall acceptable amount of uncertainty in the design space. However, the largest γ in Eq. (9) is 47.31%, which shows that in some local areas the uncertainty is huge. Using the correlation check presented in Sec. 3.3 (with the correlation limit β set to 0.95), 10 samples (the red circles) out of the 11 are selected for allocating more resources.

The result of the MSSA at the ten selected input settings is provided in Fig. 9, based on which we decide to allocate more resources to y11 at input settings #1–3, 5, and 10 (since at these points the MSI of y11 is larger than that of y12), and to y12 at the rest of the input settings. Note that we only plot the MSI of y11 and y12 here. This is because they are the only two uncertainty sources in this problem, and based on the definitions of MSI and TSI in Eqs. (18) and (19), we have

Fig. 9  MSSA of the selected ten input settings
TSIZy11 = 1 − MSIZy12  and  TSIZy12 = 1 − MSIZy11    (21)

We omit the plots of TSI as MSI offers sufficient information from sensitivity analysis.

Preposterior Analysis to Decide the Type of Resources to Allocate (First Iteration).

We now apply the preposterior analysis to decide which type of resources to allocate for each of the ten selected input settings. A total of 500 random sets of data are drawn from the joint normal distribution of y11m at input settings #1–3, 5, and 10, and y12m at input settings #4 and 6–9, which compose the hypothetical simulation data (a 500 × 10 matrix). This joint distribution comes from the GP models of y11m and y12m built in Sec. 4.1. By adding one set of hypothetical data (one row of the matrix), the emulators for y11 and y12 are both updated (hypothetically), based on which the aggregated uncertainty of the system QOI y1 (represented by γ in Eq. (9)) is reevaluated (again, for each of the ten selected input settings). The predicted reduced uncertainty of y1 after gathering more simulation data is taken to be the average of the reevaluated γ over the 500 hypothetical sets of simulation data.

The result of the preposterior analysis is shown in Fig. 10. The uncertainty of y1 at most of the input settings (#2, 3, and 6–10) is expected to be reduced to the acceptable level (α% = 10% in Eq. (9)); however, points #1, 4, and 5 still do not satisfy the uncertainty criterion. Therefore, instead of allocating simulation data to all the ten points, we should rather allocate experimental data to #1, 4, and 5.

Fig. 10  Preposterior analysis: comparison between the current uncertainty and the expected reduced uncertainty (calculated by taking the average of the reduced uncertainty under 500 sets of hypothetical simulation data)

To sum up, our decision for resource allocation after the first iteration of analysis is:

  1. (1)

    conduct simulations of y11 at points #2, 3, and 10 and y12 at points #6–9

  2. (2)

    conduct physical experiments of y11 at points #1 and 5 and y12 at point #4

Decisions for Resource Allocation (Second–Fourth Iterations).

The process of resource allocation is iterative. The 2000 LHS samples in each iteration are different, and hence help to thoroughly explore the whole design space. It turns out that after four iterations, all of the LHS samples generated satisfy the uncertainty limit specified in Eq. (9). The locations and types of the added resources and the corresponding responses after each iteration are provided in Fig. 11. The number of total samples (for model bias correction) after each iteration is provided in Table 2.

Fig. 11  Locations and types of the allocated resources for y11 and y12: (a) y11, first iteration, (b) y11, second iteration, (c) y11, third iteration, (d) y11, fourth iteration, (e) y12, first iteration, (f) y12, second iteration, (g) y12, third iteration, and (h) y12, fourth iteration. Data are over the six-dimensional space {x1, x2, x3, x4, y6, y7} but projected onto {y6, y7} for illustration.
Table 2  Number of total samples for model bias correction after each iteration

Iteration #    y11e    y11m    y12e    y12m
0              40      40      40      40
1              42      43      41      44
2              43      49      44      49
3              44      50      45      52
4              44      51      46      53

In Fig. 11, it is seen that although {y6, y7} are initially assumed to be within the range [0, 8]^2 (discussed in Sec. 4.1), the actual range of {y6, y7} is slightly beyond that, and some new samples selected in the later iterations reach the region where y6 > 8 or y7 > 8. We consider it a merit of the proposed method that it can gradually detect regions that we might have overlooked as the models improve.

Unlike the existing literature, which considers only physical experiments, we treat computer simulations as another useful tool to reduce epistemic model uncertainty, and this example showcases the merit of allocating computational resources. As shown in Table 2, more than 70% of the allocated resources (24 out of 34) are computer simulations. We believe this is because computer simulations help reduce the uncertainty due to the lack of data and potentially also help discover the fundamental features of the disciplinary responses. One can imagine that in an ideal situation where the simulation models are perfect, uncertainty reduction can be achieved by conducting only simulations, without help from physical experiments.

Another observation from this case study is that achieving a globally satisfactory model for the QOIs of a multidisciplinary system does not require the disciplinary subsystem emulators to be globally satisfactory over their own input domains. In other words, the allocated resources for the disciplinary subsystems do not necessarily cover the input space of the disciplinary models. For example, it can be seen from Fig. 11 that all the resources allocated to y11 are clustered in a narrow band where y7 is small, and that the resources allocated to y12 are clustered in a narrow band where y6 is small. This is due to the complex coupling relationship between the disciplines in the system; many combinations of {y6, y7} are not achievable when the two disciplines couple together, and hence, there is no need to improve the modeling capability over those unachievable regions.

Conclusions

In this paper, we develop a new approach for resource allocation in multidisciplinary systems to reduce the aggregated epistemic model uncertainty of system QOIs. The novelty of the approach lies in breaking down the complex resource allocation problem that involves multiple decisions into a sequential set of decisions to answer questions of where (sampling locations), what (disciplinary responses), and which (simulations versus experiments). The proposed sequential decision-making strategy manages the complexity in creating globally accurate surrogate models for each subsystem in multidisciplinary analysis and design optimization by allocating resources (samples) that have the largest impact on reducing the uncertainty in predicting system QOI. After quantifying disciplinary epistemic model uncertainty using GP-based model bias correction, we first explore the input space of the whole system to identify the locations (where) with unacceptable amounts of uncertainty with respect to the system QOIs. In this step, the MUA method provides a fast and analytical uncertainty analysis. Next, the identification of critical responses (what) at the selected inputs is done using an MSSA we developed earlier to evaluate the relative contribution of each functional response. Meanwhile, the proposed correlation analysis uses the information from GP-based correlation to determine the proximity of input settings and to ensure that the selected input settings are sparsely located. Finally, decisions are made about which type of resources to allocate to the critical responses at the chosen input locations, via an enhanced preposterior analysis that predicts the effect of gathering more resources prior to the actual resource allocation. The preposterior analysis, unlike any preceding work in the literature, is applicable to any combination of experiments and simulations. The proposed method is applied to a benchmark electronic packaging problem to reduce the aggregated epistemic model uncertainty of the system QOI.

The proposed approach has the following advantages. First, it considers not only physical experiments but also computer simulations. We demonstrate in the case study that in many cases, physical experiments are not really necessary if the epistemic model uncertainty mainly comes from a lack of data. Second, because the proposed approach strategically breaks resource allocation into a sequential process, the decision making is much more tractable. Third, the method is efficient for complex multidisciplinary analysis. Except for the stages at which the simulations or physical experiments are collected, all the analyses are conducted based on inexpensive GP emulators. The MUA and MSSA methods further accelerate the procedure by providing analytical formulas. Last but not least, while we do not elaborate on it, the proposed framework can be simplified to work for a single discipline as well. In that case, MUA and MSSA may not be useful, as a single discipline typically involves a single response; however, the GP-based correlation analysis and the preposterior analysis are still applicable.

The limitation of the work is that it does not explicitly consider the costs of simulations and experiments, although users can adjust the tunable preference parameters of the uncertainty acceptance level and correlation coefficient of samples based on the total budget. Our future research will include the budget as a constraint in the decision-making process of resource allocation.

Acknowledgment

The grant support from the National Science Foundation (CMMI-1233403 and CMMI-1537641) is gratefully acknowledged. Shishi Chen's predoctoral visit at Northwestern University is sponsored by the China Scholarship Council.

Nomenclature

d = collected data

hm(·), hδ(·) = predefined basis functions for polynomial regression of the mean function of the simulation model GP/discrepancy function GP

MSI = main sensitivity index

ND = number of disciplines

TSI = total sensitivity index

uij = linking variables, output from the ith discipline and input to the jth discipline

Vm(·,·), Vδ(·,·) = covariance function of the simulation model GP/discrepancy function GP

x = (x1,…, xp) = p-dimensional input variable

xi, xind = disciplinary input variables of the ith discipline/all ND disciplines

xs = shared input variables across two or more disciplines

yi, yind = disciplinary outputs of the ith discipline/all ND disciplines

ysys = system QOIs

ym(·), ye(·) = simulation/experimental response

ŷe(·) = mean prediction of the experimental response

Z(·) = a random quantity representing model uncertainty

Zl = model uncertainty term for model L

Z∼l = model uncertainty terms for all models except L

α% = predefined uncertainty limit for γ(·,·)

β = predefined limit for ρ

βm, βδ = coefficients for polynomial regression of the mean function of the simulation model GP/discrepancy function GP

γ(·,·) = measure of uncertainty at a particular input setting

δ(·) = model discrepancy function

ε = experimental error

μ = mean vector; subscript, if present, denotes a specific random quantity, e.g., μui. refers to the mean of ui.

ρ = correlation coefficient

σZ2(·) = variance of Z(·)

Σ = covariance matrix; subscript, if present, denotes a specific random quantity, e.g., Σu refers to the covariance matrix of ue

ϕ = hyperparameters of the GP models

Appendix

Pseudocode for Choosing the Sample Points in Sec. 3.3.

Given: (xind(k), xs(k)), k = 1,…, Ns from Section 3.1
1. Sort k by Var[ysys(xind(k), xs(k))] in descending order
2. Initialize α, β, list1 = {ξ | (xind(ξ), xs(ξ)) violates Eq. (9)}, list2 = {1}
3. for all ξ ∈ list1
4.  for all η ∈ list2
5.   find response L that contributes most to Var[ysys(xind(ξ), xs(ξ))]
6.   find response L′ that contributes most to Var[ysys(xind(η), xs(η))]
7.   if L ≠ L′, then
8.    set flag(ξ,η) = 1
9.   elseif ρξ,η < β [Eq. (20)], then
10.    set flag(ξ,η) = 1
11.   else
12.    set flag(ξ,η) = 0
13.   endif
14.  endfor
15.  if flag(ξ,η) = 1, ∀η ∈ list2, then
16.   add ξ to list2
17.  endif
18. endfor

The chosen sample points are stored in “list2.”
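For readers who prefer an executable form, the following is a minimal Python sketch of the greedy screening above. The function and array names (select_sample_points, var_ysys, var_contrib, violates_limit, rho) are hypothetical placeholders rather than part of the paper; the response contributions and correlation coefficients would, under this sketch's assumptions, come from the MSSA results and Eq. (20), respectively, and the selected set is seeded with the highest-variance violator.

```python
import numpy as np

def select_sample_points(var_ysys, var_contrib, violates_limit, rho, beta):
    """Greedy sample-point screening following the Appendix pseudocode.

    Hypothetical inputs (placeholders, not the paper's code):
      var_ysys       -- (Ns,) Var[ysys] at each candidate input setting
      var_contrib    -- (Ns, NR) contribution of each of the NR responses
                        to Var[ysys] at each setting (from MSSA)
      violates_limit -- (Ns,) True where the alpha% limit of Eq. (9) is violated
      rho            -- (Ns, Ns) pairwise correlation coefficients, Eq. (20)
      beta           -- predefined correlation limit
    Returns the indices of the selected points ("list2").
    """
    order = np.argsort(-np.asarray(var_ysys))              # step 1: sort by variance
    list1 = [int(k) for k in order if violates_limit[k]]   # step 2: limit violators
    if not list1:
        return []
    list2 = [list1[0]]                                      # seed with top violator
    for xi in list1[1:]:                                    # steps 3-18
        L_xi = int(np.argmax(var_contrib[xi]))              # dominant response at xi
        accept = True
        for eta in list2:
            L_eta = int(np.argmax(var_contrib[eta]))        # dominant response at eta
            # Reject xi only if it shares its dominant response with an already
            # selected point AND the two points are strongly correlated.
            if L_xi == L_eta and rho[xi, eta] >= beta:
                accept = False
                break
        if accept:
            list2.append(xi)
    return list2
```

In this sketch, a candidate is skipped only when it would add largely redundant information (same dominant response and high spatial correlation with an already selected point), which is what keeps the selected set sparse.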
