Abstract
The numerical accuracy of finite element analysis (FEA) depends on the number of finite elements used in the discretization of the space, which can be varied using the mesh size. The larger the number of elements, the more accurate the results are. However, the computational cost increases with the number of elements. In current practice, the experimenter chooses a mesh size that is expected to produce a reasonably accurate result and for which the computer simulation can be completed in a reasonable amount of time. Improvements to this approach have been proposed using multifidelity modeling with two or three mesh sizes. However, mesh size is a continuous parameter, and therefore, multifidelity simulations can easily be performed by choosing a different mesh size for each simulation. In this article, we develop a method that optimally chooses the mesh size for each simulation while satisfying the same time constraints as a single- or double-mesh-size experiment. The range of mesh sizes used in the proposed method allows one to fit multifidelity models more reliably and to predict the outcome as meshes become infinitesimally small, which is impossible to achieve in actual simulations. We illustrate our approach using an analytical function and a cantilever beam finite element analysis experiment.
1 Introduction
Computer experiments are widely used to simulate engineering applications where physical experiments are prohibitively expensive or challenging to carry out. Statistical design and analysis of computer experiments came into prominence through the seminal works of Sacks et al. [1] and Currin et al. [2]; see the book by Santner et al. [3] for details. More recently, multifidelity modeling has become an important topic in computer experiments; see, for example, the applications in satellite systems [4], laser melting [5], and polymers [6]. Multifidelity simulations are motivated by the fact that simulations with different levels of fidelity can be combined to improve the overall estimation of the system, making predictions more accurate and computationally efficient. Kennedy and O’Hagan [7] introduced a Gaussian process framework for multifidelity modeling, which has since been extended by other researchers [8,9], to name a few.
Space-filling designs [10] are commonly used for computer experiments, but the introduction of multiple fidelities calls for new approaches. Qian [11] proposed nested space-filling designs, in which the higher fidelity simulations are nested within the lower fidelity simulations. Sequential design methods that incorporate multifidelity have also been proposed in the literature [12–14]. However, these methods restrict the selection of fidelities to a few discrete levels, which is insufficient when the fidelity can be varied continuously, as in finite element analysis (FEA).
FEA typically divides a large geometric domain into smaller and simpler cells and solves the partial differential equations on them instead. These cells are called mesh elements, and this representation is called meshing [15]. More mesh elements, i.e., finer meshes, lead to more accurate simulation results but inevitably consume more computational power. Simulations with fewer mesh elements, though less accurate, are cheaper to conduct. In practice, we can adjust the number of mesh elements in FEA to achieve a trade-off between accuracy and cost.
Thus, multifidelity simulation in FEA differs from the usual problems because the fidelity can be changed almost continuously through the mesh size or the number of elements. There has been some research on multifidelity modeling in FEA [16,17], where the density of finite elements is connected to fidelity. However, the challenge of constructing experimental designs under these circumstances remains, and computational expenditure as a design constraint has not been well investigated. We aim to address both in this work.
We propose a new method to construct an experimental design in which each simulation has a different mesh number/size. The proposed method targets finite element simulations with uniform meshes, where the mesh density can be computed directly from the dimensions of the uniform mesh elements. The method is novel in that it integrates the computational cost of the simulations into the design strategy, which is a practical matter for engineers. We describe in detail how the experimenter can choose different mesh sizes/mesh element numbers to complete the full set of simulations within a given computational budget. We also demonstrate how simulations performed at various mesh sizes can be combined to produce a predictive model over the entire experimental region that is more accurate than models obtained from simulations with one or two levels of mesh size.
This article is organized as follows. We will briefly review the commonly used space-filling experimental design methods and multifidelity models in Sec. 2. We then develop our new design in Sec. 3. Finally, we demonstrate the performance of the new design in two applications in Sec. 4: a simulation study on a function with a scalar response and an FEA on beam deflection with a functional response. Some concluding remarks are made in Sec. 5.
2 Related Work
In this section, we will review a few existing works in space-filling designs and multifidelity modeling, which are pertinent to this work.
2.1 Experimental Design.
In the design of deterministic computer experiments, space-filling designs that spread out the design points in the experimental region are commonly used [3]. Johnson et al. [18] propose two strategies based on the distance between the design points: minimax designs minimize the maximum gap in the experimental region, whereas maximin designs maximize the minimum distance between the points. Since our work is closely related to maximin designs, we explain them in a bit more detail. See Joseph [10] for a recent review of space-filling designs.
An issue with the maximin design is that some design points may project onto the same value for some input variables. This is undesirable because only a few inputs may be important, and replicated values in those inputs are not useful in a deterministic computer experiment with no random errors. The Latin hypercube design (LHD) proposed in Ref. [19] solves this problem: it ensures that the n design points project onto n distinct values in each input variable. However, there can be many LHDs; an optimal LHD can therefore be obtained by maximizing the minimum distance among the points, which is called a maximin LHD [20]. Joseph et al. [21] proposed the maximum projection (MaxPro) design, which tends to maximize the minimum distance among projections onto all possible subsets of the inputs.
To accommodate the idea of multifidelity in experimental design, we need to account for the situation where some design points are more valuable than others because of their different levels of accuracy. Nested LHDs are tailored to such scenarios [11,22,23]. For example, with two fidelity levels, the set of design points as a whole is an LHD, while a subset of it (of smaller size) is also an LHD. The whole set of points is used for the low-fidelity experiment, and the nested subset is used for the high-fidelity experiment.
Sequential design strategies for multifidelity simulations are also proposed in the literature [24–26]. In contrast, this article focuses on developing a fixed design strategy that is tailored for multifidelity finite element simulations. Interestingly, almost all of the sequential strategies need an initial set, and therefore, even if one is interested in sequential simulations, the fixed design developed in this work can be utilized for constructing the initial design.
2.2 Multifidelity Modeling.
The Kennedy–O’Hagan (KOH) model [7] links simulations at K discrete fidelity levels through an autoregressive Gaussian process structure, whereas the model of Tuo et al. [16] indexes each simulation by a continuous mesh density parameter. Tuo et al.’s model is more flexible than the KOH model because each simulation is free to use a different mesh number/density. Moreover, the model contains only 2p unknown correlation parameters, as opposed to Kp unknown parameters in KOH, making estimation easier when K > 2. However, it poses a challenge to the experimental design strategy: while space-filling designs for the system inputs are relatively well studied, the choice of the corresponding fidelity parameters is not. To meet this challenge, we propose a more flexible and versatile experimental design method that accommodates finite element simulations with various mesh densities and mesh element numbers. We call it the multi-mesh experimental design (MMED), which is discussed in Sec. 3.
3 Multi-Mesh Experimental Design
To perform simulations with a given number of mesh elements M, we need to make sure that the mesh arrangement is reasonable and that the simulation is able to converge. Suppose FEA simulations with mesh element numbers within a range [M~, M¯] converge and produce reasonable results. If we were to perform the whole set of simulations at a single fidelity level, we could choose the upper bound M¯ as the mesh element number. If we were to pursue a multifidelity scenario, then choosing two fidelity levels is a common approach: we can designate meshing with M~ elements as the low-fidelity level and meshing with M¯ elements as the high-fidelity level. In this scenario, we can perform more simulations at M~ than at M¯ for the same computational time; that is, for each simulation performed at M¯, we can complete several simulations at M~. For a multi-mesh experimental design, we instead choose a different mesh element number in [M~, M¯] for each simulation while ensuring that the total computational time stays within the budget.
MMED(p, M~, M¯, n¯): Multi-Mesh Experimental Design
Data: p: system input dimension;
[M~, M¯]: range of mesh element numbers;
n¯: number of simulations at M¯ affordable by the budget.
Find suitable n:
Obtain the number of design points n from Eq. (15).
Space-filling design:
1. Obtain the n levels of mesh elements M1, …, Mn from Eq. (13).
2. Obtain an n-run design in p + 1 dimensions using a space-filling design such as MaxPro LHD [21].
3. Assign the input variables from the first p columns and the mesh elements from the remaining column of the design.
Output: Experimental design {(xi, Mi)}, i = 1, …, n.
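Since Eqs. (13) and (15) are not reproduced in this excerpt, the sketch below illustrates only the structure of Algorithm 1. It assumes, purely for illustration, that simulation time grows linearly with the number of mesh elements and that the n mesh levels are spaced geometrically in [M~, M¯]; scipy's optimized Latin hypercube stands in for the MaxPro LHD, and the function name mmed and the arguments M_lo, M_hi (for M~, M¯), and n_bar (for n¯) are hypothetical.

```python
import numpy as np
from scipy.stats import qmc  # Latin hypercube sampling

def mmed(p, M_lo, M_hi, n_bar, seed=0):
    """Sketch of Algorithm 1 under two stated assumptions: simulation time
    is proportional to the number of mesh elements M, and the n mesh
    levels (Eq. (13) in the paper) are replaced by geometric spacing."""
    budget = n_bar * M_hi  # total budget in "element-units" (n_bar runs at M_hi)

    def levels(n):
        return np.geomspace(M_lo, M_hi, n)  # n mesh levels in [M_lo, M_hi]

    # Largest n whose total cost fits the budget (stand-in for Eq. (15)).
    n = n_bar
    while levels(n + 1).sum() <= budget:
        n += 1

    # Space-filling design in p + 1 dimensions: p inputs plus a mesh column.
    design = qmc.LatinHypercube(d=p + 1, optimization="random-cd", seed=seed).random(n)
    X = design[:, :p]
    # Rank-transform the last column so each run gets a distinct mesh level.
    M = levels(n)[np.argsort(np.argsort(design[:, p]))]
    return X, M

X, M = mmed(p=2, M_lo=25, M_hi=400, n_bar=10)  # hypothetical range and budget
```

Note that the actual mesh levels and run size in the paper follow Eqs. (13) and (15); this sketch only approximates them under the stated linear-cost assumption.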
4 Applications
In this section, we apply MMED to two applications: an analytical function and a cantilever beam deflection. To evaluate MMED, we compare it with two other design methods and their corresponding modeling practices. The first method is a set of space-filling points whose simulations are all executed with the same number of mesh elements. We call it the single-mesh design because only one meshing arrangement is used throughout the finite element simulations; under this setting, fidelity is not taken into account at all. The second method is a two-level nested Latin hypercube design [11] (briefly described in Sec. 2.1), where all simulations on level 1 are carried out with fewer mesh elements (M~), while simulations on level 2 are conducted with more mesh elements (M¯). The computational budget is split equally between the two fidelity levels because the two sets of simulations can then be run in parallel on separate solvers and finish at the same time. We call it the double-mesh design.
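To make the double-mesh comparator concrete, the following is a minimal sketch of a two-level nested construction: the high-fidelity subset is built first and then augmented greedily with maximin candidates, a simple stand-in for the MaxProAugment routine used later in Sec. 4.1. Unlike a true nested LHD, the augmented full set here is space filling but not exactly Latin hypercube; the function name nested_design is hypothetical.

```python
import numpy as np
from scipy.stats import qmc

def nested_design(n_low, n_high, p, seed=0):
    """Two-level nested design sketch: the n_high high-fidelity points are a
    subset of the n_low low-fidelity points. Greedy maximin augmentation is
    used here instead of a proper nested-LHD construction."""
    rng = np.random.default_rng(seed)
    high = qmc.LatinHypercube(d=p, seed=seed).random(n_high)
    points = [row for row in high]
    pool = rng.random((200 * n_low, p))  # dense candidate pool in [0, 1]^p
    while len(points) < n_low:
        P = np.asarray(points)
        # distance from each candidate to its nearest already-chosen point
        d = np.linalg.norm(pool[:, None, :] - P[None, :, :], axis=-1).min(axis=1)
        points.append(pool[np.argmax(d)])  # add the most isolated candidate
    return np.asarray(points), high  # (full low-fidelity set, nested subset)

low, high = nested_design(n_low=45, n_high=5, p=2)  # sizes from Sec. 4.1
```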
For modeling the data from the single-mesh simulations, we fit a standard Gaussian process with a Gaussian covariance function. For double-mesh simulations, we use the KOH model [7] described at the beginning of Sec. 2.2. For MMED simulations, we use the nonstationary model of Ref. [16] described at the end of Sec. 2.2. The hyperparameters for all three models are estimated using maximum likelihood.
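As a rough illustration of this modeling step, the sketch below fits a plain Gaussian process with an anisotropic Gaussian correlation by maximum likelihood. It is a generic stand-in, not the exact nonstationary covariance of Ref. [16] or the KOH model: for MMED data, the design matrix would hold the system inputs together with the mesh density t, and predicting at t = 0 extrapolates to the infinitely fine mesh.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def gp_fit_predict(X, y, Xnew):
    """Generic GP with Gaussian correlation, constant mean, and MLE
    hyperparameters -- a sketch, not the model of Ref. [16]."""
    n, d = X.shape
    nug = 1e-8     # jitter for numerical stability
    mu = y.mean()  # simple plug-in constant mean

    def corr(A, B, theta):
        # exp(-sum_j (a_j - b_j)^2 / theta_j^2), anisotropic lengthscales
        return np.exp(-cdist(A, B, "seuclidean", V=theta**2) ** 2)

    def nll(log_theta):
        R = corr(X, X, np.exp(log_theta)) + nug * np.eye(n)
        L = np.linalg.cholesky(R)
        a = np.linalg.solve(L, y - mu)
        s2 = (a @ a) / n  # profiled-out process variance
        return n * np.log(s2) + 2 * np.log(np.diag(L)).sum()

    theta = np.exp(minimize(nll, np.zeros(d), method="Nelder-Mead").x)
    R = corr(X, X, theta) + nug * np.eye(n)
    r = corr(Xnew, X, theta)
    return mu + r @ np.linalg.solve(R, y - mu)

# To extrapolate to the true response, set the mesh-density column of Xnew
# to 0 (t = 0 corresponds to infinitely many mesh elements).
```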
4.1 Analytical Function.

Fig. 1: Visualization of Eq. (16) given different mesh settings: M = 25 (left) and M = 400 (right)
This procedure is effectively the same as finite element meshing: it splits the input plane into many smaller squares, and the number of mesh elements controls the scale of these squares. For example, M = 25 means the mesh grid contains 5 × 5 discrete points. The more mesh cells there are, the more accurate the interpolated surface becomes. Another plot of Eq. (16), generated with M = 400 mesh cells, is shown in Fig. 1; the interpolated surface is much more accurate because there are more mesh elements.
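Since Eq. (16) itself is not reproduced in this excerpt, the snippet below mimics the mechanism with a placeholder smooth function: the function is evaluated on a √M × √M grid and then linearly interpolated, so coarser grids distort the surface more.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder test function standing in for Eq. (16).
def f(x1, x2):
    return np.sin(6 * x1) * np.cos(4 * x2)

def meshed_surface(x, M):
    """Evaluate f on a sqrt(M) x sqrt(M) grid and interpolate back at the
    points x, mimicking how a coarse mesh degrades the response surface."""
    g = int(round(np.sqrt(M)))
    grid = np.linspace(0.0, 1.0, g)
    vals = f(*np.meshgrid(grid, grid, indexing="ij"))
    return RegularGridInterpolator((grid, grid), vals)(x)

x = np.random.default_rng(1).random((1000, 2))
for M in (25, 400):
    err = np.abs(meshed_surface(x, M) - f(x[:, 0], x[:, 1]))
    print(M, err.max())  # the M = 400 surface tracks the truth far better
```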
To evaluate the performance of MMED, we first designate a reasonable range [M~, M¯] for the number of mesh elements M. We then set the computational time constraint to be equivalent to the total time of running ten simulations at M¯. Therefore, the single-mesh method consists of ten simulations with M¯ mesh elements, with the ten design points in [0, 1]² found using a MaxPro LHD. For the double-mesh method, we use a two-layer nested Latin hypercube design, where the simulations in the first layer use M~ mesh elements and those in the second layer use M¯ mesh elements. Splitting the budget in half for each layer leads to 45 simulations in the first layer and five (more accurate) simulations in the second layer. The nested design is illustrated in Fig. 2 and is constructed using the MaxProAugment function in the R package MaxPro. For our multi-mesh method, we follow Algorithm 1 to construct the design, which yields n = 24 design points with different values of M such that the total simulation time does not exceed the budget constraint. The scatter plots of the design points in the inputs and the mesh element numbers are shown in Fig. 3. The generated design is space filling in each of the plotted 2D projections, and the histograms on the diagonal show that the points fill the (x1, x2) space uniformly, whereas more points are allocated to the region with a smaller number of mesh elements.
Fig. 2: The 50 nested design points generated by the double-mesh method in [0, 1]². (The black circles represent points in the first layer (less accurate), while the red crosses represent points in the second layer (more accurate).)
4.2 Cantilever Beam Deflection Under Stress.
For the second application, we conduct a static stress analysis on a cantilever beam using FEA simulations with cubic mesh elements. The simulations are carried out using the Abaqus software [32].
The deflection profile is measured across the span of the beam. We take the maximum deflection measurements along the beam at z = 2x2, 4x2, …, 200x2, which gives us 100 uniformly spaced discrete points to form the deflection profile as the functional response. Since the beam is a cuboid and the mesh elements are cubic, we use t = M^(-1/3) as the mesh density parameter. Overall, each simulation requires three input variables and one mesh density parameter.
Since the deflection profile is a functional response, an additional variable, the measuring location on the span, must be included in the response model of Eq. (6). We assume a Gaussian correlation function for this additional variable as well. The computations in functional Gaussian process modeling can be simplified using Kronecker products, as described in Ref. [33] and sketched below.
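The following is a minimal sketch of that Kronecker shortcut, using random stand-in data and hypothetical correlation parameters: with a separable covariance across runs and span locations, the large (nm × nm) solve collapses into two small eigendecompositions.

```python
import numpy as np

def gauss_corr(A, B, theta):
    """Gaussian correlation between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 / theta**2).sum(-1)
    return np.exp(-d2)

rng = np.random.default_rng(0)
n, p, m = 18, 4, 100                    # runs, inputs + mesh density, span points
X = rng.random((n, p))                  # stand-in design (3 inputs and t)
z = np.linspace(0.01, 1.0, m)[:, None]  # measuring locations on the span
Y = rng.standard_normal((n, m))         # stand-in deflection profiles

Kx = gauss_corr(X, X, np.full(p, 0.5))  # run-to-run correlation (hypothetical theta)
Kz = gauss_corr(z, z, np.array([0.2]))  # along-span correlation (hypothetical theta)

# Separable covariance: Cov(vec(Y)) = Kz (x) Kx. Two eigendecompositions
# replace the O((nm)^3) dense solve with O(n^3 + m^3) work.
nug = 1e-8
wx, Vx = np.linalg.eigh(Kx + nug * np.eye(n))
wz, Vz = np.linalg.eigh(Kz + nug * np.eye(m))
S = Vx.T @ Y @ Vz                        # rotate data into the joint eigenbasis
A = Vx @ (S / np.outer(wx, wz)) @ Vz.T   # solves Kx @ A @ Kz = Y exactly
```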
For this application, suppose a reasonable range [M~, M¯] for the number of mesh cells is given. Let the time budget be equivalent to the total computational time of running eight simulations with M¯ mesh elements. As in the previous application, the single-mesh design with eight points in the input space [0, 1]³ is obtained using MaxPro. The double-mesh design uses a two-layer nested design with half of the time budget allocated to each layer; the first layer consists of 32 simulations at M~, and the second layer contains four simulations at M¯. Finally, for the multi-mesh method, n = 18 simulations with different values of M are obtained using Algorithm 1 under the same total time constraint.
We randomly draw 30 sets of system inputs x as the testing dataset. To obtain the corresponding deflection profiles, we simulate each of these 30 sets with a finely meshed FEA model with M = 320,000, meshing the beam with a 40 × 40 × 200 grid. With such a large number of mesh elements, each element is very small, so we take the deflection measurements from these simulations to be sufficiently accurate to serve as the “true” response. To evaluate the performance of the three methods, we compare the deflection profiles estimated by each method at these testing points against the true profiles, where s denotes the index of the testing point.
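The paper's exact error metric is not reproduced in this excerpt; one natural choice, sketched below, is the root-mean-square error of each estimated profile against the fine-mesh truth, giving one value per testing point s.

```python
import numpy as np

def profile_rmse(pred, truth):
    """RMSE of estimated deflection profiles against the fine-mesh 'truth',
    averaged over the 100 span locations; returns one value per testing
    point s. Arrays have shape (30, 100): 30 testing points, 100 locations."""
    return np.sqrt(((pred - truth) ** 2).mean(axis=1))
```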
5 Conclusions
In this work, we have proposed an experimental design method that enables the experimenter to choose optimal mesh sizes for finite element simulations under a fixed time budget. We have shown that it outperforms the single-mesh and double-mesh approaches because the design is well coupled with the modeling method. The single-mesh approach ignores multifidelity altogether and therefore cannot account for the effects of meshing. Kennedy and O’Hagan’s model [7], used with the double-mesh design, does exploit multifidelity, but it can predict the response only at the highest fidelity level used in the simulations. In contrast, MMED naturally fits the model proposed in Ref. [16], which enables extrapolation and prediction of the true response, something that is impossible to achieve through simulations alone.
For future work, MMED can be refined and tailored to accommodate more complex finite element simulations in which the meshing is no longer uniform or the geometry cannot be easily described by the number of generated mesh elements alone. In FEA simulators such as Abaqus and LS-DYNA, nonuniform meshes can often be generated by specifying mesh properties such as average or maximum/minimum sizes; we therefore expect MMED to remain effective in these scenarios, although this needs to be validated with complex finite element simulations. For nonuniform and adaptive mesh assignments, other variables affect the computational time and simulation accuracy on top of the mesh density. For instance, the chordal error, which measures how well the mesh approximates the true geometry, is one such parameter. This leads to multiple tuning parameters jointly controlling the computational cost, which goes beyond the scope of the current work, where only one parameter is involved.
Acknowledgment
This work was supported by an LDRD grant from Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the US Department of Energy’s National Nuclear Security Administration (Contract No. DE-NA0003525). This article describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the article do not necessarily represent the views of the US Department of Energy or the United States Government. Wu was also supported by NSF DMS-1914632.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and analyzed in support of the findings of this article are available from the corresponding author upon reasonable request.