Abstract

Recently, there has been an increasing use of advanced modeling and simulation in the nuclear domain across academia, industry, and regulatory agencies to improve the realism in capturing complex and highly spatiotemporal phenomena within the probabilistic risk assessment (PRA) of existing nuclear power plants (NPPs). Advanced modeling and simulation have also been used to accelerate the risk-informed design, licensing, and operationalization of advanced nuclear reactors. Validation of simulation models traditionally relies on empirical validation approaches, which require sufficient validation data. Such validation data are, however, usually costly to obtain in the context of the nuclear industry. To overcome this challenge and to effectively support the use of simulation models in PRA and risk-informed decision-making applications, a systematic and scientifically justifiable validation methodology, namely, the probabilistic validation (PV) methodology, has been developed. This methodology leverages uncertainty analysis to support the validity assessment of the simulation prediction. The theoretical foundation and methodological platform of the PV methodology were reported in the first paper of this two-part series. The purpose of this second paper is to computationalize the PV methodology, embedded in an integrated PRA framework, and to apply it to a hierarchical fire simulation model used in NPP Fire PRA.

1 Introduction

Fire risk is a considerable concern to nuclear utilities as it makes a substantial contribution to total core damage frequency [1]. To improve the fire protection of nuclear facilities, a risk-informed, performance-based approach was introduced in 2005, adopting the National Fire Protection Association (NFPA) 805 Standard [2]. In the NFPA 805 approach, the utilities have an option to utilize fire probabilistic risk assessment (PRA) to evaluate plant-specific fire risk and vulnerability. State-of-the-art methodologies, tools, and data for nuclear power plant (NPP) Fire PRA in the U.S. are documented in NUREG/CR-6850, EPRI 1011989 “EPRI/NRC-RES Fire PRA Methodology for Nuclear Power Facilities” [3,4] and its supplement [5], as well as a series of subsequent NUREG documents [6–24] that update the data and models for specific steps in the NUREG/CR-6850 methodology. For brevity, the collection of these documents is hereinafter referred to as the “current Fire PRA methodology.”

In spite of the advances in Fire PRA research and implementation in the last few decades, the current Fire PRA methodology can still lead to risk overestimation due to the use of excessively conservative input parameters and assumptions [25]. Five major areas for improving realism in the current Fire PRA methodology were identified [25]: (i) fire ignition frequency, (ii) fire progression and damage modeling, (iii) interaction between fire progression and manual suppression, (iv) circuit failure analysis, and (v) postfire human reliability analysis. Lack of realism in Fire PRA may mask identification of critical risk-contributing factors and result in unnecessary and costly plant modifications. To improve the quality of risk-informed decision-making, the degree of realism in Fire PRA should be improved.

There have been studies that utilized dynamic PRA approaches [26–28] for improving the realism of the current Fire PRA methodology. Existing dynamic PRA studies, however, have not yet been able to demonstrate feasibility for use in a real plant (or for a realistic sequence of events). These dynamic PRA studies face challenges in areas such as (a) completeness in exploration of the solution space, (b) aggregation, interpretation, and communication of the large volume of outputs, and (c) validation of dynamic accident scenarios and underlying physical models [29,30]. Furthermore, due to the widespread use of classical PRA by the nuclear industry and the regulatory agency, the transition to a fully dynamic PRA would require a significant investment of resources. As a more feasible alternative, the authors of this paper have developed the integrated PRA (I-PRA) methodology [25,31].

In I-PRA, advanced simulation models of underlying physical and human/organizational phenomena with high resolution are developed and then integrated with an existing plant PRA model through a probabilistic interface. Compared to a fully dynamic PRA, I-PRA maintains a manageable number of accident scenarios (by using the existing plant PRA structure) while capturing the complex human-physics interactions using the spatiotemporal simulations of underlying failure mechanisms. I-PRA increases the realism of PRA by explicitly incorporating time and space into the underlying simulation models while avoiding significant changes to the existing plant PRA structure and their associated costs (e.g., peer review effort). I-PRA was applied for risk-informed resolution of safety concerns associated with Generic Safety Issue 191 (GSI-191) in a full-scope NPP pilot project [31,32] and for Fire PRA in a critical NPP fire-induced scenario [33].

Integrated PRA faces fewer validation challenges than fully dynamic PRA because, unlike fully dynamic PRA, it does not require significant changes to the plant PRA event tree (ET)/fault tree (FT) logic models that have already been validated by the plant and approved for use by regulatory agencies. This is not to say that the validation of plant PRA models requires no methodological advancement; ongoing work continues to improve the validation of PRA ET/FT logic models. The point is only that the additional validation burden for plants adopting I-PRA relates mainly to the validation of the underlying simulation models, a burden that is common to both I-PRA and fully dynamic PRA. For this simulation model validation, the probabilistic validation (PV) methodology is leveraged within the I-PRA framework. In previous I-PRA studies, however, the PV methodology was not fully integrated into the probabilistic interface of the I-PRA framework to help validate and justify the use of the underlying simulation models; this paper does so for the first time, applying PV to validate the fire simulation models in Fire I-PRA.

The remainder of this paper is organized as follows. Section 2 provides a brief review of the PV methodology and highlights its key features. Section 3 provides the Fire I-PRA computational platform framework with PV being an integrated feature in its probabilistic interface. Section 4 introduces the switchgear room fire case study and reports on results of a step-by-step implementation of the PV methodology for the case study. Finally, Section 5 provides concluding remarks.

2 Probabilistic Validation Methodology—An Overview

The PV methodology [34] is developed to assess the validity of system response quantities (SRQs) of interest that are predicted by the simulation model in question and are tied to a specific application. The PV methodology provides a comprehensive validation approach that is applicable across the whole spectrum of validation data availability, i.e., ranging from situations in which no empirical validation data associated with the SRQs of interest are available to situations where a sufficient amount of such data exists. The PV methodology has a unique combination of characteristics, which are briefly summarized below. For details on the theoretical foundation and methodological platform of the PV methodology, as well as a comprehensive explanation of each characteristic, readers are referred to Part 1 of this series [34].

  • Characteristic #1 (Module A in Fig. 1): The PV methodology offers a multilevel, multimodel-form validation analysis that can integrate data and uncertainty analysis at multiple levels of the system hierarchy to support the degree-of-confidence evaluation.

  • Characteristic #2 (Module A in Fig. 1): The PV methodology separates aleatory and epistemic uncertainties and, when possible, differentiates between two forms of epistemic uncertainty (i.e., statistical variability and systematic bias) while considering their influence on the uncertainty in the simulation prediction.

  • Characteristic #3 (Module B in Fig. 1): The PV methodology uses risk-informed acceptability criteria, along with a predefined guideline, to evaluate the acceptability of the simulation prediction.

  • Characteristic #4 (Module C in Fig. 1): The PV methodology combines uncertainty analysis with a two-layer sensitivity analysis to streamline the validity assessment and to efficiently improve the degree of confidence in the simulation prediction.

  • Characteristic #5 (Module A in Fig. 1): The PV methodology is equipped with a theoretical causal framework that supports the comprehensive identification and traceability of uncertainty sources influencing the uncertainty in the simulation prediction.

Fig. 1 Probabilistic validation methodological platform

It is important to note that Characteristics #3 and #5 are uniquely developed for the PV methodology, while Characteristics #1, #2, and #4 can be found in some of the current studies, as discussed in Ref. [34]. The integration of these characteristics under one methodology is a unique contribution of the PV research.

The PV methodological framework is summarized in Fig. 1, adapted from Ref. [34]. The PV methodology leverages formal uncertainty analysis and advances the scientific usage of epistemic uncertainty and acceptability criteria to facilitate the validity evaluation of simulation predictions, especially when validation data are not sufficiently available. In particular, the validity of a simulation prediction is determined in the PV methodology by [34]: (1) the magnitude of epistemic uncertainty, representing the degree of confidence, in the SRQs, calculated using a comprehensive uncertainty analysis that quantifies and aggregates all dominant sources of epistemic uncertainty involved in the development and usage of the simulation model; and (2) the result of an acceptability evaluation that determines whether the total uncertainty (including both aleatory and epistemic uncertainties) associated with the SRQs and the corresponding degree of confidence are acceptable for the specific application of interest. Consequently, insights from the validity assessment in PV can help decision-makers determine whether the current degree of validity needs to be improved and, if necessary, guide them in prioritizing available resources for improving the degree of validity in the simulation prediction [34].

3 Computationalizing the Probabilistic Validation Methodology Embedded in the Fire Integrated Probabilistic Risk Assessment Framework

To help validate the underlying fire simulation models used in PRA, the PV methodology is fully embedded into the probabilistic interface of the Fire I-PRA framework (Fig. 2). The whole Fire I-PRA framework is then computationalized for Fire PRA applications.

Fig. 2 Fire integrated PRA framework with the embedded PV methodology

Fire I-PRA [25] is a multilevel risk analysis framework for analyzing layers of causal chains initiated by an internal fire that could cause a system-level initiating event (e.g., “IE” in Fig. 2), break down defense-in-depth barriers that act against the progression of fire-induced damage to safety-critical systems (e.g., systems “A” and “B” in Fig. 2), and lead to core damage. In Fire I-PRA, the fire simulation module (FSM; “b” in Fig. 2) includes models of physical failure mechanisms associated with fire-induced PRA scenarios. The plant-specific PRA module (“e” in Fig. 2) consists of static ETs and FTs that are obtained from the existing plant PRA and are tailored to synchronize with the scope of the FSM. The core of the FSM is the fire progression model (#2 in Fig. 2) that simulates the spatiotemporal evolution of fire-induced conditions. This fire progression model can be developed using different types of fire simulation models (e.g., the CFAST zone model [35] or the computational fluid dynamics-based FDS model [36]). Inputs to the fire progression model usually include: (i) information on the fire source, such as the heat release rate (HRR) curve and the size and shape of the fire, provided by the fire initiation model (#1 in Fig. 2); (ii) initial and boundary conditions; and (iii) material properties of items such as electrical cables and other combustibles. The manual fire suppression module (“c” in Fig. 2) models the firefighting performance of first responders and the fire brigade. An explicit, bidirectional coupling between the FSM and the manual fire suppression module captures complex interactions between the fire evolution and the manual firefighting performance and predicts key performance measures (KPMs) associated with fire-induced equipment damage, e.g., the maximum temperature inside a target cable jacket (denoted by $T_{CB}$ in Fig. 2) and the maximum heat flux at the surface of that cable jacket ($q_{CB}$ in Fig. 2).
The FSM and manual fire suppression module are integrated with the plant PRA model via a probabilistic interface, i.e., the interface module (“d” in Fig. 2). The PV methodology (#6 in Fig. 2) is an essential feature of this interface to help validate these simulation modules, alongside dependency treatment (#5 in Fig. 2) and Bayesian updating (#7 in Fig. 2).

Inside the interface module, uncertainties associated with the KPMs are obtained by applying the PV methodology (#6 in Fig. 2) for simulation models at the FSM and manual fire suppression module level. In particular, all sources of aleatory and epistemic uncertainties associated with the input parameters, model forms, and numerical approximations of these simulation models are accounted for in the quantification of the KPM uncertainty p-boxes. By considering damage thresholds associated with these physical KPMs, fire-induced equipment damage probabilities (e.g., cable damage probabilities, represented by the event “CBD” in Fig. 2) and their uncertainties can be estimated. Meanwhile, the post-fire damage propagation model (#3 in Fig. 2) provides the conditional probabilities of the component-level failure (e.g., spurious actuation of certain equipment), given the fire-induced cable damage. The fire-induced cable damage probabilities and the conditional probabilities of the component-level failure (given the fire-induced cable damage) are then plugged into the scenario-based damage model (#4 in Fig. 2) to develop component-level failure probabilities, e.g., Pr(BE3) in Fig. 2. Uncertainties associated with these component-level failure probabilities can also be obtained by applying the PV methodology (#6 in Fig. 2) for models at the component level (i.e., the scenario-based damage model in Fig. 2). From that, PRA minimal cut sets, derived from the plant-specific PRA ETs and FTs, can be quantified with consideration of dependent failures (#5 in Fig. 2) and plugged into the PRA model. In case additional empirical data are available, those data can be integrated with the existing simulation-based data using a Bayesian approach (#7 in Fig. 2) to update the results obtained in the interface module.
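As a minimal illustration of the threshold step described above, the following sketch converts sampled KPM values into a fire-induced cable damage probability and then into a component-level basic-event probability. All numbers here (the temperature samples, the damage threshold, and the conditional spurious-actuation probability) are hypothetical stand-ins rather than case-study values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for FSM output: sampled peak cable-jacket temperatures (deg C).
t_cb_samples = rng.normal(310.0, 40.0, size=10_000)

# Assumed damage threshold for the target cable (hypothetical value).
T_DAMAGE = 330.0

# Fire-induced cable damage probability ("CBD" in Fig. 2): fraction of
# realizations in which the KPM exceeds the damage threshold.
p_cbd = float(np.mean(t_cb_samples > T_DAMAGE))

# Conditional probability of spurious actuation given cable damage, as would be
# supplied by the post-fire damage propagation model (#3 in Fig. 2); assumed here.
p_spurious_given_cbd = 0.3

# Component-level basic-event probability, e.g., Pr(BE3) in Fig. 2.
p_be = p_cbd * p_spurious_given_cbd
```

Repeating this calculation for each epistemic sample of the KPM p-box would yield an interval of damage probabilities rather than a single point value, which is how the KPM uncertainty carries through to the basic events.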

To operationalize the PV methodology in the Fire I-PRA framework, the present research has developed an automated computational platform, using the Python programming language, that can integrate the plant PRA scenarios, underlying fire simulations, and features of the interface module of I-PRA (including the PV methodology) into a “unified” framework. In this Fire I-PRA computational platform, the FSM (“b” in Fig. 2), developed using CFAST [37] or FDS [38] simulation software, is coupled with the RAVEN toolkit [39]. RAVEN, developed by the Idaho National Laboratory, provides a statistical analysis platform capable of interfacing with the FSM and is used for driving fire simulation jobs [40,41]. Figure 3 shows existing features that have been coded into the computational platform for PV in Fire I-PRA.

Fig. 3 Computational platform for PV in Fire I-PRA

Computational procedures for key features of this PV computational platform are provided in the subsections that follow. Note that several features of the I-PRA interface module, such as dependency treatment and minimal cut set quantification, have already been integrated into the Fire I-PRA computational platform [25,33,41]. In another work [42], the manual fire suppression module, developed using a combination of agent-based modeling (in the NetLogo modeling environment) and data-driven approaches, has also been coupled with the FSM-RAVEN interface to explicitly capture the spatiotemporal, bidirectional interactions between fire progression and manual fire search and suppression. In that work, the PV methodology was, however, not yet fully embedded into Fire I-PRA [42].

3.1 Computational Procedure for Morris Elementary Effect Analysis in the Probabilistic Validation Computational Platform.

This section provides computational steps for calculating the sensitivity measures μ and σ for the Morris elementary effect (EE) analysis in the PV computational platform (Fig. 3). The mathematical background of the Morris EE analysis was discussed in Sec. 4.1.3 of Ref. [34]. The computational steps for the Morris EE analysis are implemented in the PV computational platform by leveraging an open-source Python library for sensitivity analysis, namely, SALib [43], and the FSM-RAVEN coupling, as highlighted below:

  • (i)

    Conduct steps (i-a) and (i-b) below for N times. Here N is the sample size of the Morris EE analysis (i.e., the number of model evaluations as explained in Sec. 4.1.3 of Ref. [34]).

    1. (i-a) SALib generates a realization of the uncertain input parameters (of the simulation model of interest) from their associated distributions using the Morris sampling strategy [44,45].

    2. (i-b) The generated realization is then fed into RAVEN to drive the FSM simulation model for computing the model output, e.g., the KPMs of interest such as $T_{CB}$ and $q_{CB}$ when considering the fire progression model (#2 in Fig. 2).

  • (ii)

    RAVEN collects the results obtained from all N realizations and sends them to SALib for calculating the EE for each input parameter using Eq. (5) in Sec. 4.1.3 of Ref. [34].

  • (iii)

    SALib estimates the sensitivity measures μ and σ for each input parameter using Eqs. (6) and (7) in Sec. 4.1.3 of Ref. [34].
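To make these steps concrete, the self-contained numpy sketch below mirrors what the SALib-based analysis computes: random one-at-a-time trajectories, elementary effects, and the two sensitivity measures, with μ taken here as the mean of the absolute EEs (often written μ*) and σ as the EE standard deviation. A toy analytic model stands in for the RAVEN-driven FSM, and the input bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_ee(model, bounds, r=20, p=4):
    """Morris elementary effects via r random one-at-a-time trajectories.

    bounds: (k, 2) array of [low, high] per input; p: number of grid levels.
    Returns (mu_star, sigma): mean absolute EE and EE standard deviation.
    """
    bounds = np.asarray(bounds, dtype=float)
    k = len(bounds)
    delta = p / (2.0 * (p - 1))                 # standard Morris step (unit space)
    base_levels = np.arange(p // 2) / (p - 1)   # levels for which x + delta <= 1
    lo, hi = bounds[:, 0], bounds[:, 1]
    ees = np.zeros((r, k))
    for t in range(r):
        x = rng.choice(base_levels, size=k)     # random base point (unit hypercube)
        y_prev = model(lo + x * (hi - lo))
        for i in rng.permutation(k):            # perturb one input at a time
            x[i] += delta
            y_new = model(lo + x * (hi - lo))
            ees[t, i] = (y_new - y_prev) / delta
            y_prev = y_new
    return np.abs(ees).mean(axis=0), ees.std(axis=0)

# Toy stand-in for the FSM: a KPM dominated by its first input.
mu_star, sigma = morris_ee(lambda u: 10.0 * u[0] + 0.1 * u[1],
                           bounds=[[0.0, 1.0], [0.0, 1.0]])
```

For this additive model, every elementary effect equals the corresponding input coefficient, so μ recovers [10.0, 0.1] and σ is numerically zero; nonlinearity or interaction effects would show up as nonzero σ.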

Note that the FSM-RAVEN interface has been fully integrated into the Fire I-PRA framework (Fig. 3); therefore, the Morris EE analysis in PV can generate the sensitivity measures μ and σ with respect to the impact of each input parameter on model outputs obtained at multiple levels of causality in the I-PRA framework. In other words, the sensitivity measures can be generated at the fire simulation output level (to capture the impact of each input parameter on the physical KPMs such as $T_{CB}$ and $q_{CB}$), at the level of equipment damage (to capture the impact of each input parameter on the fire-induced equipment damage probability), or at the level of PRA outputs (to capture the impact of each input parameter on the plant risk metrics, e.g., core damage frequency) [41].

3.2 Computational Procedure for Input Parameter Uncertainty Propagation in the Probabilistic Validation Computational Platform.

This section provides computational steps for separately propagating aleatory and epistemic input uncertainties in the PV computational platform (Fig. 3). For this procedure, a double-loop Monte Carlo sampling strategy (Fig. 4) is leveraged in a nested iteration fashion, i.e., epistemic uncertain inputs are first sampled on the outer loop and then, for each sample set of uncertain epistemic inputs, aleatory uncertain inputs are sampled on the inner loop. Key steps of this nested double-loop Monte Carlo simulation approach (Fig. 4) are as follows:

Fig. 4 Nested (double-loop) Monte Carlo sampling strategy for separately treating aleatory and epistemic uncertainties in step 6 of the PV methodology

  • (i)

    Conduct steps (i-a), (i-b), and (i-c) below for $N_q$ times. Here $N_q$ is the sample size of the outer loop of the double-loop Monte Carlo simulation.

    1. (i-a)

      Use the Latin hypercube sampling (LHS) method to draw a sample $e^{(q)}=[e_1^{(q)},e_2^{(q)},\ldots,e_{n_E}^{(q)}]$ of the $n_E$ input epistemic uncertainties.

    2. (i-b)

      For each sample of epistemic uncertainties generated in step (i-a) above, conduct steps (i-b-1) and (i-b-2) below for $N_r$ times. Here $N_r$ is the sample size of the inner loop of the double-loop Monte Carlo simulation.

      1. Use the LHS method to draw a sample $a^{(r)}=[a_1^{(r)},a_2^{(r)},\ldots,a_{n_A}^{(r)}]$ of the $n_A$ input aleatory uncertainties. This sampling belongs to the inner loop of the double-loop Monte Carlo simulation.

      2. RAVEN runs the FSM simulation model with the sampled values obtained in steps (i-a) and (i-b-1) above to calculate a point estimate for the KPM of interest, $Y_{i,j,k}^{(r,q)}=M_{i,j,k}(a^{(r)},e^{(q)})$, where $M_{i,j,k}$ denotes the FSM simulation model of interest.

    3. (i-c)

      RAVEN collects the results obtained from all $N_r$ samples of the inner loop (i.e., $N_r$ samples of the input aleatory uncertainties). These results are then used to generate an empirical cumulative distribution function (cdf) for the KPM of interest, denoted as $F_{Y_{i,j,k}}^{(N_r,q)}$.

  • (ii)

    RAVEN collects the results obtained from all $N_q$ samples of the outer loop (i.e., $N_q$ samples of the input epistemic uncertainties). These results are in the form of a family of $N_q$ empirical cdf curves of the KPM of interest and are used to generate a p-box for the KPM, denoted as $[\underline{F}_{Y_{i,j,k}},\overline{F}_{Y_{i,j,k}}]$.

Note that the nested, double-loop Monte Carlo procedure presented in Fig. 4 may be computationally expensive. In such cases, one may need to consider a more efficient procedure such as the one suggested by Hu et al. [46], in which Monte Carlo sampling is still employed in the outer loop for handling epistemic uncertainties, while the inner loop is equipped with an optimization process.

For the procedure in Fig. 4, note that convergence studies should be performed to select sufficient sample sizes for both the inner and outer loops ($N_r$ and $N_q$) to guarantee the consistency of the simulation results. Additionally, note that pure aleatory or pure epistemic uncertainty sources (represented by precise probability distributions or intervals, respectively) can be directly sampled in the inner and outer loops of the double-loop Monte Carlo simulation, respectively. For a mixed aleatory-epistemic uncertainty source that is represented by a p-box, the epistemic uncertainty portion of the p-box, represented by the intervals of its distributional parameters (e.g., the α and β values of a Gamma distribution p-box), is directly sampled in the outer loop of the Monte Carlo simulation. However, the aleatory uncertainty portion of the p-box, reflected by the selection of the type of the distribution family, is not explicitly represented by a random variable and, therefore, does not facilitate direct sampling. To explicitly model this aleatory uncertainty, a method that leverages the inverse transform sampling rule is used, as detailed in Ref. [47]. Following this method, an auxiliary variable that is uniformly distributed on [0, 1] is introduced to capture the aleatory uncertainty portion in the p-box of interest, and this facilitates the direct sampling of aleatory uncertainties in the inner loop of the double-loop Monte Carlo simulation.
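The double-loop procedure of Fig. 4 can be sketched compactly as follows, with plain random sampling standing in for LHS and a toy analytic function standing in for the RAVEN-driven FSM; the parameter ranges and distributions are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)

def fsm_model(a, e):
    """Toy stand-in for one FSM run: maps an aleatory sample `a` and an
    epistemic sample `e` to a scalar KPM value (e.g., a peak temperature)."""
    return e[0] + e[1] * a[0]

n_q, n_r = 50, 200           # outer (epistemic) and inner (aleatory) sample sizes
kpm = np.empty((n_q, n_r))
for q in range(n_q):
    # Outer loop: one sample of the epistemic inputs (interval-valued here).
    e = np.array([rng.uniform(300.0, 400.0), rng.uniform(0.8, 1.2)])
    for r in range(n_r):
        # Inner loop: one sample of the aleatory inputs.
        a = np.array([rng.normal(50.0, 5.0)])
        kpm[q, r] = fsm_model(a, e)

# Each row yields one empirical cdf; the envelope of the family forms the p-box.
grid = np.linspace(kpm.min(), kpm.max(), 200)
cdfs = np.array([np.searchsorted(np.sort(row), grid, side="right") / n_r
                 for row in kpm])
pbox_lower, pbox_upper = cdfs.min(axis=0), cdfs.max(axis=0)
```

Each row of `kpm` corresponds to one inner-loop empirical cdf, and the pointwise minimum and maximum over the family of outer-loop cdfs give the lower and upper bounds of the KPM p-box.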

3.3 Computational Procedure for Data-Driven Model-Form Uncertainty Characterization in the Probabilistic Validation Computational Platform.

This section provides steps for computationalizing Step 7A of the PV methodology [34] to quantify the model-form uncertainty associated with the FSM model of interest. The mathematical background of this quantification, which leverages the modified area validation metric developed in Ref. [48], has been discussed in Sec. 4.1.7.1 of Ref. [34].

Step 7A of the PV methodology [34] is a data-driven approach and, thus, requires empirical data that should ideally be obtained from model validation experiments whose design and execution must allow for: (i) capturing the essential physics of interest; and (ii) measuring all information required by the model to simulate the physics, including initial and boundary conditions, material properties, and system excitation, as well as measurements of the system response quantities of interest. Once these data are available, the simulation model can be set up to replicate these experimental conditions, and the obtained simulation results can be used, together with the experimental results, to quantify the model-form uncertainty. Note that this approach can be used even when only a few data points from simulation results and/or validation experiments are available [48]. Key steps used to implement Step 7A in the PV computational procedure are as follows:

  • (i)

    Obtain the uncertainty p-box for the KPM of interest, $[\underline{F}_{Y_{i,j,k}},\overline{F}_{Y_{i,j,k}}]$, from conducting the uncertainty propagation for input parameter uncertainties (Sec. 3.2).

  • (ii)

    Depending on the validation data, conduct step (ii-a) or step (ii-b) below:

    1. (ii-a) Construct the empirical cdf $S_{Y_{i,j,k}}$ that characterizes the uncertainty associated with the validation data using a nondecreasing step function given in Eqs. (12) and (13) in Sec. 4.1.7.1 of Ref. [34].

    2. (ii-b) Construct the uncertainty p-box $[\underline{S}_{Y_{i,j,k}},\overline{S}_{Y_{i,j,k}}]$, which characterizes the mixed aleatory and epistemic uncertainties in the validation data for $Y_{i,j,k}$, with $S_{Y_{i,j,k}}$ denoting the cdf associated with the data, bounded by $\underline{S}_{Y_{i,j,k}}$ and $\overline{S}_{Y_{i,j,k}}$.

  • (iii)

    Calculate the area validation metric, which measures the mismatch between the model response $F_{Y_{i,j,k}}$ and the validation data $S_{Y_{i,j,k}}$, denoted by $d(F_{Y_{i,j,k}},S_{Y_{i,j,k}})$, using Eqs. (9)–(11) in Sec. 4.1.7.1 of Ref. [34].

  • (iv)

    Based on $d(F_{Y_{i,j,k}},S_{Y_{i,j,k}})$, calculate the areas of the regions where the empirical validation data are larger and smaller than the model responses, denoted as $d^{+}$ and $d^{-}$, respectively.

  • (v)

    Use the quantities $d^{+}$ and $d^{-}$ to obtain the estimated range of the model-form uncertainty using Eqs. (15) and (16) in Sec. 4.1.7.1 of Ref. [34].
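The following self-contained sketch implements the spirit of steps (iii) and (iv): it computes the plain area metric between two empirical cdfs and splits it into the two one-sided components. The sign convention shown (d+ where the model cdf lies above the data cdf, i.e., where the data tend to be larger than the model responses) and the step-function construction are simplified stand-ins for the exact definitions in Eqs. (9)–(16) of Ref. [34].

```python
import numpy as np

def area_validation_metric(model_samples, data_samples):
    """Area between the model cdf F and the empirical data cdf S, split into
    d_plus (F above S: data larger than model) and d_minus (S above F)."""
    f = np.sort(np.asarray(model_samples, dtype=float))
    s = np.sort(np.asarray(data_samples, dtype=float))
    grid = np.unique(np.concatenate([f, s]))
    F = np.searchsorted(f, grid, side="right") / f.size   # model cdf on grid
    S = np.searchsorted(s, grid, side="right") / s.size   # data cdf on grid
    widths = np.diff(grid)
    gap = (F - S)[:-1]                # step-function values on each interval
    d_plus = float(np.sum(widths * np.clip(gap, 0.0, None)))
    d_minus = float(np.sum(widths * np.clip(-gap, 0.0, None)))
    return d_plus + d_minus, d_plus, d_minus

# Example: validation data shifted 0.5 above the model responses everywhere.
d, d_plus, d_minus = area_validation_metric([0.0, 1.0, 2.0, 3.0],
                                            [0.5, 1.5, 2.5, 3.5])
```

For this example the two step cdfs are offset horizontally by 0.5, so the total area metric is 0.5 and it sits entirely in the d+ component; step (v) would then map d+ and d- into the estimated model-form uncertainty interval.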

3.4 Computational Procedure for Global Importance Measure Analysis in the Probabilistic Validation Computational Platform.

This section provides computational steps for estimating the sensitivity measure $S_i^{KPM}$ for the global importance measure (IM) analysis in the PV computational platform (Fig. 3). $S_i^{KPM}$ is calculated using the pinching approach by Ferson and Tucker [49] to rank the epistemic uncertainties associated with the simulation input parameters with respect to their influence on the uncertainty in each of the KPMs of interest. General guidance on the implementation of this approach in the PV methodology has been discussed in Sec. 4.3.1 of Ref. [34]. The computational procedure leverages a nested three-loop Monte Carlo simulation approach, whose key steps are as follows. Note that, similar to Sec. 3.2, the vectors of input aleatory and epistemic uncertainties are denoted as $a=[a_1,a_2,\ldots,a_{n_A}]$ and $e=[e_1,e_2,\ldots,e_{n_E}]$, respectively.

  • (i)

    Select one variable $e_i$ from the $n_E$ input epistemic uncertainties.

  • (ii)

    For the selected variable $e_i$, conduct Steps (ii-a) to (ii-e) below for $n_{S1}$ times. Here, $n_{S1}$ represents the sample size of the outer-most loop of the three-loop Monte Carlo simulation.

    1. (ii-a)

      Use the LHS method to draw a sample $e_i^{(q)}$ from the associated distribution of the variable $e_i$ selected in Step (i). Note that this sampling belongs to the first (outer-most) loop of the three-loop Monte Carlo simulation.

    2. (ii-b) For each sample $e_i^{(q)}$ obtained from Step (ii-a) above, conduct Steps (ii-b-1) to (ii-b-3) below for $n_{S2}$ times. Here, $n_{S2}$ represents the sample size for the middle loop of the three-loop Monte Carlo simulation.

      1. (ii-b-1)

        Use the LHS method to draw a sample $e_{-i}^{(q)}=[e_1^{(q)},\ldots,e_{i-1}^{(q)},e_{i+1}^{(q)},\ldots,e_{n_E}^{(q)}]$ of the remaining input epistemic uncertainties. This sampling belongs to the second (middle) loop of the three-loop Monte Carlo simulation.

      2. (ii-b-2) For each pair of samples $e_i^{(q)}$ and $e_{-i}^{(q)}$ obtained from Steps (ii-a) and (ii-b-1) above, conduct Steps (ii-b-2.1) and (ii-b-2.2) below for $n_{S3}$ times. Here, $n_{S3}$ represents the sample size for the inner-most loop of the three-loop Monte Carlo simulation.

        1. (ii-b-2.1) Use the LHS method to draw a sample $a^{(r)}=[a_1^{(r)},a_2^{(r)},\ldots,a_{n_A}^{(r)}]$ of all $n_A$ input aleatory uncertainties. This sampling belongs to the third (inner-most) loop of the three-loop Monte Carlo simulation.

        2. (ii-b-2.2) RAVEN runs the FSM simulation model of interest with the sampled values of $a^{(r)}$ from Step (ii-b-2.1), together with the epistemic samples from Steps (ii-a) and (ii-b-1), to calculate a point estimate for the KPMs of interest.

      3. (ii-b-3)

        RAVEN collects the results obtained from all $n_{S3}$ samples of $a^{(r)}$ to generate an empirical cdf for the KPM of interest.

    3. (ii-c)

      RAVEN collects the results obtained from all $n_{S2}$ samples of $e_{-i}^{(q)}$ to generate a family of $n_{S2}$ empirical cdf curves for the KPM of interest. Generate a conditional p-box associated with these $n_{S2}$ empirical cdf curves (conditioned on the sampled value of $e_i$ in Step (ii-a)).

    4. (ii-d)

      Calculate the area within each conditional p-box obtained in Step (ii-c), denoted as $A_{e_i}^{KPM}$.

    5. (ii-e)

      Calculate the absolute difference between $A_{e_i}^{KPM}$ obtained in Step (ii-d) and the area within the p-box obtained from Step (ii) in Sec. 3.2 for the KPM of interest (which represents the magnitude of the epistemic uncertainty in the KPM due to input parameter uncertainties), denoted as $A^{KPM}$: $\Delta A_{e_i}^{KPM}=|A^{KPM}-A_{e_i}^{KPM}|$. Note that $\Delta A_{e_i}^{KPM}$ represents the reduction in the KPM epistemic uncertainty when the input epistemic uncertainty $e_i$ is fixed at its sampled value obtained in Step (ii-a).

  • (iii)

    RAVEN collects the results $\Delta A_{e_i}^{KPM}$ obtained from all $n_{S1}$ samples of $e_i^{(q)}$ and calculates the mean value $E[\Delta A_{e_i}^{KPM}]$ from those $n_{S1}$ values of $\Delta A_{e_i}^{KPM}$.

  • (iv)
    Calculate the sensitivity measure $S_i^{KPM}$ for the input epistemic uncertainty $e_i$ selected in Step (i) with respect to its contribution to the uncertainty in the KPM of interest using the following equation:

    $S_i^{KPM} = \dfrac{E[\Delta A_{e_i}^{KPM}]}{A^{KPM}}$   (1)
  • (v)

    Repeat Steps (i) to (iv) above with another input epistemic uncertainty until all $n_E$ input epistemic uncertainties are considered and their sensitivity measures $S_i^{KPM}$ are calculated.
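The three-loop pinching procedure can be sketched as below, again with a toy stand-in for the RAVEN-driven FSM, random sampling in place of LHS, and reduced sample sizes. The epistemic intervals, the aleatory distribution, and the final normalization of the mean area reduction by the unpinched p-box area (one reading of Eq. (1)) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def fsm_model(a, e):
    """Toy stand-in for one RAVEN-driven FSM run."""
    return e[0] + e[1] * a[0]

E_BOUNDS = np.array([[300.0, 400.0], [0.8, 1.2]])   # illustrative epistemic intervals

def pbox_area(sample_e, n_mid=20, n_inner=100):
    """Area enclosed by the KPM p-box when epistemic inputs come from sample_e."""
    grid = np.linspace(250.0, 500.0, 200)
    cdfs = []
    for _ in range(n_mid):                            # middle loop: epistemic samples
        e = sample_e()
        y = np.sort([fsm_model([rng.normal(50.0, 5.0)], e) for _ in range(n_inner)])
        cdfs.append(np.searchsorted(y, grid, side="right") / n_inner)
    cdfs = np.array(cdfs)
    band = cdfs.max(axis=0) - cdfs.min(axis=0)        # envelope width at each level
    return float(np.sum(band[:-1] * np.diff(grid)))   # rectangle-rule area

def sample_all():
    return rng.uniform(E_BOUNDS[:, 0], E_BOUNDS[:, 1])

A_kpm = pbox_area(sample_all)                         # unpinched epistemic uncertainty

sensitivities = []
for i in range(len(E_BOUNDS)):                        # step (i): pick e_i
    deltas = []
    for _ in range(5):                                # outer loop: n_S1 pinch values
        e_i = rng.uniform(*E_BOUNDS[i])
        def sample_pinched():
            e = sample_all()
            e[i] = e_i                                # pinch e_i at its sampled value
            return e
        deltas.append(abs(A_kpm - pbox_area(sample_pinched)))
    # S_i: mean reduction in p-box area, normalized by the unpinched area (cf. Eq. (1)).
    sensitivities.append(float(np.mean(deltas)) / A_kpm)
```

In this toy setting the additive input (with the wider effective range) dominates, so pinching it removes most of the p-box area while pinching the multiplicative input removes comparatively little.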

4 Applying Probabilistic Validation Methodology Embedded in the Fire Integrated Probabilistic Risk Assessment Computational Framework for a Nuclear Power Plant Fire Probabilistic Risk Assessment Case Study

This section applies the PV methodological steps (Fig. 1), embedded in the Fire I-PRA computational framework (Fig. 2) explained in Sec. 3, for a Fire PRA case study of NPPs. All fire simulation runs were done on the Illinois Campus Cluster at the University of Illinois at Urbana-Champaign.

4.1 Fire Probabilistic Risk Assessment Model.

A switchgear room fire at a pressurized water reactor is selected as a case study. This fire may damage the nearby control and power cables which are connected to safety-related components. The study assumes that fire-induced damage to some of these cables may cause a sustained hot short circuit and spurious opening of the pressurizer power-operated relief valve (PORV). Spurious actuation of the pressurizer PORV is assumed to be sustained long enough (i.e., stuck-open PORV) to cause a small break loss-of-coolant accident (SBLOCA). This fire-induced SBLOCA is considered as the initiating event in the hypothetical Fire PRA model. Figures 5–7 present the ET and FTs. Note that this PRA model is an example of the plant-specific PRA module in Fig. 2.

Fig. 5
Fire PRA event tree model used in the case study
Fig. 6
Fault tree for the high pressure injection system (HPIS) used in the case study
Fig. 7
Fault tree for the low pressure injection system (LPIS) used in the case study

Following the fire-induced SBLOCA, the reactor is assumed to be successfully tripped on a “low reactor coolant pressure” signal. Several safety systems can then be activated to mitigate the accident and bring the reactor to a safe shutdown condition (represented by the “OK” end state in Fig. 5). First, the high-pressure injection system (“HPIS” in Fig. 5) should be activated to inject borated water into the reactor coolant system and to control the reactor coolant inventory. This case study assumes that the HPIS has two trains, each consisting of a motor-driven pump and a motor-operated injection valve (Fig. 6). Successful operation of the HPIS, defined as both HPIS trains being able to inject water into the reactor vessel, would eventually bring the reactor to the safe shutdown condition (end state S1 in Fig. 5). If the HPIS fails to deliver a sufficient amount of coolant into the reactor vessel, the low-pressure injection system (“LPIS” in Fig. 5) can be used to control the reactor coolant inventory if the primary system can be depressurized below the LPIS pump shutoff head before core damage begins. Depressurization of the primary system, which is partially accomplished by the flow via the stuck-open PORV, can be supported by secondary cooling features. For example, secondary cooling can be achieved by delivering auxiliary feedwater to the steam generators and removing steam from the steam generators using their atmospheric dump valves. For simplicity, the depressurization of the primary system is modeled by the pivotal event “primary system depressurization (PSD)” in the event tree (Fig. 5) and is not further modeled with a supporting FT, assuming that its failure probability is known from available data. Failure to sufficiently depressurize the primary system within a specific time window would eventually lead to core damage (end state S4 in Fig. 5).
In contrast, if the PSD successfully depressurizes the primary system within the available time window, followed by a successful operation of the LPIS, the reactor can be brought to the safe shutdown condition (end state S2 in Fig. 5). If the LPIS fails to deliver a sufficient amount of water into the reactor vessel, core damage would occur (end state S3 in Fig. 5). This case study assumes that the LPIS has two trains, each consisting of a motor-driven pump and a motor-operated injection valve (Fig. 7). Successful operation of the LPIS is defined as both LPIS trains being able to inject water into the primary system. It is assumed that the control and power cables of the pump and valve for the LPIS Train #2 run inside the cable trays in the switchgear room and may be damaged by the fire. Fire-induced damage to these cables may fail the connected LPIS Train #2 pump and valve as modeled by the “LPIS_MDP2_F” and “LPIS_MOV2_F” basic events in Fig. 7.

Note that the features of the PV methodology demonstrated in this case study are not bounded by the complexity of the PRA model; however, for demonstration purposes, other relevant safety systems such as the residual heat removal system and the containment fan cooling system are not considered in this hypothetical model.

4.2 Fire-Induced Cable Damage Scenario of Interest.

This case study uses an NPP fire scenario from NUREG-1934 Scenario D, i.e., a motor control center panel fire in a switchgear room (Fig. 8) [50].

Fig. 8
Configuration of the fire compartment used in the case study (adapted from NUREG-1934 Scenario D [50])

The boundary walls are made of concrete with a 0.6-m thickness. Two electrical cabinets are placed in the fire compartment: one is assumed to be the fire ignition source, while the other is a potential target of fire-induced damage. In addition, three cable trays (A, B, and C in Fig. 8) are located near the ceiling. The cable trays are assumed to be filled with thermoset cables with crosslinked polyethylene insulation and neoprene jackets. These cables are modeled as cylinders with a diameter of 1.5 cm and homogeneous physical properties, using the thermally induced electrical failure (THIEF) model [51].
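For intuition on how such a cylindrical cable target responds thermally, a lumped-capacitance heat-up sketch is given below. This is not the THIEF model itself (which resolves radial conduction through the cable); the convective coefficient and the material properties used here are illustrative placeholders, not values from the case study:

```python
import numpy as np

def cable_temperature_history(t_gas, dt=1.0, diameter=0.015,
                              rho=1500.0, c=1500.0, h=10.0):
    """Lumped-capacitance heat-up of a cylindrical cable in a hot gas layer.

    t_gas: gas temperature history (deg C), one value per time-step dt (s).
    diameter: cable diameter (m); rho, c: density (kg/m3) and specific heat
    (J/(kg K)); h: convective coefficient (W/(m2 K)). All placeholders.
    """
    area_per_volume = 4.0 / diameter          # lateral area / volume of a cylinder
    temps = [float(t_gas[0])]                 # start in equilibrium with the gas
    for tg in t_gas[1:]:
        t = temps[-1]
        dTdt = h * area_per_volume * (tg - t) / (rho * c)
        temps.append(t + dTdt * dt)           # explicit Euler step
    return np.array(temps)
```

With these placeholder values the thermal time constant is roughly 14 min, so the cable temperature lags well behind a sudden gas-layer temperature rise, which is why the jacket temperature (rather than the gas temperature) is the relevant KPM.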

The cables inside the three cable trays A, B, and C may fail due to fire-induced temperature or radiant heat flux impact. For simplification, the maximum temperatures inside the cable jackets (TCB) of the three target cable trays are selected as three KPMs of interest for determining the status of these cables. If a KPM exceeds its corresponding failure threshold, the corresponding cables are assumed to fail immediately, resulting in subsequent failures of the connected equipment. Regarding the relationship between the target cables and the PRA equipment (i.e., the pressurizer PORV, the LPIS Train #2 pump and valve), this case study assumes that:

  • The pressurizer PORV control cable runs inside cable tray B; thus, fire-induced damage to cable tray B may cause a sustained hot short circuit and spurious opening of the pressurizer PORV, which may eventually cause a SBLOCA condition.

  • The LPIS_MDP2 power cable also runs inside cable tray B. Upon the fire-induced damage to cable tray B, the function of this pump may be lost.

  • The LPIS_MOV2 power cable runs inside cable tray A. Fire-induced damage to cable tray A may cause this motor-operated valve to fail to open (when needed).

These relationships are discussed in more detail in Sec. 4.4.11.2.
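The cable status logic described above (a KPM compared against its failure threshold) translates directly into a Monte Carlo damage-probability estimate. A sketch is given below; the 400 °C value is the thermoset jacket temperature criterion cited in this case study [7]:

```python
import numpy as np

# A cable is assumed to fail immediately when its KPM (maximum temperature
# inside the jacket, TCB) reaches the damage threshold.
DAMAGE_THRESHOLD_C = 400.0

def cable_damage_probability(tcb_samples, threshold=DAMAGE_THRESHOLD_C):
    """Fraction of Monte Carlo TCB samples at or above the failure threshold."""
    tcb = np.asarray(tcb_samples, dtype=float)
    return float(np.mean(tcb >= threshold))
```

Each sample here would be one CFAST run under sampled inputs; the exceedance fraction is the conditional cable-damage probability fed to the basic events in Fig. 7.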

4.3 Realities of Interest and Validation Hierarchy.

Within the simulation validation scope of the case study, the system-level reality of interest is associated with the fire-induced failures of the motor-driven pump and motor-operated injection valve on LPIS Train #2 (LPIS_MDP2_F and LPIS_MOV2_F in Fig. 7) and the fire-induced spurious actuation of the pressurizer PORV (SBLOCA initiating event in Fig. 5). Figure 9 presents the detailed hierarchy of the system-level fire simulation model considered in the case study. Documenting the system hierarchy is an important task to be done before conducting the PV methodology since this system hierarchy will help define the validation hierarchy of the PV methodology (i.e., loops i,j in Module A in Fig. 1). The system hierarchy in Fig. 9, viewed from the bottom up, is a sequence of relevant models representing realities of interest leading to the system-level one to be modeled. The element models at the lower hierarchical levels provide inputs to the ones at the higher levels, as represented by the arrows in Fig. 9.

Fig. 9
Hierarchy of the system-level fire simulation model considered in the case study
  • (Level 1—Model (1-1)) The fire initiation model captures the fire ignition process and determines the fire ignition frequency and fire characteristics (e.g., heat release rate (HRR) curve, size, and shape) for the fire source of interest. These pieces of information are then provided as input to the fire progression model (Level 2). For fire ignition frequency, this case study uses a data-driven method [18] to estimate a generic (industry-wide) fire ignition frequency distribution based on the historical fire event database [19]. To develop the HRR profile, experimental data provided in NUREG/CR-6850 [3,4] are leveraged.

  • (Level 2—Model (2-1)) The automatic fire detection model analyzes the effectiveness of heat and/or smoke detectors. In this case study, for simplicity, it is assumed that the automatic fire detection system has failed prior to the fire occurrence and cannot detect the fire.

  • (Level 2—Model (2-2)) The automatic fire suppression model captures the operation of suppression systems such as water-based, CO2, and Halon sprinklers. In this case study, following the assumption on failure of the automatic fire detection system, it is assumed that the automatic fire suppression system is not credited.

  • (Level 2—Model (2-3)) The fire-induced cable damage model simulates the spatiotemporal evolution of the fire and the fire-induced environmental conditions in the compartment. For the case study, CFAST, version 7.7.0 [52], a two-zone fire simulation software, is used to predict the physical KPMs associated with electrical cables of concern (i.e., maximum temperature inside a target cable jacket). Fire-induced cable damage probabilities can then be calculated by comparing these physical KPMs against the corresponding cable damage thresholds. Using CFAST, the whole fire compartment in Fig. 8 was considered the computational domain for the fire simulation and was divided into two adjacent compartments in CFAST: one for the low-ceiling area and one for the high-ceiling area. Each of the two adjacent compartments is then divided into two zones (or control volumes), i.e., an upper layer and a lower layer. CFAST assumes that the physical characteristics within each of these zones are homogeneous.

  • (Level 2—Model (2-4)) The circuit failure model analyzes circuit operation and functionality to determine equipment responses to fire-induced cable damage. This model includes quantification of circuit failure mode probability estimation. For the motor-driven pump and motor-operated valve in Train #2 of the LPIS, this case study uses a bounding assumption that if the control and/or power cables inside the cable trays are damaged due to the fire, the connected pump and valve will fail with certainty. Regarding the PORV spurious actuation, this case study leverages information given in Table 5-1 of NUREG/CR-7150 Volume 2 [15] which provides a probability distribution that represents the epistemic uncertainty associated with the conditional probability of spurious actuation, given the fire-induced damage to the associated cables.

  • (Level 3—Model (3-1)) The scenario-based damage model (Module #4 in Fig. 2) is an event-tree-based model that captures event sequences following the fire ignition. These event sequences include events that are associated with: (i) availability of automatic fire detection and suppression systems (top events “AD” and “AS” in Fig. 2, respectively); (ii) fire-induced cable damage (top event “CBD” in Fig. 2); and (iii) component-level failure due to circuit faults (e.g., spurious actuation) caused by fire-induced cable damage (top event “SPA” in Fig. 2). Events AD, AS, CBD, and SPA are modeled by the Level 2 element models in Fig. 9. Outputs of the scenario-based damage model are the system-level predictions of interest associated with the conditional failure probabilities of LPIS_MDP2_F and LPIS_MOV2_F and the conditional frequency of SBLOCA due to PORV spurious actuation, given fire occurrence. These quantities of interest are input to the plant-specific Fire PRA (Module E in Fig. 2).

The validation process in the PV methodology starts from the lowest level (model 1-1) and proceeds upward until it reaches the system-level simulation predictions of interest (model 3-1 outputs). This is represented by the two iterative loops i,j in Module A in Fig. 1. In practice, however, failure to validate models at lower levels in the validation hierarchy is often the cause of subsequent model validation failures at higher hierarchical levels [53]. Thus, among the element models in Fig. 9, this case study chooses the fire-induced cable damage model (model 2-3) to demonstrate the detailed steps of the PV methodology. This choice of scope is reasonable from a practical viewpoint because, in Fire PRA of NPPs, fire progression is sometimes modeled using simulation tools such as CFAST, while fire initiation and circuit failure models are mainly data-driven.

4.4 Applying Module A of the Probabilistic Validation Methodology (Fig. 1): Uncertainty Screening, Characterization, Propagation, and Aggregation.

This section documents the results of implementing Module A of the PV methodology for the CFAST fire-induced cable damage model (Secs. 4.4.1–4.4.10) and the quantification of the uncertainties associated with the system-level predictions (Sec. 4.4.11), i.e., the conditional failure probabilities of LPIS_MDP2 and LPIS_MOV2, and the conditional frequency of the fire-induced SBLOCA due to the stuck-open pressurizer PORV.

4.4.1 Step 1 in Fig. 1: Theory-Based Analysis and Qualitative Screening of Causal Influencing Factors and Their Sources of Uncertainties.

This step defines the scope of uncertainty analysis for the CFAST fire-induced cable damage model. Note that, in relation to Fig. 1, this model is represented by model M_{i,j,k} where i = 2 (i.e., the model is at Level 2 of the system hierarchy), j = 3 (i.e., the model is the third element model at Level 2, as shown in Fig. 9), and k = 1 (i.e., this case study considers only one model form, which is the collection of governing equations and modeling assumptions used in CFAST).

Substep 1.1: Uncertainty identification. This substep identified all relevant factors and their associated uncertainties that can influence the simulation prediction (fire-induced maximum temperature inside the cable jackets [TCB] of the three target cable trays) by considering the generic theoretical causal framework developed in Ref. [34] and the specific details in the development process of this simulation model. The uncertainty identification in this substep was supported by engineering judgments and a review of regulatory documents, academic publications, industry reports, and CFAST technical references.

Substep 1.2: Qualitative Screening. Substep 1.2 conducted a qualitative screening to define a practical scope for the uncertainty analysis for M_{2,3,1} by determining: (i) factors that would be considered implicitly; (ii) factors that would be considered explicitly and treated as fixed values (i.e., deterministic factors); and (iii) factors that would be considered explicitly and treated as uncertain values (i.e., nondeterministic factors). This qualitative screening was supported by a literature review and engineering judgments from subject-matter experts at the Idaho National Laboratory, the Sandia National Laboratories, and the Socio-Technical Risk Analysis research laboratory [41]. The screening also considered unique characteristics and constraints of the modeling and simulation problem at hand (e.g., availability of supporting data, budget, time limitations, and the required level of detail of the analysis). The identified nondeterministic factors were further classified into three categories depending on whether they contribute to the uncertainty in the input parameters, the model form, or the numerical approximations in the CFAST model. Nondeterministic factors considered in these three uncertainty categories are listed and handled in the subsequent steps of the PV methodology.

Note that when sufficient resources are available, employing systematic methodologies, such as phenomena identification and ranking table (PIRT), can significantly enhance the comprehensiveness and objectivity of the uncertainty identification and screening process. PIRT is a robust tool for prioritizing phenomena based on their relative importance to a specific scenario. In the context of NPP fire modeling applications, PIRT has been effectively used to identify and rank important parameters and phenomena, thereby contributing to a more accurate and credible fire-induced risk analysis. For instance, in an exercise conducted by the U.S. Nuclear Regulatory Commission (NRC) in 2008 [54], a PIRT was used to identify and rank phenomena associated with fire modeling applications in NPPs. Various fire scenarios, such as a main control room fire, were evaluated with a specific goal to be achieved (figure of merit), such as predicting the time to operator abandonment, and then ranked based on their relative importance [54]. In another exercise carried out in 2012, PIRT was employed in a joint assessment of cable damage and fire effects quantification, identifying parameters that can influence the hot short-induced failure modes of electrical control circuits after cables are damaged by fire [16].

4.4.2 Step 2 in Fig. 1: Approximate Characterization of Uncertainties Associated With Input Parameters of Model M_{2,3,1}.

This step analyzed the nondeterministic factors that contribute to the uncertainties in the CFAST input parameters and characterized these input parameter uncertainties using available fire experiment and test data. In general, input parameter uncertainty consists of two components: (i) uncertainty associated with the structural assumption, namely, the selection of a parametric distribution type, and (ii) uncertainty associated with parameter estimation, i.e., the data-driven statistical estimation of the distributional parameters. The accuracy of the distribution type selection and parameter estimation can be influenced by three main factors: quality and quantity of data, data relevancy, and the uncertainty characterization method.

  • Data quality refers to the accuracy and reliability of the data that describes the input parameter. One typical factor that can affect the data quality is measurement error. Meanwhile, data quantity refers to the amount of data available to estimate the input parameters. More qualified data available for the input parameter estimation would reduce the uncertainty associated with the selection of the distribution type and the estimation of the distributional parameters.

  • Data relevancy refers to the applicability of the collected data for the specific system being analyzed. For instance, if the data used to estimate input distribution parameters were collected under conditions different from the system and scenario being analyzed, uncertainty induced by nonrepresentativeness of the data sources should be addressed.

  • The uncertainty characterization method refers to the statistical analysis method used to estimate the probability distributions, e.g., maximum likelihood estimation, Bayesian parameter estimation, and goodness-of-fit test.

Within the limited scope of the case study, nine CFAST uncertain input parameters were identified from Substep 1.2 and their uncertainty distributions have been derived in this Step 2 using maximum likelihood estimation and tested with the Kolmogorov–Smirnov goodness-of-fit test. These input parameters and their distributions are reported in Table 1 below [41]. The probability distributions for these nine CFAST input parameters were derived from experimental data that are available in the literature, including previous experimental efforts conducted by, or collected and studied by, the U.S. NRC in several projects (as documented in the NUREG documents listed in Table 1). In addition, all data and distributions listed in Table 1 were studied by the authors and several fire experiment experts from Sandia National Laboratories during a DOE-sponsored project and were published in a technical report [41]. Note that, due to the lack of data on fire location, this work assumes that the location of the fire source is uniformly distributed along the length of the ignition cabinet.
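For illustration, the fitting-and-testing procedure used to obtain the Table 1 distributions can be sketched with SciPy as follows. This is a sketch only: the location parameter is pinned at zero, the Kolmogorov–Smirnov test applied with parameters fitted from the same data is known to be approximate (anti-conservative), and the actual fits in Ref. [41] may differ in detail:

```python
import numpy as np
from scipy import stats

def fit_and_test_gamma(data, alpha_level=0.05):
    """Fit a Gamma distribution by maximum likelihood and check the fit
    with a Kolmogorov-Smirnov goodness-of-fit test.

    The location is fixed at zero so only the shape (alpha) and scale
    (beta) parameters are estimated, matching the Table 1 convention.
    """
    shape, loc, scale = stats.gamma.fit(data, floc=0.0)
    ks_stat, p_value = stats.kstest(data, "gamma", args=(shape, loc, scale))
    return {"alpha": shape, "beta": scale,
            "ks_stat": ks_stat, "p_value": p_value,
            "rejected": p_value < alpha_level}
```

The same pattern applies to the other parametric families in Table 1 (uniform, triangular) by swapping the distribution name and fit routine.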

Table 1

CFAST input parameters treated as random variables and their uncertainty distributions [41]

ID | Input parameter | Probability distribution | Source
Input parameters associated with the HRR profile
X1 | Peak HRR (kW) | Gamma (α = 0.36, β = 57) | NUREG-2178 [22]
X2 | Time to peak HRR (minutes) | Uniform (4, 18) | Sakurahara et al. [25], based on experimental data provided in Appendix G of NUREG/CR-6850 Volume 2 [4]
X3 | Steady burning at peak HRR (minutes) | Triangular (0, 0, 20) | Sakurahara et al. [25], based on experimental data provided in Appendix G of NUREG/CR-6850 Volume 2 [4]
X4 | Time to decay (minutes) | Uniform (10, 30) | Sakurahara et al. [25], based on experimental data provided in Appendix G of NUREG/CR-6850 Volume 2 [4]
Input parameters associated with concrete material properties
X5 | Thermal conductivity of concrete (W/(m K)) | Gamma (α = 30.148, β = 0.099675) | Based on experimental data in Ref. [55]
X6 | Specific heat of concrete (J/(kg K)) | Gamma (α = 128.389, β = 6.33134) | Based on experimental data in Ref. [55]
X7 | Density of concrete (kg/m3) | Gamma (α = 30.4134, β = 83.5117) | Based on experimental data in Ref. [55]
Input parameters associated with thermoset cable material properties
X8 | Cable jacket thickness (mm) | Gamma (α = 17.2389, β = 0.077989) | Based on empirical data provided in NUREG/CR-6931 [6]
Input parameters associated with fire source
X9 | Fire location (m) | Uniform (2.15, 6.35) | Based on engineering judgment. Values are coordinates of the fire source along the length of the ignition cabinet.

Note: For a Gamma distribution, its probability density function is given below.

f(x; α, β) = x^(α−1) e^(−x/β) / (β^α Γ(α)), x > 0

where Γ is the Gamma function, and α and β are the shape and scale parameters, respectively.

Ideally, the use of empirical data in any step of the validation process to support uncertainty characterization (e.g., PV Step 2 and Step 4 where empirical data would be used to characterize the input parameter uncertainties, PV Step 7A where empirical data would be used to characterize model-form uncertainty, and PV Step 9 where empirical data would be used for Bayesian updating model and submodel output uncertainties) must consider the data uncertainties and their associated contributing factors. It is important to emphasize that the use of data sources from publications such as NUREG reports does not guarantee absolute validity of the data; however, they do provide a credible starting point for our analysis. Due to the limited scope of the case study in this paper, the authors did not include a data uncertainty analysis for the empirical data used in this step. In a more comprehensive study, one should adequately consider the uncertainties associated with the data sources. As pointed out in Sec. 4.1.2 of Ref. [34], techniques and procedures are available in the literature for handling uncertainties associated with different types of empirical data, including expert judgment and experimental data [34].

4.4.3 Step 3 in Fig. 1: Quantitative Screening of Uncertainties Associated With Input Parameters of Model M_{2,3,1}.

The quantitative screening in this step was conducted using the Morris elementary effects (EE) analysis method [44] to reduce the dimension of the nondeterministic input space of the CFAST model. Among the CFAST input parameters in Table 1, those that have a negligible influence on the simulation predictions were “screened out.” In this case study, these screened-out parameters are treated as deterministic parameters in subsequent steps of the analysis. Those that are not screened out are kept as nondeterministic variables, and their uncertainties are propagated to the model prediction level in Step 6 (Sec. 4.4.6). For the Morris EE analysis, three KPMs were selected as the simulation predictions of interest, i.e., the maximum temperature inside the cable jacket (TCB) for each target cable tray (A, B, and C).

To conduct the Morris EE analysis, the state space of each input parameter X_h (where h = 1, 2, …, 9) in Table 1 was discretized into a p-level grid (with p = 4). The Morris EE analysis follows the computational procedure in Sec. 3.1 to calculate the EE values and the sensitivity measures, i.e., the sample means (μ) and the sample standard deviations (σ) of the EE distributions. In general, μ assesses the overall influence of an input parameter on the model output, while σ indicates the influences of nonlinearity and interactions among input parameters on the model output [44]. Values of both μ and σ should be considered simultaneously to reliably identify input parameters that have considerable influence on the model output [44]. Figure 10 illustrates the sensitivity results obtained for the nine CFAST input parameters with respect to their influence on KPM #1 (maximum temperature inside the cable jackets of cable tray A). In this figure, in the bar chart on the left, the right ends of the bars indicate observed point values of μ and the line segments (on the right end of the bars) represent the 95% confidence intervals around these point values. The point values of μ and σ obtained for all three KPMs of interest are reported in Table 2.
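The Morris EE computation summarized above can be sketched as follows. This is a minimal illustration on the unit hypercube (inputs would be mapped to the Table 1 distributions in practice); only forward +Δ perturbations are implemented, and the production analysis relies on the framework described in Sec. 3 rather than on this code:

```python
import numpy as np

def morris_ee(model, k, p=4, r=100, seed=0):
    """One-at-a-time Morris elementary-effects screening on [0, 1]^k.

    model: maps a length-k input vector to a scalar KPM value.
    p: number of grid levels; r: number of random trajectories.
    Returns (mu, sigma): sample mean and standard deviation of each
    input's elementary-effect distribution.
    """
    rng = np.random.default_rng(seed)
    delta = p / (2.0 * (p - 1))              # standard Morris step size
    levels = np.arange(p // 2) / (p - 1)     # starting levels so x + delta <= 1
    ee = np.empty((r, k))
    for t in range(r):                       # one trajectory per outer loop
        x = rng.choice(levels, size=k)
        y = model(x)
        for i in rng.permutation(k):         # perturb inputs one at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            ee[t, i] = (y_new - y) / delta   # elementary effect of input i
            x, y = x_new, y_new
    return ee.mean(axis=0), ee.std(axis=0, ddof=1)
```

For a purely linear model, every elementary effect of an input equals its coefficient, so σ vanishes; nonzero σ flags nonlinearity or interactions, consistent with the interpretation given above.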

Fig. 10
Results of the Morris EE analysis for nine input parameters listed in Table 1 associated with KPM #1 (maximum temperature inside the cable jackets of cable tray A)
Table 2

Results obtained from the Morris EE analysis for nine CFAST input parameters in Table 1 

       |   | X1      | X2     | X3     | X4     | X5     | X6     | X7     | X8     | X9
KPM #1 | μ | 510.343 | 8.480  | 27.274 | 6.881  | −0.056 | −0.027 | −0.056 | −2.849 | −0.002
       | σ | 243.881 | 12.973 | 31.459 | 11.674 | 0.060  | 0.028  | 0.058  | 4.499  | 0.180
KPM #2 | μ | 88.846  | 2.673  | 10.448 | 2.930  | −0.130 | −0.066 | −0.145 | −0.272 | 74.070
       | σ | 112.666 | 6.122  | 18.181 | 6.041  | 0.228  | 0.108  | 0.238  | 0.719  | 108.407
KPM #3 | μ | 2.624   | 0.227  | 0.824  | 0.392  | −0.084 | −0.046 | −0.091 | −0.005 | 0.000
       | σ | 2.804   | 0.352  | 1.409  | 0.602  | 0.162  | 0.084  | 0.170  | 0.010  | 0.012

Note: For associated names of the input parameters Xh, please refer to Table 1. KPMs #1, #2, #3 are maximum temperature inside the cable jackets of cable tray A, B, and C, respectively. Units of μ and σ are in °C.

In practice, to decide whether an input parameter can be screened out, one can apply specific screening criteria and a concept of a threshold area, designed based on both μ and σ, to separate influential input parameters from the noninfluential ones. The use of such screening criteria, accounting for both μ and σ, should facilitate the consideration of both the overall influence of the input parameter on the model output (represented by μ) and the influence of nonlinearity and interactions among input parameters on the model output (represented by σ). For example, the threshold area in this case study is assumed to be an area on the μ versus σ plot where μ and σ are less than 1% of the corresponding damage criterion (400 °C for the maximum temperature inside the cable jacket [7]). With the threshold area defined, one can design screening criteria such as the following hypothetical criterion that is used in this case study:

Screening criterion for the Morris EE analysis in the case study: If a specific input parameter falls within the noninfluential region (i.e., inside the threshold area) for all three KPMs, that input parameter is identified as a noninfluential factor in the fire simulation model. Otherwise, that input parameter is not considered a noninfluential factor (and cannot be screened out).

Using the screening criterion above for the sensitivity measures μ and σ, three parameters were identified as noninfluential parameters for all KPMs: X5 (thermal conductivity of concrete), X6 (specific heat of concrete), and X7 (density of concrete). These input parameters were screened out and, hence, would be treated as fixed/deterministic values in the subsequent steps using the median values obtained from their distributions in Table 1. Meanwhile, the remaining six input parameters, including X1, X2, X3, X4, X8, and X9, are kept as nondeterministic variables and their uncertainties are propagated to the model prediction level in Step 6.
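The screening criterion above can be expressed compactly as follows (a sketch; comparing |μ| rather than μ against the threshold, so that negative elementary effects are judged by magnitude, is an assumption of this sketch):

```python
# Threshold area: 1% of the 400 deg C cable-jacket damage criterion.
THRESHOLD = 0.01 * 400.0   # 4.0 deg C

def screen_inputs(mu_table, sigma_table, n_inputs):
    """Return indices of inputs screened out as noninfluential.

    mu_table / sigma_table: dicts mapping each KPM name to a list of
    per-input mu (or sigma) values, laid out as in Table 2. An input is
    screened out only if both |mu| and sigma fall inside the threshold
    area for every KPM.
    """
    screened_out = []
    for h in range(n_inputs):
        noninfluential = all(
            abs(mu_table[kpm][h]) < THRESHOLD and sigma_table[kpm][h] < THRESHOLD
            for kpm in mu_table)
        if noninfluential:
            screened_out.append(h)
    return screened_out
```

Applied to the Table 2 values, this rule screens out exactly X5, X6, and X7, matching the result stated above (note that X8 survives because its σ for KPM #1, 4.499 °C, exceeds the 4 °C threshold).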

Notes on sensitivity of the Morris EE analysis results: Results of the Morris EE analysis may be sensitive to several uncertain factors, such as the type and parameters of the probability distributions assigned to the CFAST input parameters in Table 1. For instance, during the literature review, it was noted that NUREG-2178 [22] and NUREG/CR-6850 [4] provide two different Gamma distributions that can be assigned for the peak HRR (i.e., input parameter X1) of electrical fires in multiple cable bundles with qualified cable inside vertical cabinets. In addition, while performing the Kolmogorov–Smirnov test to fit selected distributions to the experimental data provided in NUREG/CR-6850 [4] for input parameter X4 (i.e., time to decay in the HRR profile), two alternative distributions were not rejected and can be assigned for this parameter. Thus, to evaluate the sensitivity of the Morris EE analysis results to these uncertain factors, a sensitivity test was conducted in which the abovementioned alternative distributions associated with input parameters X1 and X4 were considered. In this test, distributions of the other CFAST input parameters were kept the same as reported in Table 1. Table 3 below lists four combinations of X1 and X4 distributions used in this sensitivity test.

Table 3

Combinations of X1 and X4 distributions used to test the sensitivity of Morris EE analysis results

Test ID | X1 distribution | X1 source | X4 distribution | X4 source
1 | Gamma (α = 0.36, β = 57) | NUREG-2178 [22] | Uniform (10, 30) | Distributions obtained using experimental data provided in NUREG/CR-6850 [4] (applies to all rows)
2 | Gamma (α = 0.7, β = 216) | NUREG/CR-6850 [4] | Uniform (10, 30) |
3 | Gamma (α = 0.36, β = 57) | NUREG-2178 [22] | Triangular (10, 17, 30) |
4 | Gamma (α = 0.7, β = 216) | NUREG/CR-6850 [4] | Triangular (10, 17, 30) |

Note: Distributions for the remaining input parameters are kept the same as reported in Table 1.

Results from this sensitivity test have shown that the screening result obtained from the Morris EE analysis does not change (i.e., X5, X6, and X7 are still the screened-out input parameters) when varying the combination of X1 and X4 distributions among those listed in Table 3.

Sampling size is another factor that can affect the sensitivity of the Morris EE analysis results. Sampling size in the Morris EE analysis is a function of the number of input parameters treated as random variables and the number of trajectories created through the input space [44]. By varying the number of trajectories, one would be able to test the impact of different sampling sizes on the Morris EE analysis results. Within the scope of this case study, a detailed convergence study on the Morris sampling size was not conducted; instead, based on a previous study [41], a sampling size of 5000 was selected for the Morris EE analyses throughout this case study.
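The trajectory-based screening described above can be sketched in a few lines. The following is a minimal, self-contained illustration of the Morris elementary-effects scheme (not the implementation or sampling size used in the case study), with a toy linear model standing in for the CFAST runs:

```python
import numpy as np

def morris_trajectories(k, r, levels=4, seed=0):
    """Generate r one-at-a-time Morris trajectories in the unit hypercube."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    base_grid = np.arange(0.0, 1.0, 1.0 / (levels - 1))[: levels // 2]
    trajectories = []
    for _ in range(r):
        x = rng.choice(base_grid, size=k)      # random starting point
        order = rng.permutation(k)             # order in which inputs move
        points = [x.copy()]
        for i in order:
            x = x.copy()
            x[i] += delta                      # perturb exactly one input
            points.append(x)
        trajectories.append((np.array(points), order, delta))
    return trajectories

def morris_screen(model, k, r=50, levels=4, seed=0):
    """Return mu_star (mean |EE|) and sigma (std of EE) for each input."""
    effects = [[] for _ in range(k)]
    for points, order, delta in morris_trajectories(k, r, levels, seed):
        y = model(points)                      # one model evaluation per point
        for step, i in enumerate(order):
            effects[i].append((y[step + 1] - y[step]) / delta)
    effects = np.array(effects)
    return np.abs(effects).mean(axis=1), effects.std(axis=1)
```

Inputs with both small mu_star and small sigma would be screened out and fixed at nominal values, mirroring how X5, X6, and X7 were screened out in the case study.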

4.4.4 Step 4 in Fig. 1: Detailed Characterization of Unscreened Sources of Input Parameter Uncertainty Identified in Step 3.

Based on the list of unscreened input parameter uncertainties obtained in Step 3, Step 4 looks for additional data, if available, to refine the uncertainty characterization for the unscreened input parameters. Such additional data may include data collected from additional experiments, system operations, and subject domain expert judgment. In this uncertainty refining process, sources of aleatory and epistemic uncertainty, if they coexist in an input parameter, are treated separately (this is consistent with common practice in several domains, including the nuclear [56] and aerospace [57] domains). Among the unscreened input parameters obtained in Step 3, the peak HRR (X1) is considered a mixed source of aleatory and epistemic uncertainty. To facilitate this separate treatment for input parameter X1, previous regulatory and industry research is leveraged. In NUREG/CR-6850, the peak HRR distribution of a fire ignition source is developed based on expert judgment and fire test data [4]. The expert panel first established an anticipated fire intensity value for the given fire ignition source and assigned it as representative of the 75th percentile fire intensity [4]. The panel then established a “high-confidence” fire intensity value expected to bound the vast majority of fires involving the given fire source and assigned it as the 98th percentile fire intensity [4]. The HRR distribution was then developed as a two-parameter gamma distribution whose parameters were derived by matching the 75th and 98th percentiles [4]. In NUREG/CR-6850, peak HRR distributions are assigned to different electrical enclosure fire sources based on three factors: qualified versus unqualified cable, open versus closed enclosures, and single versus multiple cable bundles that could be ignited [4].
As compared to NUREG/CR-6850, NUREG-2178 [22] offers an updated classification of electrical enclosures based on their function, size, contents, and ventilation and, hence, derives new peak HRR distributions to reflect this updated classification. NUREG-2178, however, relies on the same sources of experimental fire test data and the same methodology used in NUREG/CR-6850 for deriving the peak HRR distributions [22].

To develop the p-box for the peak HRR, two bounding gamma distributions are considered. The first is the Gamma (α = 0.36, β = 57) used in Table 1, taken from Table 7-1 in NUREG-2178 [22] for the “Motor Control Centers and Battery Chargers” function group and thermoset fuel type. The α and β values of this distribution were derived based on the 75th and 98th percentile values of the fire intensity (25 kW and 130 kW, respectively) in NUREG-2178 [22]. Meanwhile, the second bounding gamma distribution for the peak HRR is derived by assuming that the two fire intensity values above (25 kW and 130 kW) correspond to the 50th and 95th percentiles (instead of the 75th and 98th percentiles). By calculating α and β using the procedure in Appendix D.3 of NUREG-2178 [22], this assumption results in Gamma (α = 0.8, β = 50.26). Using these two bounding gamma distributions, a family of gamma distributions is obtained, i.e., Gamma p-box (0.36 ≤ α ≤ 0.8, 50.26 ≤ β ≤ 57), that represents the mixed aleatory-epistemic uncertainty associated with the peak HRR (X1). Uncertainties associated with the true values of α and β, represented by the two intervals, constitute the epistemic portion of the mixed uncertainty; meanwhile, the selection of the gamma distribution family captures the aleatory portion associated with the inherent variation of the peak HRR value.
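The bounding construction described above can be illustrated numerically. The sketch below approximates the p-box envelope by the empirical CDFs of the four corner (α, β) combinations; a rigorous envelope may require optimizing over the full parameter box, so this is an illustrative simplification:

```python
import numpy as np

rng = np.random.default_rng(42)

# corner (alpha, beta) combinations of the Gamma p-box for the peak HRR
corners = [(0.36, 57.0), (0.36, 50.26), (0.8, 57.0), (0.8, 50.26)]

n = 20_000
x_grid = np.linspace(0.0, 400.0, 401)          # peak HRR values (kW)
corner_cdfs = []
for alpha, beta in corners:
    samples = np.sort(rng.gamma(shape=alpha, scale=beta, size=n))
    # empirical CDF of each corner distribution on a common grid
    corner_cdfs.append(np.searchsorted(samples, x_grid) / n)
corner_cdfs = np.array(corner_cdfs)

pbox_lower = corner_cdfs.min(axis=0)   # lower CDF bound of the p-box
pbox_upper = corner_cdfs.max(axis=0)   # upper CDF bound of the p-box
```

Any gamma distribution with parameters inside the (α, β) box then lies (approximately) between `pbox_lower` and `pbox_upper`.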

Regarding the other unscreened input parameters (i.e., X2, X3, X4, X8, and X9), no additional information was available to refine their uncertainties. However, in subsequent steps, the uncertainties associated with X2, X3, X4, and X8 were assumed to be sources of pure aleatory uncertainty, while the uncertainty associated with X9 was assumed to be purely epistemic. Results of the uncertainty refinement in this step are summarized in Table 4.

Table 4

Results of the detailed uncertainty characterization (Step 4) for the CFAST input parameters

ID | Input parameter | Detailed uncertainty
Mixed aleatory-epistemic uncertainty sources:
X1 | Peak HRR (kW) | Gamma p-box (0.36 ≤ α ≤ 0.8, 50.26 ≤ β ≤ 57)
Pure aleatory uncertainty sources:
X2 | Time to peak HRR (minutes) | Uniform (4, 18)
X3 | Steady burning at peak HRR (minutes) | Triangular (0, 0, 20)
X4 | Time to decay (minutes) | Uniform (10, 30)
X8 | Cable jacket thickness (mm) | Gamma (α = 17.2389, β = 0.077989)
Pure epistemic uncertainty sources:
X9 | Fire location (m) | Uncertainty interval: [2.15; 6.35]
Screened-out input parameters treated as fixed values (median values from Table 1 distributions):
X5 | Thermal conductivity of concrete (W/(m K)) | Median value: 2.97184
X6 | Specific heat of concrete (J/(kg K)) | Median value: 810.76494
X7 | Density of concrete (kg/m3) | Median value: 2512.0924

Note that, as mentioned in Sec. 4.4.2, the use of empirical data to support uncertainty characterization requires proper handling of data uncertainties and their associated contributing factors (i.e., data quality and quantity, data relevancy, and uncertainty characterization method applied to the data). Due to the limited scope of the case study in this paper, the authors did not include a data uncertainty analysis for the empirical data used in this step. In a more comprehensive study, one should adequately consider the uncertainties associated with the data sources using techniques and procedures available in the literature such as those pointed out in Sec. 4.1.2 of Ref. [34].

Note also that the separation of aleatory and epistemic uncertainties may introduce dependencies among the uncertainty sources:

  • Inherent dependencies or correlations among the uncertainty sources (e.g., inherent dependencies among the input parameters Xi). In the case study, the input parameters are assumed to be independent and, thus, this type of dependency is not considered. Treatment of this type of dependency in PV is the subject of ongoing work [58], in which the computational algorithm of the PV methodology is extended to account for correlated physical input parameters using the Spearman rank correlation coefficient method. The correlated parameters can then be sampled together (to preserve their correlation) using Latin hypercube sampling during the double-loop Monte Carlo sampling process in Step 6 of the PV methodology. This advancement, however, is not included in the present paper.

  • Dependency between the aleatory uncertainty and epistemic uncertainty portions in a mixed aleatory-epistemic uncertainty source. For example, input parameter X1 is modeled with a family of gamma distributions, i.e., Gamma p-box (0.36 ≤ α ≤ 0.8, 50.26 ≤ β ≤ 57). In this representation, the choice of gamma distribution represents the aleatory uncertainty, while the ranges of the α and β distributional parameters represent the epistemic uncertainty. The dependency between the aleatory and epistemic portions arises because the choice of distribution type (i.e., a gamma distribution) influences the values obtained for the distributional parameters (α and β) during distribution fitting and parameter estimation. This dependency is inevitable; the remedy should therefore focus on reducing the magnitudes of the errors associated with both the choice of distribution and the parameter estimation process. Reducing the error associated with parameter estimation is more straightforward, as gathering more data allows a better fit of the distribution. Reducing the error associated with the choice of distribution type, however, is more difficult. A potential solution would be to compute a probability of interest for all possible distribution models and assess the variability in the computed probability values. Another solution would be to parameterize the choice of distribution type so that the uncertainty associated with this choice can be represented by the uncertainty in the parameter. In the current state of the PV methodology, these potential solutions have not been employed and are subject to future work.

4.4.5 Step 5 in Fig. 1: Characterization of Uncertainties Associated With Numerical Approximations for Model M2,3,1.

In the CFAST model, each modeling compartment is divided into two zones (or control volumes), i.e., an upper layer and a lower layer, and the physical characteristics within each zone are assumed homogeneous [52]. CFAST numerically solves the mass balance and energy conservation equations within the lower and upper layers with consideration of the ideal gas law and heat conduction into the walls [52]. CFAST also accounts for the momentum between zones in adjacent compartments using the horizontal or vent flow equations based on Bernoulli's law, but assumes that momentum within each zone is zero [52]. The conservation equations and the ideal gas law are used to derive a set of governing equations that predict the conserved quantities in each compartment, including the compartment pressure, upper layer volume, and upper and lower layer temperature [52]. CFAST also solves a heat transfer equation for each wall surface temperature (i.e., ceiling, upper wall, lower wall, floor). The governing equations used in CFAST are included in Appendix B of Ref. [52].

Differential equation-based models like CFAST rarely admit exact solutions for practical problems. Thus, CFAST uses approximate numerical solutions provided by the differential/algebraic solver DASSL [59,60], which solves the set of governing equations by formulating it as a root-finding problem. These approximate numerical solutions are, however, prone to numerical approximation errors, and the characterization of these errors is usually referred to as the model verification process. In general, these numerical errors may include discretization errors, iterative convergence errors, round-off errors, and errors due to computer programming mistakes. The solution procedure used in CFAST is included in Appendix C of Ref. [52].
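To illustrate what formulating the time integration as a root-finding problem means, the toy example below advances a single-state energy balance (a crude, entirely hypothetical stand-in for a zone-model equation, not CFAST's actual governing equations) with implicit Euler, where each time-step is solved by Newton iteration:

```python
import numpy as np

def rhs(T):
    """Toy energy balance for an upper-layer temperature (K): constant fire
    heat input balanced against linear heat loss to the walls
    (hypothetical coefficients)."""
    q_fire, h_loss, capacity, T_wall = 100.0, 2.0, 50.0, 300.0
    return (q_fire - h_loss * (T - T_wall)) / capacity

def implicit_euler(T0, t_end, dt):
    """Advance dT/dt = rhs(T); each step solves the root-finding problem
    g(T_new) = T_new - T_old - dt*rhs(T_new) = 0 with Newton iteration."""
    T, t, history = T0, 0.0, [T0]
    while t < t_end - 1e-9:
        t += dt
        T_new = T                              # initial Newton guess
        for _ in range(50):
            g = T_new - T - dt * rhs(T_new)
            eps = 1e-6                         # numerical Jacobian dg/dT_new
            dg = ((T_new + eps - T - dt * rhs(T_new + eps)) - g) / eps
            step = g / dg
            T_new -= step
            if abs(step) < 1e-10:
                break
        T = T_new
        history.append(T)
    return np.array(history)

# temperature relaxes toward the steady state T_wall + q_fire/h_loss = 350 K
temps = implicit_euler(T0=300.0, t_end=400.0, dt=1.0)
```

DASSL handles a coupled differential/algebraic system with adaptive order and step size, but the core idea per step, driving a residual to zero, is the same.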

Substantial efforts have been made to verify CFAST. For example, Volume 3 of the CFAST Technical Reference Guide [61] documented a verification study for CFAST that was conducted at the request of the U.S. NRC in accordance with ASTM E 1355 [62]. In addition, NUREG-1824 Volume 5 [35] documented analytical tests, code checking, and numerical tests to address the mathematical and numerical robustness of CFAST. CFAST is actively maintained by the Fire Research Division at the National Institute of Standards and Technology and, hence, numerical and programming errors found during verification activities have been fixed to the extent possible through updated versions of the code.

For the case study, numerical approximation uncertainties associated with the remaining numerical approximation errors in CFAST (those that may exist but have not been identified or eliminated) are treated as “implicitly considered factors” (a result of Substep 1.2). This means that the presence of these remaining numerical errors and their potential influence on the CFAST model prediction are acknowledged but would not be quantified in the uncertainty analysis. In other words, the final uncertainty associated with the CFAST-predicted KPMs of interest would be conditional on the “default” magnitude of the numerical approximation uncertainties caused by these remaining numerical errors.

4.4.6 Step 6 in Fig. 1: Propagation of Unscreened Input Parameter Uncertainties Through Model M2,3,1.

Following the results of Step 4, the uncertainty propagation in Step 6 needs to separately propagate aleatory and epistemic sources of uncertainty in the six unscreened input parameters (Table 4). This uncertainty propagation uses the computational procedure in Sec. 3.2.

Sampling size is a factor that can affect the consistency or sensitivity of the results. In the case study, due to limited time and resources, detailed convergence studies for the samplings in this step were not performed. Instead, based on insights from the authors' previous study [41] and exercises with CFAST fire models, a few LHS sampling sizes of the outer loop (for epistemic uncertainties) and the inner loop (for aleatory uncertainties) of the Monte Carlo simulation were tested to evaluate the consistency of the simulation results. Consequently, the sample sizes were selected as 100 and 200 for the outer and the inner loops, respectively, and were assumed to be sufficiently large. In practice, without prior knowledge, one should perform rigorous convergence studies to select sufficient sample sizes that can guarantee the consistency of the simulation results. For example, Bui et al. [42] combined the use of the replicated LHS method and bootstrap resampling method to perform convergence studies for the two loops of the nested Monte Carlo sampling strategy and to generate confidence intervals for the sampling-based simulation results representing the uncertainty associated with the selected sampling-based uncertainty propagation method. In the case study of the present paper, the sampling uncertainty is, however, treated as an implicitly considered factor (based on the qualitative screening in Substep 1.2) given that sufficiently large sample sizes have been selected for the two loops.

After performing the uncertainty propagation using the double-loop Monte Carlo simulation described in Sec. 3.2, a family of empirical cdf curves was obtained for each of the CFAST model predictions (i.e., the KPMs of interest). Note that the empirical cdf curves for a specific KPM are constructed from the CFAST simulation results (for that KPM) by sorting the KPM outputs from lowest to highest and plotting them against their percentiles. The percentiles can be represented as the cdf values, as shown in Fig. 11 for KPM #1 (maximum temperature inside the cable jackets of cable tray A). By bounding these families of empirical cdf curves, empirical p-boxes were obtained for the KPMs of interest. These p-boxes represent the impact of both aleatory and epistemic input uncertainties on the CFAST model predictions.

Fig. 11
Empirical p-box of KPM #1 obtained from propagating input parameter uncertainties using the double-loop Monte Carlo simulation
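The double-loop propagation and the construction of the empirical p-box can be sketched as follows. A trivial stand-in function replaces CFAST, only X1 (with its epistemic α and β intervals) and X2 are propagated, and uniform sampling is used in place of LHS, so this is an illustration of the nesting logic rather than the case study's implementation:

```python
import numpy as np

def toy_model(x1, x2):
    """Hypothetical stand-in for a CFAST run returning one KPM value."""
    return 0.5 * x1 + x2

def double_loop_pbox(n_outer=100, n_inner=200, seed=1):
    rng = np.random.default_rng(seed)
    y_grid = np.linspace(0.0, 300.0, 301)
    cdfs = []
    for _ in range(n_outer):                        # outer loop: epistemic
        alpha = rng.uniform(0.36, 0.8)              # epistemic interval on alpha
        beta = rng.uniform(50.26, 57.0)             # epistemic interval on beta
        x1 = rng.gamma(alpha, beta, size=n_inner)   # aleatory: peak HRR (kW)
        x2 = rng.uniform(4.0, 18.0, size=n_inner)   # aleatory: time to peak (min)
        y = toy_model(x1, x2)
        # empirical cdf of the inner-loop outputs on a common grid
        cdfs.append(np.searchsorted(np.sort(y), y_grid) / n_inner)
    cdfs = np.array(cdfs)
    # bound the family of empirical cdf curves to form the p-box
    return y_grid, cdfs.min(axis=0), cdfs.max(axis=0)

y_grid, pbox_lo, pbox_hi = double_loop_pbox()
```

Each outer iteration fixes one realization of the epistemic quantities and yields one empirical cdf; the pointwise envelope over all outer iterations is the p-box.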

4.4.7 Step 7 in Fig. 1: Characterization of Model-Form Uncertainty Associated With Model M2,3,1.

Model-form uncertainty in CFAST arises from those assumptions, conceptualizations, abstractions, mathematical formulations, and approximations (other than numerical approximations, which are considered numerical approximation uncertainties) that are introduced into the development of the model. For example, CFAST assumes that two zones per compartment provide a reasonable approximation of the fire scenario being evaluated. In addition, CFAST does not explicitly solve the momentum equation, except for use of the Bernoulli equation for the flow velocity at vents; this reflects the further assumption that the complete momentum equation is not needed to solve the set of equations associated with the model. Consequently, the temperature and gas concentrations are assumed to be constant throughout each zone and only change as functions of time. In CFAST, objects such as electrical cabinets are treated as probes that measure the conditions at a point but have no effect on the environment in the compartment [52]. CFAST, therefore, cannot capture the shielding effects of the intervening medium, as it adopts the point source calculation [52]. This simplification can introduce additional uncertainty into the predictions of the fire-induced environmental conditions and the physical KPMs for the damage targets, especially when there is a large obstruction between the fire source and the damage target or when the compartment has a complex shape. Other assumptions in CFAST can be found in Sec. 1.5 in Volume 1 of the CFAST Technical Reference Guide [52].

CFAST has been verified and validated by the U.S. NRC for the fire scenario used in this case study (Appendix D of NUREG-1934 [50]). This means that the conditions for Step 7A (Path 1 going into Step 7A in Fig. 1) are met, i.e., a validation domain2 exists and the application domain3 of the simulation model of interest is enclosed within the validation domain. Thus, in this case study, Step 7A, which leverages the modified area validation metric approach [48], is used to characterize the model-form uncertainty associated with the CFAST model. The implementation of this step for the case study uses the computational procedure in Sec. 3.3.

This case study used a high-resolution computational fluid dynamics-based fire simulation model built with FDS to generate “synthetic” experimental data for quantifying the model-form uncertainty associated with the CFAST model. The FDS model was set up consistently with the CFAST model for the same fire scenario, and 20 synthetic sets of FDS model predictions were generated, each containing three prediction values for the three KPMs of interest. For generating these synthetic data, six input parameters (X1, X2, X3, X4, X8, and X9) were sampled from their distributions in Table 1 using the LHS method.

Figure 12 illustrates the results of the model-form uncertainty quantification, obtained for KPM #1 (maximum temperature inside the cable jackets of cable tray A). In this figure, the FDS synthetic data points for the KPMs are plotted as the solid-line step function, and the dashed-line p-box is the one obtained from the uncertainty propagation in Step 6. Values of d+ and d− were calculated for the regions where the empirical data are larger and smaller than the model responses, respectively. Consequently, the model-form uncertainty (MFU in Fig. 12) was calculated. These calculations were performed using the computational procedure in Sec. 3.3.

Fig. 12
Model-form uncertainty calculated with the modified area validation metric approach for KPM #1
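The d+ and d− area calculation can be sketched as follows. This is one plausible discretized reading of the modified area validation metric, not the exact procedure of Sec. 3.3: d+ accumulates where the data ECDF lies to the right of the entire p-box (data larger than the model responses), and d− where it lies to the left:

```python
import numpy as np

def mavm_areas(data, y_grid, pbox_lower, pbox_upper):
    """Discretized d+ / d- areas between a data ECDF and a model p-box."""
    data = np.sort(np.asarray(data, dtype=float))
    # empirical CDF of the (synthetic) experimental data on the common grid
    ecdf = np.searchsorted(data, y_grid, side="right") / data.size
    gap_plus = np.maximum(0.0, pbox_lower - ecdf)    # data right of the p-box
    gap_minus = np.maximum(0.0, ecdf - pbox_upper)   # data left of the p-box
    dx = np.diff(y_grid)
    d_plus = np.sum(0.5 * (gap_plus[:-1] + gap_plus[1:]) * dx)    # trapezoid
    d_minus = np.sum(0.5 * (gap_minus[:-1] + gap_minus[1:]) * dx)
    return d_plus, d_minus
```

As we read the appending operation described in Step 8 (Eq. (17) of Ref. [34]), the resulting model-form uncertainty then widens the Step 6 p-box by d− on the left and d+ on the right.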

Note that, similar to the discussion in Sec. 4.4.2, the use of synthetic data in this step to support the model-form uncertainty characterization requires proper handling of data uncertainties and their associated contributing factors. Due to the limited scope of the case study in this paper, the authors did not include a data uncertainty analysis for the synthetic data used in this step, but a more comprehensive study should adequately consider these uncertainties using techniques and procedures available in the literature such as those pointed out in Sec. 4.1.2 of Ref. [34].

4.4.8 Step 8 in Fig. 1: Estimating Total Uncertainty Associated With Model M2,3,1 Prediction by Aggregating Results Obtained From Steps 6 and 7.

In this step, the total uncertainty associated with each KPM of interest was obtained by appending the corresponding model-form uncertainty about the p-box obtained for that KPM in Step 6 (which captures the input parameter uncertainties). This process is feasible because model-form uncertainty is treated as purely epistemic. The appending was done using Eq. (17) in Ref. [34], with the numerical approximation uncertainty skipped because it was treated as an implicitly considered factor (Step 5). In Fig. 13, the inner, dashed-line p-box obtained from the input uncertainty propagation in Step 6 and the outer, solid-line p-box obtained by appending model-form uncertainty about the inner p-box are illustrated for KPM #1 (maximum temperature inside the cable jackets of cable tray A).

Fig. 13
Visualization of the p-boxes associated with KPM #1 (maximum temperature inside the cable jackets of cable tray A)

4.4.9 Step 9 in Fig. 1: Bayesian Updating the Uncertainty Associated With Model M2,3,1 Prediction.

As discussed in Sec. 4.1.9 of Ref. [34], the Bayesian updating feature in Step 9 of the PV methodology serves two goals:

  • When Path 1 or Path 2 in Fig. 1 is executed (i.e., the model-form uncertainty of model M2,3,1 is characterized using Step 7A or Step 7B, respectively), Step 9 accounts for the cumulative effect of sources of uncertainty that have not been addressed in the previous steps (e.g., uncertainty due to errors in the screening processes in Steps 1 and 3, uncertainty in selecting user-defined model features). This is done by using Bayesian updating to maximize the use of available empirical data (if any) and update the total uncertainty associated with the model prediction Y2,3,1 (obtained in Step 8).

  • When Path 3 is executed (i.e., the model-form uncertainty of model M2,3,1 cannot be reliably quantified using either Step 7A or Step 7B), Step 9 provides an alternative solution to model-form uncertainty quantification. This is done by Bayesian updating the uncertainty associated with the model M2,3,1 response/prediction Y2,3,1 (obtained in Step 6) with available empirical data (if any) in Step 9. This Path 3 bypasses Steps 7 and 8 and goes directly to Step 9 to generate a best estimate for the total uncertainty associated with Y2,3,1.

In Step 9, any use of additional empirical data associated with the element model M2,3,1 for Bayesian updating, similar to what was previously discussed, requires proper handling of data uncertainties and their associated contributing factors. The updating process and relevant Bayesian updating equations have been discussed in Sec. 4.1.9 of Ref. [34]. In the present case study, it is assumed that no additional information about the CFAST model is available and, therefore, the demonstration of Step 9 is out of scope in this paper.

4.4.10 Step 10 in Fig. 1: Aggregating Results Associated With Multiple Model Forms M2,3,k (k = 1, 2, …, N2,3) to Estimate the Uncertainty Associated With M2,3 Model Prediction.

In this case study, only one model form of the fire-induced cable damage model was considered, which is the CFAST model. Thus, there is no need to consider Step 10 for the element model M2,3,1, i.e., the final p-boxes for the three KPMs of interest remain the same as the ones obtained in Step 8 (e.g., for KPM #1, it is the outer p-box in Fig. 13). In practice, if multiple plausible model forms are considered for an element model, one of the approaches discussed in Sec. 4.1.10 of Ref. [34] can be leveraged to aggregate the results of these model forms and obtain the aggregated total uncertainty p-box for the model prediction.

4.4.11 Uncertainty Quantification for the System-Level Quantities of Interest.

This section presents the procedure to quantify the uncertainties associated with the system-level quantities of interest (i.e., the conditional failure probabilities of LPIS_MDP2, LPIS_MOV2, and conditional frequency of PORV-spurious-actuation-induced SBLOCA given fire occurrence), which are obtained from the system-level model in the system hierarchy in Fig. 9, i.e., the scenario-based damage model. First, the fire-induced cable damage probabilities, e.g., Pr(CBA|FR) and Pr(CBB|FR), and their uncertainties are quantified (Sec. 4.4.11.1) using (i) the uncertainties associated with the CFAST outputs, i.e., the KPMs of interest; and (ii) the uncertainties associated with corresponding cable failure thresholds. These results are then combined with the considerations and assumptions discussed in Sec. 4.3 regarding the remaining element models of the system hierarchy in Fig. 9 to quantify the scenario-based damage model and estimate the system-level quantities of interest and their uncertainties (Sec. 4.4.11.2).

4.4.11.1 Quantifying fire-induced cable damage probabilities and their uncertainties.

The conditional fire-induced cable damage probabilities (given fire occurrence) such as Pr(CBA|FR) and Pr(CBB|FR) can be calculated by comparing the KPM values (TCB of the three cable trays) against their corresponding failure thresholds. Here, CBA and CBB denote the failure events of cable trays A and B, respectively; FR denotes the occurrence of fire; and Pr(·) represents the probability of an event. Uncertainty associated with the cable thermal failure threshold, Tcrt, follows a lognormal distribution, Tcrt ~ LN(μ = 6.0704, σ = 0.0872) (°C), for thermoset cables [63]. Note that the fire-induced cable damage probability calculation assumed that the cables fail immediately if TCB exceeds its corresponding thermal failure threshold. Key steps of this calculation process for each of the three cable trays are summarized below:

  • Generate random samples of the damage threshold from its distribution LN(μ = 6.0704, σ = 0.0872) using the LHS method. The qth sample of the damage threshold is denoted as Tcrt(q), q = 1, 2, …, Nq. In this case study, Nq = 10,000 represents a sufficiently large sample size in the Monte Carlo simulation for the damage threshold sampling.

  • For each sample Tcrt(q) of the cable damage threshold, use the appended p-box associated with TCB of the cable tray being considered to derive a cable failure probability interval:

  1. Identify the intersections between the appended TCB p-box and the vertical line representing the sampled Tcrt(q).

  2. Get the cdf values associated with the identified intersection points above. Take the complementary values of these cdf values as the lower and upper bounds of the cable failure probability interval.

  • Repeat the previous step for all Nq sampled values of Tcrt(q) to obtain Nq failure probability intervals. Each interval can be represented as a uniform distribution and, thus, the Nq intervals can be represented as a family of uniform distributions.

  • Bound the family of uniform distributions obtained above to construct a p-box that represents the uncertainty in the fire-induced cable damage probability.
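The four-step procedure above can be sketched as follows. Plain Monte Carlo is used in place of LHS for brevity, the function name and the temperature grid are illustrative, and the KPM p-box is supplied as discretized lower/upper CDF bounds:

```python
import numpy as np

def cable_failure_pbox(y_grid, kpm_cdf_lower, kpm_cdf_upper,
                       mu=6.0704, sigma=0.0872, n_q=10_000, seed=3):
    """P-box for Pr(cable damage | fire) from an appended KPM p-box.

    For each sampled threshold Tcrt(q), the failure probability interval is
    [1 - F_upper(Tcrt), 1 - F_lower(Tcrt)], since failure means T_CB > Tcrt.
    """
    rng = np.random.default_rng(seed)
    tcrt = rng.lognormal(mean=mu, sigma=sigma, size=n_q)
    p_low = 1.0 - np.interp(tcrt, y_grid, kpm_cdf_upper)
    p_high = 1.0 - np.interp(tcrt, y_grid, kpm_cdf_lower)
    # read each interval as a uniform distribution and bound the family
    p_grid = np.linspace(0.0, 1.0, 201)
    width = np.maximum(p_high - p_low, 1e-12)[:, None]
    cdfs = np.clip((p_grid[None, :] - p_low[:, None]) / width, 0.0, 1.0)
    cdfs[p_grid[None, :] >= p_high[:, None]] = 1.0   # handle degenerate intervals
    return p_grid, cdfs.min(axis=0), cdfs.max(axis=0)
```

The returned envelopes are the p-box over the failure probability axis; reading off the intersections with the appended TCB p-box (the interp step) corresponds to the graphical substeps 1 and 2 above.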

In quantifying the PRA model (discussed later in Sec. 4.5.1), some of the minimal cut sets required fire-induced joint failure probabilities of multiple cables, such as Pr(CBA ∩ CBB|FR), as input. This situation can be considered a type of common cause failure (CCF), where multiple redundant safety systems are challenged by a shared cause. CCF analysis is challenging in PRA because CCF events are rare and existing CCF event data are typically sparse, and even sparser for a specific NPP and its unique operating conditions. Such sparse data limit the use and reliability of the data-driven parametric CCF quantification methods commonly used in current NPP PRAs [64]. This study leveraged CFAST simulation results to directly calculate the fire-induced common-cause cable failure probabilities, consistent with the simulation-informed probabilistic methodology for CCF quantification introduced by Sakurahara et al. [64].

To quantify Pr(CBA ∩ CBB|FR) and its uncertainty, for example, the following procedure was followed:

  • Introduce a new random variable, TCB(A,B) = min(TCB(A), TCB(B)), where TCB(A) and TCB(B) are the maximum temperatures inside the cable jackets of cable trays A and B, respectively.

  • Calculate Pr(CBA ∩ CBB|FR) using the following equations:
    (2) Pr(CBA ∩ CBB|FR) = Pr(TCB(A,B) > Tcrt|FR)
    where, depending on the relationship between TCB(A) and TCB(B), we have
    (3) Pr(TCB(A,B) > Tcrt|FR) = Pr(CBA|FR) if TCB(A) ≤ TCB(B), and Pr(CBB|FR) otherwise
  • Calculate the uncertainty p-box associated with Pr(CBA ∩ CBB|FR) based on the uncertainty p-box of Pr(CBA|FR) or Pr(CBB|FR) accordingly, depending on the relationship between TCB(A) and TCB(B).
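Under the min-based construction above, the joint failure probability can be estimated directly from paired temperature samples; the sketch below uses purely hypothetical temperature distributions as stand-ins for the CFAST outputs:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# thermal failure threshold samples (deg C), LN(mu=6.0704, sigma=0.0872)
tcrt = rng.lognormal(mean=6.0704, sigma=0.0872, size=n)

# hypothetical paired jacket-temperature samples for trays A and B (deg C);
# in the case study these would come from the same CFAST realizations
t_a = rng.normal(480.0, 40.0, size=n)
t_b = rng.normal(520.0, 40.0, size=n)

t_ab = np.minimum(t_a, t_b)        # T_CB(A,B) = min(T_CB(A), T_CB(B))

p_a = np.mean(t_a > tcrt)          # Pr(CB_A | FR)
p_b = np.mean(t_b > tcrt)          # Pr(CB_B | FR)
p_joint = np.mean(t_ab > tcrt)     # Pr(CB_A and CB_B | FR)
```

Because both cables fail in a given realization exactly when the minimum temperature exceeds the threshold, `p_joint` equals the sample joint failure probability and can never exceed either marginal, which is the property the simulation-informed CCF treatment exploits.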

4.4.11.2 Quantifying probabilistic risk assessment equipment failure probabilities and their uncertainties.

Identifying the Fire PRA components and the associated cable raceways is a critical task in Fire PRA. Based on the assumptions made in Sec. 4.2, Table 5 below summarizes the relationships between the target cable trays A, B, and C in the switchgear room fire scenario (Fig. 8) and the potentially impacted components of the hypothetical Fire PRA model (Figs. 5–7) considered in this case study.

Table 5

Mapping among the fire-induced damage targets and PRA equipment and events

Damage targets | Impacted PRA equipment | Failure mode | PRA event | PRA input
Cable tray A | LPIS Train 2 motor-operated injection valve (LPIS_MOV2) | Fail to open | LPIS_MOV2_F | Pr(LPIS_MOV2_F)
Cable tray B | LPIS Train 2 motor-driven pump (LPIS_MDP2) | Loss of function | LPIS_MDP2_F | Pr(LPIS_MDP2_F)
— | Pressurizer PORV | Spurious and sustained opening | Initiating event (FireSBLOCA) | fr(FireSBLOCA) (per year)

Note: For notations of the events, please refer to Fig. 7.

To quantify the PRA inputs in the right-most column of Table 5, which are outputs of the scenario-based damage model (Module 3-1 in Fig. 9), the following assumptions and considerations are applied to its lower-level element models:

  • The fire ignition frequency of the electrical cabinet, fr(FR) (1/year), is assumed to follow a lognormal distribution, LN(μ=4.12,σ=1.11), provided in NUREG-2169, Table 4-4 [18] for Bin 15 ignition frequency.

  • Automatic detection and automatic suppression are not credited in this case study, meaning that their failure probabilities are considered unity.

  • The conditional probability of the spurious actuation of the pressurizer PORV, given fire occurrence (FR) and fire-induced damage to cable tray B (CBB), follows a Beta distribution, Beta(α=2.58,β=2.92), taken from Table 5-1 in NUREG/CR-7150 Volume 2 [15]. This conditional probability is denoted as Pr(FireSBLOCA|CBB,FR). The initiating event is represented by the following Boolean expression:
    (4)
  • It is assumed that, if cable tray A is damaged (CBA) by the fire, “LPIS_MOV2_F” occurs with certainty, Pr(LPIS_MOV2_F|CBA,FR)=1. Note that the LPIS_MOV2_F event is represented by the following Boolean expression:
    (5)
  • It is assumed that, given fire-induced damage to cable tray B, the function of LPIS_MDP2 is lost with certainty, i.e., Pr(LPIS_MDP2_F|CBB,FR) = 1. The probability of the LPIS_MDP2_F event is accordingly quantified as:
    Pr(LPIS_MDP2_F) = Pr(CBB|FR) × Pr(LPIS_MDP2_F|CBB,FR) = Pr(CBB|FR)    (6)

Note that, because the uncertainties associated with Pr(CBA|FR) and Pr(CBB|FR), obtained from Sec. 4.4.11.1, are represented with uniform p-boxes, the uncertainties associated with the PRA inputs on the left-hand side of Eqs. (4)–(6) can also be represented with empirical p-boxes. These results are used to quantify the Fire PRA ET–FT model and estimate the core damage risk induced by the given fire (Sec. 4.5.1).
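As a rough sanity check on how the p-box bounds travel through Eq. (4), note that for independent, nonnegative factors, the bounds of a product are the products of the bounds. The sketch below illustrates this; all numeric bounds are illustrative placeholders, not the case study's actual values.

```python
from math import prod

def product_bounds(intervals):
    """Bounds on a product of independent, nonnegative uncertain factors:
    multiply the lower bounds together, and the upper bounds together."""
    lo = prod(iv[0] for iv in intervals)
    hi = prod(iv[1] for iv in intervals)
    return lo, hi

# fr(FireSBLOCA) = fr(FR) * Pr(CBB|FR) * Pr(FireSBLOCA|CBB,FR)
fr_FR = (1.0e-3, 5.0e-2)   # illustrative bounds on fr(FR), 1/yr
p_CBB = (0.05, 0.30)       # illustrative envelope of the uniform p-box
p_SA  = (0.20, 0.75)       # illustrative spread of the Beta(2.58, 2.92)
lo, hi = product_bounds([fr_FR, p_CBB, p_SA])  # bounds on fr(FireSBLOCA)
```

In the case study itself, the empirical p-box of fr(FireSBLOCA) is obtained by sampling rather than by this interval shortcut, but the interval check is useful for verifying that sampled results stay within plausible bounds.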

4.5 Applying Module B of the Probabilistic Validation Methodology (Fig. 1): Acceptability Evaluation.

In this case study, the acceptability evaluation is demonstrated at the application output level, i.e., for the core damage risk estimated by the Fire PRA model. The plant risk acceptability criteria available in regulatory documents, such as those in NUREG/CR-6850 [4] intended for quantitative screening of fire scenarios, can be used. To facilitate this acceptability evaluation (Sec. 4.5.2), the total uncertainty associated with the core damage risk of the Fire PRA model is first calculated in Sec. 4.5.1.

4.5.1 Step 11 in Fig. 1: Quantifying the Total Uncertainty Associated With the Application Output of Interest.

This step quantified the risk scenarios associated with the core damage end states in the Fire PRA model (end states S3 and S4 in Fig. 5). The Boolean expressions for these two core damage scenarios were derived from the ET in Fig. 5 as follows:
S3 = FireSBLOCA ∩ HPIS ∩ PSD¯ ∩ LPIS    (7)
S4 = FireSBLOCA ∩ HPIS ∩ PSD    (8)
In Eqs. (7) and (8), FireSBLOCA denotes the initiating event; HPIS, LPIS, and PSD, respectively, denote the HPIS, LPIS, and PSD failure events; and PSD¯ denotes the PSD success event. HPIS and LPIS can be further expanded by applying Boolean algebra to their FTs in Figs. 6 and 7, respectively; as a result, their minimal cut sets are obtained as follows:
HPIS = (HPIS_MDP1 ∩ HPIS_MDP2) ∪ (HPIS_MDP1 ∩ HPIS_MOV2) ∪ (HPIS_MOV1 ∩ HPIS_MDP2) ∪ (HPIS_MOV1 ∩ HPIS_MOV2)    (9)
LPIS = (LPIS_MDP1 ∩ LPIS_MDP2_F) ∪ (LPIS_MDP1 ∩ LPIS_MDP2_NF) ∪ (LPIS_MDP1 ∩ LPIS_MOV2_F) ∪ (LPIS_MDP1 ∩ LPIS_MOV2_NF) ∪ (LPIS_MOV1 ∩ LPIS_MDP2_F) ∪ (LPIS_MOV1 ∩ LPIS_MDP2_NF) ∪ (LPIS_MOV1 ∩ LPIS_MOV2_F) ∪ (LPIS_MOV1 ∩ LPIS_MOV2_NF)    (10)

Plugging Eqs. (9) and (10) into Eqs. (7) and (8) then yields the minimal cut sets for the core damage scenarios. A risk function, denoted by R, was then derived based on these core damage minimal cut sets. The required inputs to this risk function R are summarized in Table 6.
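A risk function of this kind can be evaluated with the standard rare-event (sum-of-cut-set-products) approximation. The sketch below uses an illustrative two-train HPIS cut-set list and the point values from Table 6; the cut sets and the initiating event frequency are assumptions for illustration, not the paper's exact expression for R.

```python
# Rare-event (sum-of-cut-set-products) approximation of a risk function R.
# The cut sets below are an illustrative sketch of a two-train HPIS, not the
# exact minimal cut sets derived from Figs. 6 and 7.
CUT_SETS = [
    ("HPIS_MDP1", "HPIS_MDP2"),
    ("HPIS_MDP1", "HPIS_MOV2"),
    ("HPIS_MOV1", "HPIS_MDP2"),
    ("HPIS_MOV1", "HPIS_MOV2"),
]

def risk(p, fr_ie):
    """Core damage frequency ~ IE frequency times the sum of cut set
    products (valid when basic event probabilities are small)."""
    return fr_ie * sum(p[a] * p[b] for a, b in CUT_SETS)

# Basic event point values from Table 6; the IE frequency is illustrative.
p = {"HPIS_MDP1": 0.03, "HPIS_MDP2": 0.03, "HPIS_MOV1": 0.05, "HPIS_MOV2": 0.05}
cdf = risk(p, fr_ie=1.0e-4)   # 1/yr
```

Because R is an explicit function of its inputs, any input that carries a distribution or p-box instead of a point value can simply be sampled and pushed through this function, which is what the double-loop Monte Carlo below does.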

Table 6

Failure data assumed for the hypothetical Fire PRA model

Name | Description | Probability/frequency | Source
FR | Fire ignition frequency of the electrical cabinet source (1/year) | LN(μ = −4.12, σ = 1.11) | Table 4-4, NUREG-2169 [18]
FireSBLOCA|CBB,FR | Conditional spurious actuation of the pressurizer PORV given fire occurrence and fire-induced damage to cable tray B | Beta(α = 2.58, β = 2.92) | Table 5-1, NUREG/CR-7150 Volume 2 [15]
CBA|FR, CBB|FR, (CBA∩CBB)|FR | Conditional failures of cable trays A and B, and common cause failure of both cable trays, given fire occurrence | Empirical uniform p-boxes | Obtained in Sec. 4.4.11.1
FireSBLOCA | Fire-induced SBLOCA frequency (1/year) | Empirical p-box | Obtained from solving Eq. (4)
LPIS_MOV2_F | Failure of the LPIS Train #2 motor-operated injection valve due to fire | Empirical p-box | Obtained from solving Eq. (5)
LPIS_MDP2_F | Failure of the LPIS Train #2 motor-driven pump due to fire | Empirical p-box | Obtained from solving Eq. (6)
PSD | Failure to sufficiently depressurize the primary system within the required time window | 1.0 × 10−3 | Assumption
HPIS_MDP1 | Failure of the HPIS Train #1 motor-driven pump | 0.03 | Assumption
HPIS_MDP2 | Failure of the HPIS Train #2 motor-driven pump | 0.03 | Assumption
HPIS_MOV1 | Failure of the HPIS Train #1 motor-operated injection valve | 0.05 | Assumption
HPIS_MOV2 | Failure of the HPIS Train #2 motor-operated injection valve | 0.05 | Assumption
LPIS_MDP1 | Failure of the LPIS Train #1 motor-driven pump | 0.03 | Assumption
LPIS_MDP2_NF | Failure of the LPIS Train #2 motor-driven pump due to nonfire causes | 0.03 | Assumption
LPIS_MOV1 | Failure of the LPIS Train #1 motor-operated injection valve | 0.05 | Assumption
LPIS_MOV2_NF | Failure of the LPIS Train #2 motor-operated injection valve due to nonfire causes | 0.05 | Assumption

In calculating R, two inputs, i.e., fr(FR) and Pr(FireSBLOCA|CBB,FR), were assumed as sources of aleatory uncertainty. Meanwhile, Pr(CBA|FR), Pr(CBB|FR), and Pr(CBA∩CBB|FR) were assumed as sources of epistemic uncertainty. With these considerations, aleatory and epistemic uncertainties in the PRA inputs were separately propagated through the core damage risk function using the double-loop Monte Carlo simulation approach, similar to the way CFAST input parameter uncertainties were propagated through the CFAST model in Sec. 4.4.6. The fire-induced core damage risk empirical cdf curves and their bounding p-box obtained for the fire scenario and the Fire PRA model being considered in this case study are shown in Fig. 14.

Fig. 14
Empirical cdf curves (left) and their bounding p-box (right) obtained for the fire-induced core damage risk metric in this case study

The key steps to obtain the results in Fig. 14 using the double-loop Monte Carlo approach are as follows:

  (i) Generate samples from the empirical p-boxes associated with the three sources of epistemic uncertainty, Pr(CBA|FR), Pr(CBB|FR), and Pr(CBA∩CBB|FR), obtained in Sec. 4.4.11.1. This is the outer loop of the double-loop Monte Carlo, with Nout = 100 representing a sufficiently large sample size for the outer loop. Each sample obtained in this step consists of three single uniform distributions, one for each of the three sources of epistemic uncertainty.

  (ii) For each set of the three sampled uniform distributions obtained in step (i):

    (a) Generate a sample (point value) from each of the distributions associated with the aleatory input uncertainties, fr(FR) and Pr(FireSBLOCA|CBB,FR).

    (b) Generate a sample (point value) from each of the three sampled uniform distributions obtained in step (i).

    (c) Use the sampled values from steps (ii-a) and (ii-b) and the other point-valued inputs in Table 6 to calculate a point-value estimate of the core damage risk using the risk function R.

    (d) Repeat steps (ii-a), (ii-b), and (ii-c) Nin = 1000 times, where Nin represents a sufficiently large sample size for the inner loop of the double-loop Monte Carlo simulation. This results in a set of 1000 values for the core damage risk.

    (e) Construct an empirical cdf of the core damage risk using the 1000 values obtained in step (ii-d).

  (iii) Repeat step (ii) Nout = 100 times to obtain a family of 100 empirical cdf curves of the core damage risk.

  (iv) Bound the family of distributions obtained in step (iii) to obtain a p-box that represents the total uncertainty associated with the fire-induced core damage frequency for the fire scenario and the Fire PRA model being considered.
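The double-loop procedure above can be sketched in a few lines of Python. The risk function and the p-box bounds below are illustrative stand-ins (the actual R and the empirical p-boxes come from the case study), and the p-box envelope is taken quantile-wise as a simple approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_OUT, N_IN = 100, 1000  # outer (epistemic) and inner (aleatory) sample sizes

def sample_epistemic():
    """Step (i): draw one uniform distribution [lo, hi] from each empirical
    p-box -- here, placeholder bounds standing in for Pr(CBA|FR), Pr(CBB|FR),
    and Pr(CBA∩CBB|FR)."""
    return [(rng.uniform(0.02, 0.10), rng.uniform(0.15, 0.35)) for _ in range(3)]

def risk(fr_fire, p_sa, p_cba, p_cbb, p_ccf):
    """Stand-in for the core damage risk function R (not the paper's exact
    minimal-cut-set expression)."""
    return fr_fire * p_cbb * p_sa * (1.0e-3 + 0.05 * p_cba + p_ccf)

curves = np.empty((N_OUT, N_IN))
for i in range(N_OUT):                                # steps (i) and (iii)
    dists = sample_epistemic()
    for j in range(N_IN):                             # steps (ii-a)-(ii-d)
        fr_fire = rng.lognormal(-4.12, 1.11)          # aleatory: fr(FR), 1/yr
        p_sa = rng.beta(2.58, 2.92)                   # aleatory: spurious PORV
        p_cba, p_cbb, p_ccf = (rng.uniform(lo, hi) for lo, hi in dists)
        curves[i, j] = risk(fr_fire, p_sa, p_cba, p_cbb, p_ccf)
    curves[i].sort()                                  # step (ii-e): empirical cdf

# Step (iv): quantile-wise envelope of the 100 cdfs approximates the p-box
p_box_lower = curves.min(axis=0)
p_box_upper = curves.max(axis=0)
```

Each row of `curves` is one empirical cdf from the inner loop; the envelope of the family gives the bounding p-box analogous to Fig. 14.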

4.5.2 Step 12A in Fig. 1: Performing Acceptability Evaluation at the Application Output Level.

Once the fire-induced core damage frequency associated with the fire scenarios of concern is quantified, risk aggregation needs to be conducted to compute the updated total plant core damage frequency. This updated total plant risk encompasses all types of internal and external initiating events that are considered in a full-scope NPP PRA model. “Updated” here means that this is the total plant risk calculated with the fire-related simulation predictions (of interest) being some of its inputs, as compared to the original total plant risk estimate obtained without using those newly added fire simulation models. The updated plant risk can then be compared against the corresponding safety goals or regulatory requirements (e.g., the NRC's risk acceptance criteria in Regulatory Guide 1.174 [65]) to help evaluate the acceptability of the simulation predictions used as input to the plant PRA model. Satisfying the regulatory requirements would support the conclusion that the simulation predictions of interest and their current degree of confidence are acceptable for the Fire PRA application at hand. This is because, in such a case, there is no need for a more realistic estimation of the simulation predictions (or other parts of the application model). As a result, the simulation predictions can gain an adequate degree of confidence to support the considered application.

The Fire PRA application in this case study was applied to only one critical plant scenario (not a full-scope plant PRA), so it is not possible to obtain and analyze the total core damage frequency values. For simplicity, it is assumed that an acceptability criterion for the fire-induced core damage frequency associated with the fire compartment considered in this case study, derived from the quantitative screening criteria for single fire compartment analysis in Table 7-2 of NUREG/CR-6850 [4], is used. This relies on the further assumption that the electrical cabinet fire source considered is the only major ignition source in this compartment. Under these assumptions, the criterion is that the core damage frequency value associated with the fire compartment should be less than 1 × 10−7 (1/year) [4]. From the fire-induced core damage risk p-box obtained in Step 11 (Fig. 14), the uncertainty intervals associated with the 5th-percentile and 95th-percentile risk values can be extracted.

By comparing the acceptability criterion with the uncertainty interval of the core damage frequency estimate, represented by [R0.05, R0.95], one finds that the criterion falls within the uncertainty interval. This result indicates that, although the validity of the fire simulation predictions cannot yet be established for the Fire PRA application of interest, the uncertainty interval may be reduced such that the updated interval satisfies the criterion. In this case, an importance ranking analysis can be utilized to identify the most significant contributing sources of epistemic uncertainty and inform decision-makers as to where resources should be prioritized to efficiently reduce the uncertainty and improve the validity of the simulation predictions.
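The percentile-interval extraction and three-way comparison described above can be sketched as follows. The criterion value is the one quoted from NUREG/CR-6850; the curve handling assumes each row of `curves` holds one sorted set of inner-loop risk samples, as in Step 11.

```python
import numpy as np

def percentile_interval(curves, q):
    """Interval spanned by the q-th percentile value across a family of
    empirical cdfs (rows of `curves` are the sorted inner-loop samples)."""
    vals = np.quantile(curves, q, axis=1)
    return vals.min(), vals.max()

def acceptability(curves, criterion=1.0e-7):
    """Three-way comparison of the criterion against [R0.05, R0.95]."""
    lo = percentile_interval(curves, 0.05)[0]  # lower end of 5th-percentile interval
    hi = percentile_interval(curves, 0.95)[1]  # upper end of 95th-percentile interval
    if hi < criterion:
        return "acceptable"
    if lo > criterion:
        return "not acceptable"
    return "indeterminate: reduce epistemic uncertainty"
```

Feeding the 100 × 1000 curve family from Step 11 into `acceptability` returns the "indeterminate" branch in the situation described in the text, where the criterion falls inside the uncertainty interval.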

4.6 Applying Module C of the Probabilistic Validation Methodology (Fig. 1): Global Importance Ranking and Validity Improvement.

This module helps identify options for gradually reducing the epistemic uncertainty and improving the validity of the simulation predictions up to a level that can be considered acceptable for the application of interest. As the fire simulation predictions are only some of the inputs to the Fire PRA model (and the full-scope plant PRA model), their epistemic uncertainty, from a practical viewpoint, should only be considered for reduction if it is identified as a significant contributor to the total uncertainty in the application output. In this case study, it is assumed that the epistemic uncertainty associated with at least one KPM of interest, predicted with the CFAST model, has been identified as a significant contributing factor to the total uncertainty for the PRA risk estimate. Step 13 of the PV methodology should then be performed to identify the most significant epistemic uncertainties contributing to the KPM uncertainty.

4.6.1 Step 13 in Fig. 1: Global Importance Ranking of the Potentially Dominant Sources of Epistemic Uncertainty.

This step provides an importance ranking analysis to rank and identify the sources of epistemic uncertainty that contribute the most to the uncertainty in the KPMs of interest. This step has the following two substeps.

Substep 13.1: Ranking the magnitudes of epistemic uncertainty. For each KPM of interest, this substep compares the portion of its epistemic uncertainty contributed by the CFAST model input parameters against the portion contributed by the model-form uncertainty. As an example, for KPM #1, this substep compares the area within the inner, dashed-line p-box in Fig. 13 (representing the magnitude of epistemic uncertainty associated with the input parameters of the CFAST model) against the area between this p-box and the outer p-box in Fig. 13 (representing the magnitude of the model-form uncertainty)4. As can be seen in Fig. 13, for KPM #1, the epistemic uncertainty associated with the input parameters of the simulation model is the dominant contributor. This comparison can be applied repeatedly to the other KPMs of interest. The comparison results obtained in this case study indicated that an efficient strategy to reduce the uncertainty in the KPMs of interest should first target the epistemic uncertainty in the CFAST input parameters. This motivates ranking the contributions of the epistemic uncertainties in the input parameters to better inform decision-makers as to which input parameters should be prioritized to reduce the KPM uncertainty most efficiently. This is done in Substep 13.2 below.

Substep 13.2: Ranking the epistemic input uncertainties. This substep uses the pinching approach of Ferson and Tucker [49] to rank the input parameter epistemic uncertainties with respect to their influence on the uncertainty in each of the KPMs of interest. Using the auxiliary variable method [47] discussed in Sec. 3.2, an auxiliary variable, X1-Aux, that is uniformly distributed on [0, 1], was introduced to explicitly represent the aleatory uncertainty in the Gamma p-box of X1. As a result, the list of aleatory uncertain input parameters considered in this importance ranking includes X2, X3, X4, X8, and the auxiliary variable X1-Aux. The epistemic uncertain input parameters that need to be ranked in this substep include X9 and the α and β parameters of the Gamma p-box of X1 (denoted as αX1 and βX1). Other input parameters to CFAST are kept at fixed values. See Table 4 (Sec. 4.4.4) for these input parameter uncertainties.
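The pinching calculation can be sketched with a toy model: fix ("pinch") one epistemic parameter to a point value, recompute the output p-box, and report the percent reduction in p-box area. The Gamma stand-in for the CFAST KPM response, the parameter bounds, and the quantile-wise area approximation below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pbox_area(curves):
    """Quantile-wise approximation of the area between the bounding cdfs of
    a family of sorted sample arrays (one row per epistemic outer sample)."""
    return float(np.mean(curves.max(axis=0) - curves.min(axis=0)))

def kpm_curves(alpha_bounds, n_out=50, n_in=500):
    """Toy stand-in for the CFAST KPM: a Gamma-distributed response whose
    shape parameter alpha carries the epistemic uncertainty."""
    out = np.empty((n_out, n_in))
    for i in range(n_out):
        alpha = rng.uniform(*alpha_bounds)                   # epistemic draw
        out[i] = np.sort(rng.gamma(alpha, 50.0, size=n_in))  # aleatory draws
    return out

base = pbox_area(kpm_curves((1.0, 4.0)))     # epistemic uncertainty present
pinched = pbox_area(kpm_curves((2.5, 2.5)))  # "pinch" alpha to a point value
S = 100.0 * (1.0 - pinched / base)           # percent reduction in KPM uncertainty
```

Repeating this for each epistemic input (αX1, βX1, X9 in the case study) and comparing the resulting S values produces the kind of ranking reported in Table 7.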

The sensitivity measures (SiKPM) for the three input epistemic uncertainties with respect to their impact on the three KPMs of interest, calculated using the computational procedure provided in Sec. 3.4, are reported in Table 7 below.

Table 7

SiKPMk results for the three input epistemic uncertainties with respect to their influence on the uncertainties in the KPMs of interest

Sensitivity measure, SiKPMk (%)
KPMk | Description | αX1 | βX1 | X9
1 | Maximum temperature inside the cable jackets of cable tray A | 83.93 | 5.11 | 5.83
2 | Maximum temperature inside the cable jackets of cable tray B | 40.53 | 1.34 | 77.31
3 | Maximum temperature inside the cable jackets of cable tray C | 65.48 | 2.78 | 9.34

The results in Table 7 show that αX1 is ranked as the most important source of input epistemic uncertainty when considering the KPMs associated with cable trays A and C. The epistemic uncertainty in αX1 is also an important contributor to the uncertainty in the KPM associated with cable tray B (based on its corresponding sensitivity measure values), though parameter X9 is the most important epistemic uncertainty among the inputs when considering the KPM associated with cable tray B. This observation can be explained by the locations of the cable trays inside the fire compartment (Fig. 8). Cable tray A runs right above and along the ignition source cabinet; therefore, when the fire location (X9) is varied, the damage target associated with cable tray A is always right above the fire. Cable tray C is located far away from the ignition source; hence, the impact of the fire location, when it is varied along the ignition source cabinet, on the KPM associated with cable tray C is negligible. On the other hand, cable tray B is located close to the ignition source cabinet, installed at a perpendicular angle. Therefore, the minimum distance from cable tray B to the ignition source, when the fire location is varied along the ignition source cabinet, also varies significantly. For these reasons, X9 has a large impact on the KPM for cable tray B, while it has a negligible impact on cable trays A and C.

The SiKPMk sensitivity measure, as can be seen from its calculation in Eq. (1) in Sec. 3.4, estimates the value of having additional information about the input epistemic uncertainty ei in terms of the percent reduction in the uncertainty of KPMk. Therefore, the SiKPMk results obtained in Step 13.2 can help inform decision makers as to which input epistemic uncertainty sources should receive prioritized additional research and experiment efforts to reduce their epistemic uncertainties. The importance of αX1 (the shape parameter of the Gamma distribution family of X1), shown in Table 7, helps explain recent efforts to refine and characterize the maximum HRR of electrical enclosure fires more realistically in the nuclear domain and Fire PRA practices [66]. On the other hand, the observed importance of X9 indicates that, in reducing the epistemic uncertainty associated with fire location, more thorough data collection may need to be conducted.

Note that, while not demonstrated in this case study, potential interaction effects among the input epistemic uncertainties can be considered by “pinching” several input epistemic uncertainties simultaneously. Note also that the importance ranking in this case study has a limited scope, as it focused only on demonstrating the applicability of the SiKPMk sensitivity measure for ranking input epistemic uncertainties with respect to their influence on the simulation outputs. Future work will perform the ranking based on the influence on the uncertainty in the PRA risk estimate (i.e., the application output of interest). Since a global importance ranking for a full-scope PRA model of an NPP would require significantly larger computing resources, a more efficient computational procedure for calculating the sensitivity measure should be investigated.

4.6.2 Step 14 in Fig. 1: Collect New Data/Revise Model to Improve the Current Degree of Validity.

This step is reserved for validity improvement activities, the details of which depend on the importance ranking in Step 13. In this case study, this step may first involve, for example, collecting new empirical data to reduce the epistemic uncertainty in αX1 and X9. After that, the uncertainty in the risk estimate can be updated and checked against the acceptability criterion. This process can be conducted iteratively to gradually improve the validity of the simulation predictions up to a point that satisfies the acceptability requirements, while remaining cost-effective by prioritizing the limited resources available. Demonstration of this step, however, is not within the scope of this case study.

5 Conclusions

In this work, the PV methodology [34] (Fig. 1) is computationalized, embedded in an I-PRA framework (Fig. 2), and applied to evaluate the validity of a hierarchical fire simulation model used in the setting of NPP Fire PRA. The step-by-step illustration and results of the case study have been discussed for a hypothetical Fire PRA model. The validity of a fire simulation model in this case study is determined by evaluating: (i) the degree of confidence in the simulation prediction, measured by the magnitude of its epistemic uncertainty; and (ii) the result of an acceptability evaluation that determines whether the total uncertainty (including both aleatory and epistemic uncertainties) associated with the simulation prediction and the corresponding degree of confidence are acceptable for the Fire PRA application of interest. A comprehensive uncertainty analysis, embedded in Module A of the PV methodology (Fig. 1), was used to identify, characterize, propagate, and aggregate all dominant sources of aleatory and epistemic uncertainty contributing to the uncertainty in the fire simulation prediction; these were then combined with other sources of uncertainty in the hypothetical Fire PRA model to quantify the uncertainty in the Fire PRA output, i.e., the core damage risk estimate. A comparison between the uncertainty in the core damage risk and an acceptability criterion was made to demonstrate the applicability of the acceptability evaluation feature (Module B in Fig. 1) of the PV methodology. The case study also demonstrated that an importance ranking of contributing sources of epistemic uncertainty can be conducted (using Module C of the PV methodology, Fig. 1) to inform decision-makers as to where additional research and empirical study should be prioritized to reduce the epistemic uncertainty most efficiently. While the PV methodology is illustrated in the Fire PRA context, the methodology itself is applicable to various simulation applications in different domains.

Although the case study in this work has been designed to include as many aspects of the PV methodology as practical, due to our limited resources, it is not all inclusive. Among the significant omissions are: (a) an explicit consideration and quantification of numerical approximation uncertainties (as discussed in Step 5 of the PV methodology in Ref. [34]) that can contribute to the total uncertainty in the fire simulation prediction; (b) a quantification of model-form uncertainty using the theoretical causal framework (Step 7B of the PV methodology in Ref. [34]); (c) a Bayesian updating to update the uncertainty in the simulation prediction using additionally available data (Step 9 of the PV methodology in Ref. [34]); (d) a consideration of multiple model forms (Step 10 of the PV methodology in Ref. [34]); and (e) treatment of dependencies (if any) among epistemic and aleatory uncertainty sources, such as those that may potentially exist among the input parameter uncertainties. In addition, the results of the PV methodology have not been benchmarked against other existing validation methods currently in use in several domains, as reviewed in Ref. [34]. Nor has the methodology been applied in a case where abundant experimental validation data are available to compare its results against empirical validation approaches. These topics will be considered in future work.

Despite these limitations, the case study in this work brings unique contributions and benefits for the Fire PRA of NPPs. Most significantly, the work has demonstrated a systematic and scientifically justifiable methodology to facilitate the validity assessment of simulation models used in Fire PRA in contexts where experimental validation data are scarce (and empirical validation is not applicable). In addition, the work has advanced the uncertainty analysis practice in Fire PRA by providing a comprehensive uncertainty analysis framework. Furthermore, the study was the first to include a two-step sensitivity analysis approach for Fire PRA: first to screen out insignificant sources of uncertainty using the Morris EE analysis method, and then to rank the importance of the unscreened sources of epistemic uncertainty to identify the most significant contributors. Notably, the importance ranking is able to handle different types of uncertainty, including pure aleatory, pure epistemic, and mixed aleatory-epistemic uncertainties. From the PRA quantification perspective, the Fire I-PRA computational platform developed in this work, by utilizing the simulation-informed approach [64], can contribute to a more explicit and accurate treatment of dependencies at multiple levels of Fire PRA (e.g., physical inputs, multiple targets, multiple spurious actuations, and PRA basic events). In all previous I-PRA studies, the PV methodology was not fully integrated into the probabilistic interface of the I-PRA framework to help validate the underlying simulation models. This integration is done for the first time in this paper for validating fire simulation models in Fire I-PRA. To computationalize Fire I-PRA, equipped with the PV methodology, this research has developed an automated RAVEN-based computational environment that can integrate the plant PRA scenarios, underlying fire simulations, and features of the PV methodology. The RAVEN-based computational platform helps facilitate the sampling-based uncertainty quantification for Fire PRA. Overall, the methodological and computational developments in this work help improve the realism of Fire PRA results and better support decision making in the risk-informed regulatory framework in which PRA results are an important input.

Acknowledgment

This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program in conjunction with the National Center for Supercomputing Applications and is supported by funds from the University of Illinois at Urbana-Champaign. The authors would like to thank all members of the Socio-Technical Risk Analysis (SoTeRiA) Laboratory for their feedback on this paper.

Funding Data

  • U.S. Department of Energy's Office of Nuclear Energy through the Nuclear Energy University Program (NEUP) Project #19-16298: I-PRA Decision-Making Algorithm and Computational Platform to Develop Safe and Cost-Effective Strategies for the Deployment of New Technologies (Federal Grant #DE-NE0008885; Funder ID: 10.13039/100006147).

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Footnotes

2. The validation domain is defined by the boundary of the physical experiments that have been conducted in association with the system of interest. This boundary can be represented as a multi-dimensional region where each dimension is associated with an input or control parameter that characterizes the system or its surroundings in those experiments. This boundary would normally represent the apparent limit where the model accuracy has been assessed.

3. The application domain refers to the set of conditions that the system of interest could be exposed to, and which the simulation model is supposed to address. These conditions may include environmental settings, initial conditions, boundary conditions, etc. Note that the application domain is not a subset of the validation domain, though there can be an overlap between the two domains.

4. Note that model-form uncertainty is considered a source of epistemic uncertainty. Also, the numerical approximation uncertainty was skipped in this comparison because this case study considers it as an implicit factor (in Step 5).

References

1.
Siu
,
N.
,
Coyne
,
K.
, and
Melly
,
N.
,
2017
, “
Fire PRA Maturity and Realism: A Technical Evaluation (White Paper)
,” Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission.
2.
National Fire Protection Association
,
2001
, “
Performance-Based Standard for Fire Protection for Light Water Reactor Electric Generating Plants (NFPA 805)
,” National Fire Protection Association, Massachusetts, USA.
3.
U.S. Nuclear Regulatory Commission
,
2005
, “
EPRl/NRC-RES Fire PRA Methodology for Nuclear Power Facilities
, Volume 1: Summary and Overview,” ADAMS Accession No. ML052580075, Report No. EPRI 1011989 and NUREG/CR-6850.
4.
U.S. Nuclear Regulatory Commission
,
2005
, “
EPRl/NRC-RES Fire PRA Methodology for Nuclear Power Facilities
, Volume 2: Detailed Methodology,” ADAMS Accession No. ML052580118, Report No. EPRI 1011989 and NUREG/CR-6850.
5.
Canavan
,
K.
, and
Hyslop
,
J. S.
,
2010
, “
Fire Probabilistic Risk Assessment Methods Enhancements
,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC 20555-0001, Report No. NUREG/CR-6850 (Supplement 1), EPRI 1019259.
6.
McGrattan
,
K.
,
2008
, “Cable Response to Live Fire (CAROLFIRE) Volume 3: Thermally-Induced Electrical Failure (THIEF) Model,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC 20555-0001, Report No. NUREG/CR-6931, NISTIR 7472.
7.
Nowlen
,
S.
, and
Wyant
,
F.
,
2008
, “
Cable Response to Live Fire (CAROLFIRE)
Volume 2: Cable Fire Response Data for Fire Model Improvement,” U.S. Nuclear Regulatory Commission (USNRC), Office of Nuclear Regulatory Research (RES), Washington, DC, NUREG/CR-6931, Vol. 2.
8.
Nowlen
,
S.
, and
Wyant
,
F.
,
2008
, “Cable Response to Live Fire (CAROLFIRE), Volume 1: Test Descriptions and Analysis of Circuit Response Data,”
U.S. Nuclear Regulatory Commission (USNRC), Office of Nuclear Regulatory Research (RES)
, Washington, DC, NUREG/CR-6931, Vol. 1.
9.
Lewis
,
S.
, and
Cooper
,
S.
,
2012
, “
EPRI/NRC-RES Fire Human Reliability Analysis Guidelines
,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC 20555-0001,
NUREG-1921.
10.
Lindeman
,
A.
, and
Cooper
,
S.
,
2020
, “
EPRI/NRC-RES Fire Human Reliability Analysis Guidelines—Qualitative Analysis for Main Control Room Abandonment Scenarios
,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC 20555-0001,
NUREG-1921 Supplement 1
.
11.
McGrattan
,
K. B.
,
Lock
,
A. J.
,
Marsh
,
N. D.
,
Nyden
,
M. R.
,
Bareham
,
S.
, and
Michael
,
P.
,
2012
, “
Cable Heat Release, Ignition, and Spread in Tray Installations During Fire (CHRISTIFIRE)
, Phase 1: Horizontal Trays,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, (NUREG/CR-7010, Volume 1).
12.
McGrattan
,
K.
,
Scott
,
B.
, and
Stroup
,
D.
,
2013
, “
Cable Heat Release, Ignition, and Spread in Tray Installations During Fire (CHRISTIFIRE. Phase 2: Vertical Shafts and Corridors
,”
U.S. Nuclear Regulatory Commission Office of Nuclear Regulatory Research
, Washington, DC, NUREG/CR-7010, Volume 2.
13.
U.S. Nuclear Regulatory Commission
,
2012
, “
Direct Current Electrical Shorting in Response to Exposure Fire (DESIREE-Fire): Test Results
,” U.S Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC,
NUREG/CR-7100
.
14. Taylor, G., Melly, N., Woods, H., Pennywell, T., Olivier, T., and Lopez, C., 2013, “Electrical Cable Test Results and Analysis During Fire Exposure (ELECTRA-FIRE), a Consolidation of Three Major Fire-Induced Circuit and Cable Failure Experiments Performed Between 2001 and 2011,” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG-2128.
15. Subudhi, M., and Martinez-Guridi, G., 2014, “Joint Assessment of Cable Damage and Quantification of Effects From Fire (JACQUE-FIRE), Volume 2: Expert Elicitation Exercise for Nuclear Power Plant Fire-Induced Electrical Circuit Failure,” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG/CR-7150, Volume 2.
16. U.S. Nuclear Regulatory Commission, 2012, “Joint Assessment of Cable Damage and Quantification of Effects From Fire (JACQUE-FIRE), Volume 1: Phenomena Identification and Ranking Table (PIRT) Exercise for Nuclear Power Plant Fire-Induced Electrical Circuit Failure,” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG/CR-7150, Volume 1.
17. Subudhi, M., 2017, “Joint Assessment of Cable Damage and Quantification of Effects From Fire (JACQUE-FIRE), Volume 3: Technical Resolution to Open Issues on Nuclear Power Plant Fire-Induced Circuit Failure,” U.S. Nuclear Regulatory Commission/Brookhaven National Laboratory/Electric Power Research Institute, Washington, DC, Report Nos. NUREG/CR-7150, Volume 3, BNL-NUREG-98204-2012, and EPRI 3002009214.
18. Melly, N., and Lindeman, A., 2015, “Nuclear Power Plant Fire Ignition Frequency and Non-Suppression Probability Estimation Using the Updated Fire Events Database-United States Fire Event Experience Through 2009,” U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, Report Nos. NUREG-2169 and EPRI 3002002936.
19. Electric Power Research Institute (EPRI), 2016, “Fire Events Database Update for the Period 2010–2014: Revision 1,” EPRI, Palo Alto, CA, Report No. 3002005302.
20. McGrattan, K., Bareham, S., and Stroup, D., 2016, “Heat Release Rates of Electrical Enclosure Fires (HELEN-FIRE),” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG/CR-7197.
21. Taylor, G., Cooper, S., D'Agostino, A., Melly, N., and Cleary, T., 2016, “Determining the Effectiveness, Limitations, and Operator Response for Very Early Warning Fire Detection Systems in Nuclear Facilities (DELORES-VEWFIRE),” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG-2180.
22. U.S. Nuclear Regulatory Commission, 2015, “Refining and Characterizing Heat Release Rates From Electrical Enclosures During Fire (RACHELLE-FIRE), Volume 1: Peak Heat Release Rates and Effect of Obstructed Plume,” Office of Nuclear Regulatory Research, Washington, DC, Report No. NUREG-2178.
23. U.S. Nuclear Regulatory Commission, 2019, “Refining and Characterizing Heat Release Rates From Electrical Enclosures During Fire, Volume 2: Fire Modeling Guidance for Electrical Cabinets, Electric Motors, Indoor Dry Transformers, and the Main Control Board,” Office of Nuclear Regulatory Research, Washington, DC, Report Nos. NUREG-2178, Volume 2 and EPRI 3002016052.
24. U.S. Nuclear Regulatory Commission, 2020, “Methodology for Modeling Fire Growth and Suppression Response of Electrical Cabinet Fires in Nuclear Power Plants,” Office of Nuclear Regulatory Research, Washington, DC, Report Nos. NUREG-2230 and EPRI 3002016051.
25. Sakurahara, T., Mohaghegh, Z., Reihani, S., Kee, E., Brandyberry, M., and Rodgers, S., 2018, “An Integrated Methodology for Spatio-Temporal Incorporation of Underlying Failure Mechanisms Into Fire Probabilistic Risk Assessment of Nuclear Power Plants,” Reliab. Eng. Syst. Saf., 169, pp. 242–257. 10.1016/j.ress.2017.09.001
26. Bucknor, M., Denning, R., and Aldemir, T., 2013, “Dynamic Uncertainty Quantification in Fire Progression Analysis,” Proceedings of the ANS PSA 2013 International Topical Meeting on Probabilistic Safety Assessment and Analysis (CD-ROM), Columbia, SC, Sept. 22–26.
27. Kloos, M., Hartung, J., Peschke, J., and Roewekamp, M., 2014, “Advanced Probabilistic Dynamics Analysis of Fire Fighting Actions in a Nuclear Power Plant With the MCDET Tool,” Safety, Reliability and Risk Analysis: Beyond the Horizon, R. Steenbergen, P. VanGelder, S. Miraglia, and A. Vrouwenvelder, eds., Taylor & Francis Group, London, pp. 555–562.
28. Forell, B., Peschke, J., and Kloos, M., 2015, “Analysis of a Fire Scenario by Combination of CFD Fire Simulation and the Monte Carlo Dynamic Event Tree Tool,” The 4th Magdeburg Fire and Explosion Protection Day (Magdeburger Brand- und Explosionsschutztag), Magdeburg, Germany, Mar. 26–27.
29. Coyne, K., and Siu, N., 2013, “Simulation-Based Analysis for Nuclear Power Plant Risk Assessment: Opportunities and Challenges,” Proceedings of the ANS Embedded Conference on Risk Management for Complex Socio-Technical Systems, Washington, DC, Nov. 10–14.
30. Mosleh, A., 2014, “PRA: A Perspective on Strengths, Current Limitations, and Possible Improvements,” Nucl. Eng. Technol., 46(1), pp. 1–10. 10.5516/NET.03.2014.700
31. Bui, H., Sakurahara, T., Pence, J., Reihani, S., Kee, E., and Mohaghegh, Z., 2019, “An Algorithm for Enhancing Spatiotemporal Resolution of Probabilistic Risk Assessment to Address Emergent Safety Concerns in Nuclear Power Plants,” Reliab. Eng. Syst. Saf., 185, pp. 405–428. 10.1016/j.ress.2019.01.004
32. Mohaghegh, Z., Kee, E., Reihani, S. A., Kazemi, R., Johnson, D., Grantom, R., et al., 2013, “Risk-Informed Resolution of Generic Safety Issue 191,” Proceedings of the 2013 International Topical Meeting on Probabilistic Safety Assessment and Analysis, Columbia, SC, Sept. 22–26.
33. Sakurahara, T., Mohaghegh, Z., Reihani, S., and Kee, E., 2018, “Methodological and Practical Comparison of Integrated Probabilistic Risk Assessment (I-PRA) With the Existing Fire PRA of Nuclear Power Plants,” Nucl. Technol., 204(3), pp. 354–377. 10.1080/00295450.2018.1486159
34. Bui, H., Sakurahara, T., Reihani, S., Kee, E., and Mohaghegh, Z., 2023, “Probabilistic Validation: Theoretical Foundation and Methodological Platform,” ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 9(2), p. 021204. 10.1115/1.4056883
35. U.S. Nuclear Regulatory Commission, 2007, “Verification and Validation of Selected Fire Models for Nuclear Power Plant Applications, Volume 5: Consolidated Fire Growth and Smoke Transport Model (CFAST),” Office of Nuclear Regulatory Research, Washington, DC, Report No. NUREG-1824.
36. U.S. Nuclear Regulatory Commission, 2007, “Verification and Validation of Selected Fire Models for Nuclear Power Plant Applications, Volume 7: Fire Dynamics Simulation (FDS),” Office of Nuclear Regulatory Research, Washington, DC, Report No. NUREG-1824.
37. Peacock, R. D., McGrattan, K., Forney, G. P., and Reneke, P. A., 2016, “CFAST—Consolidated Fire and Smoke Transport (Version 7)—Volume 1: Technical Reference Guide,” National Institute of Standards and Technology, Gaithersburg, MD.
38. McGrattan, K. B., McDermott, R. J., Weinschenk, C. G., and Forney, G. P., 2013, “Fire Dynamics Simulator (Version 6) User's Guide,” National Institute of Standards and Technology, Gaithersburg, MD.
39. Alfonsi, A., Rabiti, C., Mandelli, D., Cogliati, J., Wang, C., Talbot, P. W., Maljovec, D. P., and Smith, C., 2019, “RAVEN Theory Manual,” Idaho National Laboratory, Idaho Falls, ID.
40. Bui, H., Sakurahara, T., Reihani, S., Biersdorf, J., and Mohaghegh, Z., 2020, “I-PRA Uncertainty Importance Ranking to Enhance Fire PRA Realism for Nuclear Power Plants,” Transactions of the American Nuclear Society (ANS) Winter Meeting and Nuclear Technology Expo, Chicago, IL. 10.13182/T123-33477
41. Biersdorf, J. M., Bui, H., Sakurahara, T., Reihani, S., LaFleur, C., Luxat, D. L., Prescott, S. R., and Mohaghegh, Z., 2020, “Risk Importance Ranking of Fire Data Parameters to Enhance Fire PRA Model Realism,” Idaho National Laboratory, Idaho Falls, ID.
42. Bui, H., Sakurahara, T., Reihani, S., Kee, E., and Mohaghegh, Z., 2020, “Spatiotemporal Integration of an Agent-Based First Responder Performance Model With a Fire Hazard Propagation Model for Probabilistic Risk Assessment of Nuclear Power Plants,” ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 6(1), p. 011011. 10.1115/1.4044793
43. Herman, J. D., and Usher, W., 2017, “SALib: An Open-Source Python Library for Sensitivity Analysis,” J. Open Source Software, 2(9), p. 97. 10.21105/joss.00097
44. Morris, M. D., 1991, “Factorial Sampling Plans for Preliminary Computational Experiments,” Technometrics, 33(2), pp. 161–174. 10.1080/00401706.1991.10484804
45. Campolongo, F., Cariboni, J., and Saltelli, A., 2007, “An Effective Screening Design for Sensitivity Analysis of Large Models,” Environ. Modell. Software, 22(10), pp. 1509–1518. 10.1016/j.envsoft.2006.10.004
46. Hu, Z., Mahadevan, S., and Du, X., 2016, “Uncertainty Quantification of Time-Dependent Reliability Analysis in the Presence of Parametric Uncertainty,” ASCE-ASME J. Risk Uncertainty Eng. Syst. Part B: Mech. Eng., 2(3), p. 031005. 10.1115/1.4032307
47. Sankararaman, S., and Mahadevan, S., 2013, “Separating the Contributions of Variability and Parameter Uncertainty in Probability Distributions,” Reliab. Eng. Syst. Saf., 112, pp. 187–199. 10.1016/j.ress.2012.11.024
48. Voyles, I. T., and Roy, C. J., 2015, “Evaluation of Model Validation Techniques in the Presence of Aleatory and Epistemic Input Uncertainties,” AIAA Paper No. 2015-1374. 10.2514/6.2015-1374
49. Ferson, S., and Tucker, W. T., 2006, “Sensitivity Analysis Using Probability Bounding,” Reliab. Eng. Syst. Saf., 91(10–11), pp. 1435–1442. 10.1016/j.ress.2005.11.052
50. Salley, M. H., and Wachowiak, R., 2012, “Nuclear Power Plant Fire Modeling Analysis Guidelines (NPP FIRE MAG),” U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Washington, DC, Final Report No. NUREG-1934.
51. Nowlen, S., Wyant, F., and McGrattan, K., 2008, “Cable Response to Live Fire (CAROLFIRE),” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG/CR-6931.
52. Peacock, R. D., McGrattan, K., Forney, G. P., and Reneke, P. A., 2021, “CFAST—Consolidated Fire and Smoke Transport (Version 7)—Volume 1: Technical Reference Guide,” NIST Technical Note 1889v1, National Institute of Standards and Technology, Gaithersburg, MD.
53. American Society of Mechanical Engineers, 2012, “An Illustration of the Concepts of Verification and Validation in Computational Solid Mechanics (ASME V&V 10.1-2012),” American Society of Mechanical Engineers, New York.
54. U.S. Nuclear Regulatory Commission, 2008, “A Phenomena Identification and Ranking Table (PIRT) Exercise for Nuclear Power Plant Fire Modeling Applications,” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG/CR-6978.
55. Wadsö, L., Karlsson, J., and Tammo, K., 2012, “Thermal Properties of Concrete With Various Aggregates,” Cement and Concrete Research. https://api.semanticscholar.org/CorpusID:3579537
56. U.S. Nuclear Regulatory Commission, 2017, “Guidance on the Treatment of Uncertainties Associated With PRAs in Risk-Informed Decision Making,” Office of Nuclear Regulatory Research, Washington, DC, Report No. NUREG-1855, Revision 1.
57. Crespo, L. G., and Kenny, S. P., 2022, “Synthetic Validation of Responses to the NASA Langley Challenge on Optimization Under Uncertainty,” Mech. Syst. Signal Process., 164, p. 108253. 10.1016/j.ymssp.2021.108253
58. Cheng, W.-C., 2022, “Finite Element Based Probabilistic Physics-of-Failure Analysis to Estimate Piping System Failure Rates for Risk Assessment of Nuclear Power Plants,” University of Illinois at Urbana-Champaign (UIUC), Champaign, IL.
59. Petzold, L. R., 1982, “Description of DASSL: A Differential/Algebraic System Solver,” Sandia National Laboratories, Livermore, CA.
60. Brenan, K. E., Campbell, S. L., and Petzold, L. R., 1995, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, SIAM, Philadelphia, PA.
61. Peacock, R. D., Forney, G. P., and Reneke, P. A., 2021, “CFAST—Consolidated Model of Fire Growth and Smoke Transport (Version 7)—Volume 3: Verification and Validation Guide,” National Institute of Standards and Technology, Gaithersburg, MD.
62. American Society for Testing and Materials, 2011, “ASTM Standard Guide for Evaluating the Predictive Capability of Deterministic Fire Models (E1355-11),” ASTM International, West Conshohocken, PA.
63. Taylor, G. J., 2012, “Evaluation of Critical Nuclear Power Plant Electrical Cable Response to Severe Thermal Fire Conditions,” Master's thesis, University of Maryland, College Park, MD.
64. Sakurahara, T., Schumock, G., Reihani, S., Kee, E., and Mohaghegh, Z., 2019, “Simulation-Informed Probabilistic Methodology for Common Cause Failure Analysis,” Reliab. Eng. Syst. Saf., 185, pp. 84–99. 10.1016/j.ress.2018.12.007
65. U.S. Nuclear Regulatory Commission, 2018, “Regulatory Guide 1.174 (Revision 3): An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis,” Office of Nuclear Regulatory Research, Washington, DC.
66. U.S. Nuclear Regulatory Commission, 2015, “Refining and Characterizing Heat Release Rates From Electrical Enclosures During Fire (RACHELLE-FIRE), Volume 1: Peak Heat Release Rates and Effect of Obstructed Plume,” U.S. Nuclear Regulatory Commission, Washington, DC, Report No. NUREG-2178.