
The training points in Figure 6.5 are labeled by their value of L to emphasize that these training points do not all share the same value of L for which δ(·) has been plotted.

[Figure: model bias δ (°C) versus applied heat flux q (W/m²); expected value and 80% uncertainty band shown together with data points labeled L = 1.27 cm, L = 1.9 cm, and L = 2.54 cm, and the application configuration marked.]

Figure 6.5: Conditional expected value and uncertainty bands for the model inadequacy function, plotted as a function of applied heat flux for L = 1.9 cm

Otherwise, the prior distribution for θ, the response quantity, and the distribution of the errors $\varepsilon_i$ are all the same as in approach one. The only differences between the two approaches are the inclusion of the model inadequacy function in approach two and the use of a temperature-dependent model for k in approach one. As with the first approach, Markov chain Monte Carlo sampling is used to construct the posterior distribution for θ.
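As an illustration of this step, the following is a minimal random-walk Metropolis sketch in Python; the function `log_post` is a hypothetical stand-in for the log of the unnormalized posterior density of the calibration parameters θ, and the actual MCMC implementation used in the analysis may differ.

```python
import numpy as np

def metropolis(log_post, theta0, step, n_samples, rng=None):
    """Random-walk Metropolis: returns an array of posterior samples of theta."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples[i] = theta
    return samples
```

In practice one would also discard an initial burn-in portion of the chain and tune the proposal step size to obtain a reasonable acceptance rate.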

Section 6.1.4 discusses how these results can be used to estimate the probability of failure of a device in the application domain, along with confidence bounds for the assessment.

6.1.4 Assessment of regulatory compliance

The analysts are asked to assess whether the device satisfies the regulatory requirement, along with a statement of confidence in this assessment. The probability of failure for the device in the application configuration is defined as

$$p_f = P\left[\,T(t = 1000\,\mathrm{s}) > 900\,^\circ\mathrm{C}\,\right], \tag{6.4}$$

and regulatory compliance is said to be achieved if the probability of failure is less than 0.01.

The analysts are also asked to provide a (preferably quantitative) “level of confidence” about whether or not the regulatory condition will be met.

This objective is addressed below separately for each of the two calibration approaches.

The calibrated models are used to predict the probability of failure, and in each case a variety of uncertainty sources are taken into account to construct a representation of the uncertainty in this prediction.

Approach 1

When using the results of Bayesian calibration for probability-of-failure prediction, it is important to maintain the distinction between aleatory variability (in this case characterized by the random variables k and ρC) and the residual uncertainty in the calibration parameters represented by the posterior distribution f(θ | d). In this case, each of the two calibration parameters represents a location parameter of the distribution governing the variability in k or ρC.

It is easy to see that the probability of failure, conditional on the distribution parameters governing the random variables k and ρC, can be expressed as

$$p_f\left(\beta_0, \beta_1, \sigma_k^2, \mu_{\rho C}, \sigma_{\rho C}^2\right) = \int_{\Omega} f(k, \rho C)\, dk\, d\rho C, \tag{6.5}$$

where Ω is the failure region, which is given by

$$G(k, \rho C, \mathbf{s}) > 900\,^\circ\mathrm{C}, \tag{6.6}$$

f(k, ρC) is the joint probability density function for k and ρC, which are treated as independent random variables with probability distributions $k \sim N(\beta_0 + \beta_1 T, \sigma_k^2)$ and $\rho C \sim N(\mu_{\rho C}, \sigma_{\rho C}^2)$; $\mathbf{s} = (q = 3500\,\mathrm{W/m^2},\, L = 0.019\,\mathrm{m})$, which defines the application configuration; and T is the representative temperature for the application region.

There is a small problem in determining T, because no experimental observations are available for the application configuration. The procedure used here is to apply the thermal model with a nominal value of k (the mean of the material characterization data) and the given value of $\mu_{\rho C}$ to predict the temperature at t = 500 seconds, and to take this prediction as T. Note that with this procedure T depends on $\mu_{\rho C}$.
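To make the procedure concrete, here is a sketch of the conditional failure probability of Eq. (6.5) estimated by simple Monte Carlo. The thermal model G is not restated in this section, so `slab_temperature` below is an assumed stand-in (the classical series solution for a slab heated by a constant flux at one face and insulated at the other, with an assumed initial temperature of 25 °C); the actual model, initial condition, and nominal k should be taken from the problem specification.

```python
import numpy as np

Q, L_APP, T_FAIL = 3500.0, 0.019, 900.0   # application configuration s and threshold

def slab_temperature(k, rhoC, q=Q, L=L_APP, t=1000.0, x=0.0, T0=25.0, n_terms=50):
    """Temperature at depth x and time t; an assumed stand-in for G(k, rhoC, s)."""
    k, rhoC = np.asarray(k, dtype=float), np.asarray(rhoC, dtype=float)
    tau = (k / rhoC) * t / L**2            # dimensionless time
    xi = x / L                             # dimensionless depth
    n = np.arange(1, n_terms + 1)
    series = np.sum(np.exp(-n**2 * np.pi**2 * tau[..., None])
                    * np.cos(n * np.pi * xi) / n**2, axis=-1)
    theta = tau + 1.0 / 3.0 - xi + xi**2 / 2.0 - 2.0 / np.pi**2 * series
    return T0 + (q * L / k) * theta

def conditional_pf(beta0, beta1, sigma2_k, mu_rhoC, sigma2_rhoC, k_nominal,
                   n_mc=10_000, rng=None):
    """Monte Carlo estimate of Eq. (6.5), conditional on the given parameters."""
    rng = np.random.default_rng() if rng is None else rng
    # Representative temperature: nominal k and the given mu_rhoC at t = 500 s.
    T_rep = slab_temperature(k_nominal, mu_rhoC, t=500.0)
    # Aleatory draws of the material properties.
    k = rng.normal(beta0 + beta1 * T_rep, np.sqrt(sigma2_k), n_mc)
    rhoC = rng.normal(mu_rhoC, np.sqrt(sigma2_rhoC), n_mc)
    # Fraction of draws that fall in the failure region of Eq. (6.6).
    return np.mean(slab_temperature(k, rhoC) > T_FAIL)
```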

With the expression of Eq. (6.5), it is possible to define the posterior distribution of $p_f$, which represents the uncertainty in the failure probability based on the residual uncertainty that remains after calibration of the calibration parameters $\theta = (\beta_0, \mu_{\rho C})$. However, this notion can be taken a step further: note that the variances $\sigma_k^2$ and $\sigma_{\rho C}^2$ are not known exactly, but are instead estimated from the finite samples provided by the material characterization data. Using Bayesian inference, it is also possible to incorporate this uncertainty into the uncertainty representation for $p_f$.

Since ρC is independent of temperature, the probability model $\rho C \sim N(\mu_{\rho C}, \sigma_{\rho C}^2)$ has been used. If $\mu_{\rho C}$ and $\sigma_{\rho C}^2$ are given the standard reference prior $\pi(\mu_{\rho C}, \sigma_{\rho C}^2) \propto 1/\sigma_{\rho C}^2$, then the marginal posterior distribution for the variance in light of the material characterization data $d_{\rho C} = (\rho C_1, \ldots, \rho C_{30})$ is given by (Lee, 2004):

$$\sigma_{\rho C}^2 \mid d_{\rho C} \sim S\,\chi^{-2}_{n-1}, \tag{6.7}$$

which is a multiple of what is known as an inverse chi-squared distribution, where $S = \sum_{i=1}^{n} (\rho C_i - \overline{\rho C})^2$.
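Sampling from this posterior is straightforward: if $X \sim \chi^2_{n-1}$, then $S/X$ is a draw from $S\,\chi^{-2}_{n-1}$. A sketch, with `rhoC_data` standing in for the 30 characterization measurements:

```python
import numpy as np

def sample_sigma2_rhoC(rhoC_data, rng):
    """One draw from the scaled inverse chi-squared posterior of Eq. (6.7)."""
    S = np.sum((rhoC_data - rhoC_data.mean()) ** 2)
    return S / rng.chisquare(df=rhoC_data.size - 1)
```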

Recall that the variance for k derives from the linear model of Eq. (6.2), where the $\varepsilon_{i,k}$ are taken to be i.i.d. normal with zero mean and variance $\sigma_k^2$. Given the usual reference prior $\pi(\beta_0, \beta_1, \sigma_k^2) \propto 1/\sigma_k^2$, the marginal posterior distribution for the variance is (Lee, 2004)

$$\sigma_k^2 \mid d_k, \mathbf{T} \sim S_{ee}\,\chi^{-2}_{n-2}, \tag{6.8}$$

where $S_{ee} = S_{yy} - S_{xy}^2 / S_{xx}$, $S_{yy} = \sum (k_i - \bar{k})^2$, $S_{xx} = \sum (T_i - \bar{T})^2$, and $S_{xy} = \sum (T_i - \bar{T})(k_i - \bar{k})$.
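The corresponding draw for $\sigma_k^2$ follows the same pattern, with `k_data` and `T_data` as hypothetical arrays of conductivity measurements and the temperatures at which they were taken:

```python
import numpy as np

def sample_sigma2_k(k_data, T_data, rng):
    """One draw from the scaled inverse chi-squared posterior of Eq. (6.8)."""
    Syy = np.sum((k_data - k_data.mean()) ** 2)
    Sxx = np.sum((T_data - T_data.mean()) ** 2)
    Sxy = np.sum((T_data - T_data.mean()) * (k_data - k_data.mean()))
    See = Syy - Sxy**2 / Sxx          # residual sum of squares of the linear model
    return See / rng.chisquare(df=k_data.size - 2)
```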

Now, the posterior distribution for $p_f$ can be constructed such that it accounts for the residual uncertainty in the calibration parameters, as well as the uncertainty in the material property variances due to their being estimated from finite data. This posterior can be constructed using a two-loop sampling scheme in which the outer loop generates samples of $\beta_0$, $\mu_{\rho C}$, $\sigma_k^2$, and $\sigma_{\rho C}^2$ according to their posterior distributions (this sampling is achieved here via MCMC). For each such realization, the inner loop estimates the corresponding conditional failure probability, defined by Eq. (6.5). The result is a set of samples of $p_f$ that constitutes the posterior uncertainty distribution for the failure probability. The resulting uncertainty distribution for $p_f$ is illustrated in Figure 6.6 below.
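A sketch of the two-loop scheme, continuing the pieces above (it reuses `conditional_pf`, `sample_sigma2_k`, and `sample_sigma2_rhoC`, and assumes posterior draws of θ = (β0, µρC) are available, e.g., from the Metropolis sketch; the handling of β1 is left as an argument here):

```python
import numpy as np

def pf_posterior(theta_samples, beta1, k_data, T_data, rhoC_data,
                 n_inner=10_000, seed=0):
    """Posterior samples of p_f via the two-loop scheme (approach one)."""
    rng = np.random.default_rng(seed)
    pf = np.empty(len(theta_samples))
    for i, (beta0, mu_rhoC) in enumerate(theta_samples):
        s2k = sample_sigma2_k(k_data, T_data, rng)      # outer loop, Eq. (6.8)
        s2rc = sample_sigma2_rhoC(rhoC_data, rng)       # outer loop, Eq. (6.7)
        pf[i] = conditional_pf(beta0, beta1, s2k, mu_rhoC, s2rc,
                               k_nominal=k_data.mean(),
                               n_mc=n_inner, rng=rng)   # inner loop, Eq. (6.5)
    return pf
```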

The expected value of $p_f$, which is taken here to be the mean of its posterior distribution, is 0.11, which is significantly higher than the regulatory requirement specification of 0.01.

[Figure: probability density of the failure probability $p_f$; horizontal axis from 0 to 0.3.]

Figure 6.6: Uncertainty distribution for $p_f$ based on calibration approach number one

One possible quantification of the level of confidence that the regulatory condition will not be met is given by the fraction of the uncertainty distribution for $p_f$ that is greater than 0.01:

$$\int_{p_f > 0.01} f(p_f \mid d)\, dp_f, \tag{6.9}$$

which is found to be 0.9999 (based on 20,000 samples). Thus, one interpretation of this result is that there is a 99.99% level of confidence that the regulatory condition specified by Eq. (6.4) will not be met.
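Given the posterior samples of $p_f$ from the two-loop sketch above, this confidence measure is a one-line computation:

```python
# Fraction of posterior p_f samples above the 0.01 threshold, per Eq. (6.9);
# `pf_samples` is the output of pf_posterior(...) defined earlier.
confidence = float(np.mean(pf_samples > 0.01))
```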

Approach 2

As mentioned above, when using Bayesian calibration results for probability-of-failure prediction, it is important to differentiate between aleatory (true variability) and epistemic (lack of knowledge) uncertainties. The probability of failure defined by Eq. (6.4) is the result of specimen-to-specimen variability, manifested through the treatment of the material properties k and ρC as random variables (aleatory uncertainty). In the Bayesian calibration analysis, however, the calibration parameters $\theta = (\mu_k, \mu_{\rho C})$ are also treated as random variables; this is an epistemic uncertainty, and it must be considered separately because it does not contribute to actual variability of the response.

As with approach one, a conditional failure probability is first defined. In this case, the failure probability is conditional on the material property distribution parameters, as well as the model bias:

$$p_f\left(\mu_k, \sigma_k^2, \mu_{\rho C}, \sigma_{\rho C}^2, \delta\right) = \int_{\Omega} f(k, \rho C)\, dk\, d\rho C, \tag{6.10}$$

where Ω is the failure region, given by

$$G(k, \rho C, \mathbf{s}) + \delta > 900\,^\circ\mathrm{C}, \tag{6.11}$$

and f(k, ρC) is the joint probability density function for k and ρC, which are treated as independent random variables with probability distributions $k \sim N(\mu_k, \sigma_k^2)$ and $\rho C \sim N(\mu_{\rho C}, \sigma_{\rho C}^2)$.

This conditional failure probability can be computed with simple Monte Carlo simulation.
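A sketch of this Monte Carlo computation, mirroring the approach-one sketch but with $k \sim N(\mu_k, \sigma_k^2)$ and the bias δ added to the model output per Eq. (6.11) (`slab_temperature` is the assumed stand-in for G introduced earlier):

```python
import numpy as np

def conditional_pf_2(mu_k, sigma2_k, mu_rhoC, sigma2_rhoC, delta,
                     n_mc=10_000, rng=None):
    """Monte Carlo estimate of Eq. (6.10), conditional on the given parameters."""
    rng = np.random.default_rng() if rng is None else rng
    k = rng.normal(mu_k, np.sqrt(sigma2_k), n_mc)
    rhoC = rng.normal(mu_rhoC, np.sqrt(sigma2_rhoC), n_mc)
    # Failure region of Eq. (6.11): biased prediction exceeds 900 degrees C.
    return np.mean(slab_temperature(k, rhoC) + delta > 900.0)
```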

As with approach one, an uncertainty distribution for $p_f$ will be developed that accounts not only for the residual uncertainty in the calibration parameters, but also for the residual uncertainty in the material property variances. Although the temperature-dependent model for k is not used in this approach for obtaining model predictions, such a model should still be acknowledged when estimating the variance of k. As such, the posterior distribution for $\sigma_k^2$ given by Eq. (6.8) is used, as with approach one. The previously used posterior distribution for $\sigma_{\rho C}^2$, given by Eq. (6.7), is also employed.

The residual uncertainty in the model bias, δ, is also accounted for. At each iteration of the outer loop, δ is sampled from its posterior distribution, which in this case is given by

$$\delta(\mathbf{s}) \mid d \sim N(-0.27,\, 19.0)\,^\circ\mathrm{C}. \tag{6.12}$$
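Within the two-loop scheme, this amounts to one additional outer-loop draw. The snippet below (reusing the earlier sketches) assumes the 19.0 in Eq. (6.12) denotes a variance, so the standard deviation is √19.0; this should be confirmed against the original analysis.

```python
# One outer-loop iteration for approach two:
delta = rng.normal(-0.27, np.sqrt(19.0))                 # Eq. (6.12)
pf_i = conditional_pf_2(mu_k, s2k, mu_rhoC, s2rc, delta,
                        rng=rng)                         # inner loop, Eq. (6.10)
```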

As described in the corresponding discussion for approach one, the posterior uncertainty distribution for $p_f$ is constructed using a two-loop sampling scheme. The resulting distribution for $p_f$ is illustrated in Figure 6.7. The expected value of $p_f$ is 0.19, and the confidence level that the regulatory requirement will not be met, given by Eq. (6.9), is 99.29%.

[Figure: probability density of the failure probability $p_f$; horizontal axis from 0 to 0.9.]

Figure 6.7: Uncertainty distribution for $p_f$ based on calibration approach number two

There is clearly much more uncertainty in this estimate of $p_f$ than there was with approach one. This is most likely attributable to the introduction of the model inadequacy function, δ(·). Not only does the model bias at the application configuration contain a significant amount of uncertainty (see Eq. (6.12)) that contributes to uncertainty in model predictions, but the presence of δ(·) within the calibration analysis as an uncertain term contributes additional uncertainty to the inference about the calibration parameters, $\theta = (\mu_k, \mu_{\rho C})$.

In a sense, the uncertainty in the model inadequacy function manifests itself twice in the calibrated model predictions. Further, Kennedy and O'Hagan (2001) proposed that δ(·) not be treated as a function of the calibration inputs, θ. In reality, however, it is highly unlikely that the model bias is independent of θ. Since the calibration procedure considers various different values of θ, it might make sense to attempt to account for the relationship between δ and θ. Such an approach might also help to reduce the "double-counting" of the uncertainty associated with the model bias.