4.5 Uncertainty Modelling for the Case Study
Monte Carlo simulation, in which the probability distributions of variables are approximated by ensembles of values sampled according to these distributions, is suitable to cope with this task. This means that multiple results (forming an ensemble) are calculated with the dispersion model, each based on one sampled value for the source term and one for the mean wind direction as input data.
The values sampled for the source term and the mean wind direction – relative to the deterministic values described in Section 4.4 – are listed in Table 4.6 for the 10 samples used in this research.[31]
The multiple results from the atmospheric dispersion model are transferred to the next model in the RODOS model chain, the deposition model, which calculates multiple results, each based on one possible input from the dispersion model. The influence of parameter uncertainties in the ASY, for instance in the deposition model, is considered similarly: multiple sets of model parameters are sampled according to pre-defined probability distributions, e.g. for the deposition time (air–soil) or the transfer rates (soil–plants as well as plants–animals), and the parameter sets are applied one after the other when calculating the multiple model results. In the same way, the multiple results are propagated through the food chain and dose model and the countermeasure simulation model, but without consideration of additional uncertainties arising from parameter uncertainties within these models.
Table 4.6: Sampled Values for Mean Wind Direction and Source Term Relative to Deterministic Values

Sample No.   Deviation of mean wind direction          Deviation of source term from
             from deterministic mean wind direction    deterministic source term
 1           ±0                                        × 1.0
 2           ±0                                        × 0.01
 3           +30                                       × 1.0
 4           ±0                                        × 100
 5           -29                                       × 0.02
 6           -40                                       × 0.007
 7           +6                                        × 5.1
 8           -24                                       × 0.9
 9           +48                                       × 488
10           -4                                        × 1.2
[31] In terms of a Monte Carlo simulation, 10 samples are not representative. However, in view of the high computational effort of the RODOS simulations, a sample size of 10 is nevertheless considered appropriate to demonstrate the developed concept by way of example.
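For illustration, the following sketch shows how such an ensemble calculation can be set up: each sample of Table 4.6 perturbs the deterministic source term and mean wind direction, a parameter set for the deposition model is drawn, and the ensemble member is passed through the model chain. The model functions, parameter distributions and numerical values below are hypothetical placeholders, not RODOS interfaces or case-study data.

```python
import numpy as np

# Sampled deviations relative to the deterministic input data (cf. Table 4.6):
# additive wind-direction offsets and multiplicative source-term factors.
wind_offsets = np.array([0, 0, 30, 0, -29, -40, 6, -24, 48, -4], dtype=float)
source_factors = np.array([1.0, 0.01, 1.0, 100, 0.02, 0.007, 5.1, 0.9, 488, 1.2])

def run_dispersion(source_term, wind_direction):
    """Hypothetical stand-in for the atmospheric dispersion model."""
    return source_term * (1.0 + 0.01 * np.cos(np.radians(wind_direction)))  # placeholder

def run_deposition(air_concentration, params):
    """Hypothetical stand-in for the deposition model with uncertain parameters."""
    return air_concentration * params["deposition_velocity"]                # placeholder

rng = np.random.default_rng(seed=0)
deterministic_source = 1.0e15   # assumed deterministic source term (arbitrary units)
deterministic_wind = 270.0      # assumed deterministic mean wind direction (degrees)

ensemble = []
for offset, factor in zip(wind_offsets, source_factors):
    source = deterministic_source * factor     # one sampled source term ...
    wind = deterministic_wind + offset         # ... and one sampled mean wind direction
    # One sampled parameter set for the deposition model (assumed log-normal here).
    params = {"deposition_velocity": rng.lognormal(mean=-6.0, sigma=0.5)}
    air = run_dispersion(source, wind)
    ensemble.append(run_deposition(air, params))

# 'ensemble' now holds one deposition result per sample; the subsequent models
# (food chain and dose model, countermeasure simulation) are applied member-wise.
```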
The multiple Monte Carlo runs of the ASY and CSY models lead to multiple results for the consequences of the countermeasures. Thus, the multi-attribute decision analysis is not based on one (deterministic) decision table but on a set of decision tables, where each table corresponds to one sample (realisation/scenario) and all tables are evaluated simultaneously.
The complete set of decision tables can be found in Appendix B. Since the entries of Table 4.5 were assessed by the workshop participants and not calculated by RODOS, a justifiable uncertainty modelling would require further workshops in which the participants are explicitly asked to assess (possibly different) values for each scenario. Hence, within this research, these values remain deterministic and thus unchanged across the ten scenarios.
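In terms of data handling, such a set of decision tables can be viewed as a three-dimensional array indexed by scenario, alternative and attribute; the following minimal sketch illustrates this representation (the dimensions, attribute index and example values are assumed for illustration and are not the case-study data).

```python
import numpy as np

n_scenarios, n_alternatives, n_attributes = 10, 4, 15   # dimensions assumed for illustration

# decision_tables[s, a, i] = consequence of alternative a for attribute i in scenario s
decision_tables = np.zeros((n_scenarios, n_alternatives, n_attributes))

# An attribute assessed in the workshop (cf. Table 4.5) remains deterministic, i.e. its
# values are identical in every scenario (index and scores chosen arbitrarily here):
workshop_scores = np.array([0.7, 0.4, 0.9, 0.2])        # one score per alternative
decision_tables[:, :, 14] = workshop_scores             # broadcast over the scenario axis
```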
While, from a theoretical point of view, probability is a unique way to represent uncertainty, the propagation of probability distributions through a complex model chain, such as in RODOS, is a highly challenging task in practice [O'Hagan and Oakley, 2004]. The original probability distributions of the high-dimensional input data (in this research, exemplarily the source term and the mean wind direction) are subject to a number of nonlinear transformations when being propagated through the model chain. In general, Monte Carlo simulation is an adequate method for such problems, but it is nevertheless hardly possible to make a statement about the probability distributions of the simulated consequences in the decision table or about their relation to the original probability distributions of the input data. Statistical tests, however, make it possible to analyse whether the data in the set of decision tables follow certain probability distributions. For instance, the W test introduced by Shapiro and Wilk [1965] can be used to test a set of data for normality. The procedure of the W test and its application to the set of decision tables of the case study are described in Appendix B. It shows that the hypothesis of normality can be rejected for most of the consequences. Taking the logarithm of the data and subsequently applying the test (i.e. investigating whether the data are log-normally distributed) leads to fewer rejections of the hypothesis, but the hypothesis is still rejected for many consequences.
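A minimal sketch of such a test is given below, using the Shapiro–Wilk implementation in SciPy and applying it to the consequences of one alternative for one attribute across the ten scenarios; the numerical values and the significance level are purely illustrative.

```python
import numpy as np
from scipy.stats import shapiro

def normality_not_rejected(samples, alpha=0.05):
    """Shapiro-Wilk W test: True if the hypothesis of normality cannot be rejected."""
    statistic, p_value = shapiro(samples)
    return p_value >= alpha

# Consequences of one alternative for one attribute across the ten scenarios
# (hypothetical values, not the case-study results).
consequences = np.array([12.1, 0.4, 11.8, 950.0, 0.9, 0.3, 55.2, 10.7, 4800.0, 13.5])

print(normality_not_rejected(consequences))           # test for normality
print(normality_not_rejected(np.log(consequences)))   # test for log-normality
```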
In summary, it cannot be concluded with statistical significance that the data in the set of decision tables are normally or log-normally distributed. Consequently, the corresponding expected utility calculations (cf. Section 2.3) will not be based on Equation 2.22, which presupposes normal distributions, but rather on Equation 2.25, which calculates the expected utility for a discrete empirical distribution. However, except for the procedure described at the end of Section 3.4.1, the methods introduced in Chapter 3 do not presume normally distributed data and can thus be applied to the data of the case study without restriction.
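Assuming that Equation 2.25 takes the usual form of an equally weighted average of the utilities over the sampled scenarios, the calculation amounts to the following sketch (the utility values are illustrative, not case-study results).

```python
import numpy as np

def expected_utility(scenario_utilities):
    """Expected utility for a discrete empirical distribution: with N equally likely
    sampled scenarios, it reduces to the mean of the scenario utilities (a sketch of
    the idea behind Equation 2.25, not its literal transcription)."""
    return float(np.mean(scenario_utilities))

# Utilities of one alternative in the ten scenarios (hypothetical values).
u = np.array([0.62, 0.80, 0.64, 0.15, 0.79, 0.81, 0.48, 0.63, 0.05, 0.61])
print(expected_utility(u))   # 0.558
```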
4.5.2 Preferential Uncertainty Modelling
The preferential uncertainties are modelled by replacing the deterministic preference parameters with intervals. Firstly, parameter intervals seem to model human preferences more realistically than discrete values. Secondly, if a decision is not to be taken by a single person but by a group, it will be easier for that group to agree on common parameter intervals than on discrete values. It is then easily possible to find out whether or not the variation of certain preference parameters has an impact on the ranking of the alternatives. Thus, disagreements which do not affect the results can be eliminated from debate and the group can focus on discussing the differences that do matter in terms of the results [French, 2003]. For the hypothetical case study, the capability of the methods introduced in Section 3.3 is demonstrated for the n-dimensional triplet (w, ρ, xmax) (where n is the number of considered attributes). The parameter xmin is not varied since, in the case study, xmin is equal to zero for almost all considered attributes and negative values do not make sense.
4.5.2.1 Modelling Inter-Criteria Preferential Uncertainties
Concerning the inter-criteria preference parameters, it is sufficient to assign intervals to the attributes instead of precise weights. These intervals may differ in size if appropriate. For the case study, weight intervals spanning between 10 % and 20 % around the discrete weights used in the workshop have been assigned. The exemplary intervals (as used within this thesis) are compiled in Table 4.7. These intervals can also be seen as representations of the linguistic imprecision associated with the qualitative weight elicitation results described in Section 4.4.2.
It should be emphasised that the weight intervals must be chosen in such a way that Cw ∩ H ≠ ∅ (cf. Section 3.3.1). For instance, a comparison of the ranges of the assigned weight intervals with the ranges of the actually drawn weights (i.e. a comparison of Table 4.7 and the first diagram in Figure 4.23 on page 131) can provide the basis for such a consistency check.
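A small sketch of such a consistency check and of drawing weights from the assigned intervals is given below. It uses the first-level intervals of Table 4.7, checks that the interval box Cw intersects the normalisation hyperplane H, and then draws normalised weight vectors by simple rejection sampling; the actual sampling scheme of Section 3.3.1 may differ.

```python
import numpy as np

# First-level weight intervals (cf. Table 4.7):
# rad. effectiveness, resources, impact, acceptance
lower = np.array([0.10, 0.15, 0.25, 0.30])
upper = np.array([0.25, 0.30, 0.45, 0.50])

# Consistency check: the interval box C_w intersects the normalisation hyperplane
# H = {w : sum(w) = 1} if and only if sum(lower) <= 1 <= sum(upper).
assert lower.sum() <= 1.0 <= upper.sum(), "weight intervals inconsistent with sum(w) = 1"

def draw_weights(lower, upper, n_samples, rng=None):
    """Draw normalised weight vectors lying within the assigned intervals by simple
    rejection sampling (a sketch; the procedure of Section 3.3.1 may differ)."""
    if rng is None:
        rng = np.random.default_rng(0)
    accepted = []
    while len(accepted) < n_samples:
        w = rng.uniform(lower, upper)              # draw from the interval box
        w = w / w.sum()                            # project onto sum(w) = 1
        if np.all((w >= lower) & (w <= upper)):    # still within all intervals?
            accepted.append(w)
    return np.array(accepted)

weights = draw_weights(lower, upper, n_samples=1000)
```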
Table 4.7: Assigned Weight Intervals

1st Level Criteria                2nd Level Criteria               3rd Level Criteria
[Weight interval]                 [Weight interval]                [Weight interval]

rad. effectiveness [0.10 − 0.25]  population [0.80 − 0.95]         avoided ind. dose adults [0.05 − 0.20]
                                                                   avoided ind. dose children [0.15 − 0.30]
                                                                   avoided collective dose [0.40 − 0.55]
                                                                   received collective dose [0.05 − 0.20]
                                  worker [0.05 − 0.20]             max. ind. worker dose [0.40 − 0.60]
                                                                   collective worker dose [0.40 − 0.60]
resources [0.15 − 0.30]           no. of workers [0.45 − 0.60]
                                  supplies [0.40 − 0.55]
impact [0.25 − 0.45]              total food above [0.25 − 0.40]
                                  food above yr-1 [0.05 − 0.15]
                                  size of aff. area [0.35 − 0.50]
                                  costs [0.15 − 0.25]
acceptance [0.30 − 0.50]          public [0.40 − 0.55]
                                  affected prod. [0.20 − 0.30]
                                  trade and ind. [0.25 − 0.35]
4.5.2.2 Modelling Intra-Criteria Preferential Uncertainties
In order to support a group of decision makers in determining the shape(s) of the value function(s), they are allowed to assign intervals for the parameters ρi (which define the shapes) instead of precise values. To demonstrate the method, the parameters ρi for the 15 attributes are all varied between 0.5 and 10 for increasing preferences (or between −10 and −0.5 for decreasing preferences, respectively).
In addition to the value functions' curvatures, the boundaries of their domains are varied. The upper boundary of each value function's domain is varied in an interval between the maximum of the occurring scores and this maximum augmented by 20 %.
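A sketch of how these intra-criteria uncertainties could be realised for one attribute is given below. The exponential form used here is a common parameterisation of value functions by ρ and is assumed purely for illustration (the exact functional form is that of Section 3.3); the scores are hypothetical, and only the case of increasing preferences is shown.

```python
import numpy as np

def exponential_value(x, rho, x_min, x_max):
    """Exponential value function on [x_min, x_max] for increasing preferences
    (one common parameterisation via rho, assumed here for illustration)."""
    return (1.0 - np.exp(-(x - x_min) / rho)) / (1.0 - np.exp(-(x_max - x_min) / rho))

rng = np.random.default_rng(seed=0)
scores = np.array([3.2, 7.5, 0.0, 12.4])        # hypothetical scores of one attribute

# Sample the curvature parameter rho within [0.5, 10] (increasing preferences) and the
# upper domain boundary between the maximum occurring score and that maximum + 20 %.
rho = rng.uniform(0.5, 10.0)
x_max = rng.uniform(scores.max(), 1.2 * scores.max())

values = exponential_value(scores, rho, x_min=0.0, x_max=x_max)   # values lie in [0, 1]
```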