
4.6 Results for the Case Study

4.6.2 Results Taking Data Uncertainties into Account


decision makers. However, an exact aggregated ranking in the PCA plane can be obtained by projecting the alternatives orthogonally onto the weighting vector w (in the 15-dimensional original data space of the attributes) and by subsequently determining the order of the feet of the respective perpendiculars on w. The “projections of these projections” are shown as dashed lines in Figure 4.14. The intersections of these dashed lines with the decision axis [−π, +π] allow the ranking of the alternatives to be read off.

In addition to the eight considered alternatives, the two fictitious alternatives IDEAL and NADIR are displayed in Figure 4.14. These two points can provide valuable support in assessing the quality of the alternatives based on their relative positions: alternatives projected close to the IDEAL have high performance scores, while alternatives projected near the NADIR do not correspond to the preferences of the decision makers. The projections of the IDEAL and NADIR alternatives intersect the decision axis at −π and +π respectively, i.e. at the ends of the axis.
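The described projection can be reproduced with a few lines of code. The sketch below is purely illustrative: the decision table X, the attribute scores and the weight vector w are random placeholders rather than the data of the case study, and IDEAL and NADIR are simply composed of the best and worst attribute scores. It merely shows that ordering the feet of the perpendiculars on w is equivalent to ordering the weighted sums.

    import numpy as np

    # Illustrative decision table: rows = alternatives, columns = 15 normalised
    # attribute scores (placeholder values, not the case study data).
    rng = np.random.default_rng(42)
    X = rng.random((8, 15))
    alternatives = ["No Action", "Proc", "Stor", "Rmov,T=0", "Rmov,T>0",
                    "Rduc,T=0", "AddS+Proc", "Disp"]
    w = rng.random(15)
    w /= w.sum()                                   # normalised weighting vector

    # Fictitious reference alternatives composed of the best/worst attribute scores.
    ideal, nadir = X.max(axis=0), X.min(axis=0)

    # Foot of the perpendicular of each alternative on w (scalar projection); since
    # it is proportional to the weighted sum X @ w, it induces the same ranking.
    proj = X @ w / np.linalg.norm(w)
    for rank, i in enumerate(np.argsort(proj)[::-1], start=1):
        print(f"{rank}. {alternatives[i]}  ({proj[i]:.3f})")
    print("IDEAL:", ideal @ w / np.linalg.norm(w), " NADIR:", nadir @ w / np.linalg.norm(w))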

Beyond the deterministic illustrations presented in Figure 4.13 and Figure 4.14, visualisations of the effect of the different types of uncertainty on the results of the case study are shown in the following. Starting from results of investigating the impact of data uncertainties (Section 4.5.1) and parameter uncertainties (Section 4.6.3) individually, results of a combined consideration of both types of uncertainty are presented in Section 4.6.4.

[Bar chart panels showing the expected utilities of the alternatives No Action, Proc, Stor, Rmov,T=0, Rmov,T>0, Rduc,T=0, AddS+Proc and Disp.]

Figure 4.15: Expected Utilities for κ = 0.5 Visualised as Bar Chart

As described in Section 2.3, two types of sensitivity analyses can be shown in addition to the expected utilities. Firstly, the sensitivity of the expected utilities with respect to the parameter κ, which reflects the risk attitude, needs to be analysed. This is shown in Figure 4.16. While the expected utilities generally decrease as κ increases, no rank reversals are observable when varying κ.
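The computation behind Figure 4.16 can be sketched as follows. The utility function, its scaling with κ and the scenario values below are assumptions for illustration only (the actual utility model of the case study is not reproduced here); the form is merely chosen such that, as in Figure 4.16, the expected utilities decrease as κ grows.

    import numpy as np

    def utility(v, kappa):
        # Assumed parametric utility over an aggregated value v in [0, 1]:
        # linear for kappa = 0, increasingly curved for larger kappa, so that
        # expected utilities decrease as kappa grows (cf. Figure 4.16).
        if np.isclose(kappa, 0.0):
            return v
        c = 5.0 * kappa                            # illustrative scaling of the risk attitude
        return np.expm1(c * v) / np.expm1(c)

    # Placeholder: aggregated values of one alternative in ten scenarios and
    # the corresponding scenario probabilities.
    values = np.array([0.62, 0.58, 0.66, 0.60, 0.64, 0.59, 0.63, 0.61, 0.57, 0.65])
    probs = np.full(10, 0.1)

    for kappa in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"kappa = {kappa:4.2f} -> expected utility = {probs @ utility(values, kappa):.3f}")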


Figure 4.16: Sensitivity of Expected Utilities with Respect to κ

Secondly, it is important to analyse the sensitivity of the expected utilities with respect to weight changes. For instance, Figure 4.17 shows a sensitivity analysis for the weight of the criterion “acceptance”. In contrast to the “standard sensitivity analysis” carried out in Section 4.4, the alternatives are not represented by straight lines but by curves. For the case study, however, none of these curves intersects twice, although this is possible in general, as described in Section 2.3.


Figure 4.17: Sensitivity of Expected Utilities with Respect to the Weight of “acceptance” (for κ = 0.5)

Figure 4.17 shows that the alternatives “Rmov,T=0” and “Disp” perform more or less equally well for the current weight of acceptance (approximately 38 %). For a higher weight, “Rmov,T=0” receives the highest expected utility; for a weight between approximately 25 % and 38 %, “Disp” turns out to be the most preferred alternative; and for a weight smaller than 25 %, “Proc” receives the highest expected utility. In addition, the dashed vertical lines in Figure 4.17 show the limits of the weight interval assigned to the criterion “acceptance” (cf. Table 4.7), which allows one to investigate whether or not changes in the ranking occur when varying the weight of “acceptance” within these limits. While the performance scores of “Rmov,T=0” and “Disp” are very similar, Figure 4.17 does show that these two alternatives dominate the others within the assigned weight limits.
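Curves of the kind shown in Figure 4.17 can be generated along the following lines. All inputs below are placeholders: the number of criteria, the value scores per scenario and the base weights (apart from the roughly 38 % for “acceptance” mentioned above) are not taken from the case study, the remaining weights are simply rescaled proportionally, and the utility transform is the same assumed form as in the sketch for Figure 4.16.

    import numpy as np

    # Placeholder value scores per alternative, criterion and scenario;
    # criterion 0 plays the role of "acceptance".
    rng = np.random.default_rng(0)
    V = rng.random((8, 6, 10))                     # (alternatives, criteria, scenarios)
    probs = np.full(10, 0.1)
    w0 = np.array([0.38, 0.15, 0.12, 0.12, 0.13, 0.10])   # assumed base weights

    def expected_utilities(w, kappa=0.5):
        agg = np.einsum("acs,c->as", V, w)         # additive aggregation per scenario
        u = np.expm1(5.0 * kappa * agg) / np.expm1(5.0 * kappa)   # assumed utility transform
        return u @ probs                           # expectation over the scenarios

    # Vary the weight of "acceptance" and rescale the remaining weights so that
    # they keep their relative proportions and the weights still sum to one.
    for w_acc in np.linspace(0.0, 1.0, 11):
        w = w0.copy()
        w[1:] *= (1.0 - w_acc) / w0[1:].sum()
        w[0] = w_acc
        print(f"w_acceptance = {w_acc:4.2f} -> {np.round(expected_utilities(w), 3)}")

Under such a model, the expected utilities are no longer linear in the varied weight, which is consistent with the curves rather than straight lines in Figure 4.17.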

The simulation-based approach of simultaneously varying this weight and the weights of the other criteria within their respective intervals is demonstrated in Section 4.6.3.
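A minimal sketch of such a simulation-based variation, assuming hypothetical weight intervals (not those of Table 4.7) and a placeholder score matrix, could look as follows; the actual procedure of Section 4.6.3 may differ, for instance in how the sampled weights are renormalised.

    import numpy as np

    # Hypothetical weight intervals (lower, upper) per criterion; placeholders only.
    intervals = np.array([[0.25, 0.45], [0.10, 0.20], [0.05, 0.15],
                          [0.05, 0.15], [0.10, 0.20], [0.05, 0.15]])
    rng = np.random.default_rng(1)

    # Draw each weight uniformly from its interval and renormalise to sum to one.
    lo, hi = intervals[:, 0], intervals[:, 1]
    samples = lo + (hi - lo) * rng.random((10_000, len(intervals)))
    samples /= samples.sum(axis=1, keepdims=True)

    # Placeholder value scores (alternatives x criteria); count how often each
    # alternative is ranked first across the sampled weight vectors.
    scores = rng.random((8, 6))
    best = np.argmax(samples @ scores.T, axis=1)
    print(np.bincount(best, minlength=8) / len(best))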

While Figure 4.15 provides an aggregated overview, it disguises the fact that changes in the ranking can occur as a consequence of the underlying data uncertainty, i.e. that different alternatives may be most preferable in different scenarios. Figure 4.18 shows the overall performance scores of all alternatives in all scenarios, sorted in ascending order of the performance score of “Rmov,T=0”. While Figure 4.15 shows a slightly higher expected utility for “Disp” than for “Rmov,T=0”, Figure 4.18 shows that, when deterministic analyses are carried out for each scenario in parallel, “Rmov,T=0” receives the highest score in five, “Disp” in four and “Rmov,T>0” in one of the ten scenarios.
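The per-scenario comparison underlying Figure 4.18 amounts to a simple winner count; the score matrix below is a random placeholder rather than the actual case study results.

    import numpy as np

    alternatives = ["No Action", "Proc", "Stor", "Rmov,T=0", "Rmov,T>0",
                    "Rduc,T=0", "AddS+Proc", "Disp"]
    rng = np.random.default_rng(2)
    scores = rng.random((8, 10))                   # placeholder: (alternatives, scenarios)

    # Sort the scenarios in ascending order of the score of "Rmov,T=0"
    # (as in Figure 4.18) and count the per-scenario winners.
    order = np.argsort(scores[alternatives.index("Rmov,T=0")])
    winners = np.argmax(scores[:, order], axis=0)
    for i, n in zip(*np.unique(winners, return_counts=True)):
        print(f"{alternatives[i]}: best in {n} of {scores.shape[1]} scenarios")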

Figure 4.18: Overall Performance Scores for the Different Scenarios

Thus, since the ranking of the alternatives can obviously change as a result of the underlying uncertainty, the focus of the rest of this section is on visualisation techniques that are aimed at explicitly illustrating and communicating the uncertainties associated with the results of the decision analysis while seeking not to cause an information overload.

In addition to the information provided by Figure 4.18, it would be helpful to obtain information about the respective contributions of the individual criteria to the results and to the uncertainties in the results. In order to achieve this goal, an illustration by means of a stacked-bar chart, as proposed in Section 3.2, can be useful (see Figure 4.19).

Figure 4.19: Visualisation of Uncertainties in Results Using a Stacked-Bar Chart


In order to illustrate the uncertainty ranges, not all of the simultaneously calculated results are visualised; instead, the results of the scenarios corresponding to the 5 % and 95 % quantiles (of the overall performance score) are shown in Figure 4.19 alongside the results of the most probable scenario. As stated in Section 3.2, these scenarios are referred to as the worst case and best case scenarios respectively.
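Selecting the scenarios shown in Figure 4.19 can be sketched as follows. The contribution matrix and the scenario probabilities are placeholders, the quantile scenarios are determined per alternative here (the text does not spell out whether they are chosen per alternative or jointly), and mapping a quantile back to the closest concrete scenario is only one possible rule.

    import numpy as np

    # Placeholder weighted single-criterion contributions per alternative,
    # criterion and scenario, plus scenario probabilities.
    rng = np.random.default_rng(3)
    contrib = rng.random((8, 6, 10)) / 6.0         # (alternatives, criteria, scenarios)
    probs = np.full(10, 0.1)
    overall = contrib.sum(axis=1)                  # overall scores, (alternatives, scenarios)

    def quantile_scenario(alt, q):
        # Scenario whose overall score lies closest to the q-quantile for this alternative.
        return int(np.argmin(np.abs(overall[alt] - np.quantile(overall[alt], q))))

    most_probable = int(np.argmax(probs))
    for alt in range(contrib.shape[0]):
        worst, best = quantile_scenario(alt, 0.05), quantile_scenario(alt, 0.95)
        # The three stacked bars per alternative in Figure 4.19 would stack
        # contrib[alt, :, s] for s in (worst, most_probable, best).
        print(alt, [float(round(overall[alt, s], 3)) for s in (worst, most_probable, best)])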

Figure 4.19 contains important information for the decision makers. Using a stacked-bar chart for the visualisation of the results not only allows the uncertainty ranges of the overall goal to be investigated but also makes it possible to explore which of the considered criteria are subject to uncertainties, and it shows the uncertainty ranges of the individual criteria as well as their contribution to the uncertainties in the overall ranking. Furthermore, the proposed stacked-bar chart allows the distinguishability of the alternatives to be analysed. For the considered case study, it is hard to distinguish between the alternatives “Disp”, “Rmov,T=0” and “Rmov,T>0” as a consequence of their very similar performance scores.

In addition to Figure 4.19, the application of PCA provides a good overview of the effect of the data uncertainties and allows the distinguishability of the alternatives to be explored graphically in the PCA plane (see Figure 4.20). Such a visualisation makes it possible to explore whether or not the different alternatives can be evaluated meaningfully based on the considered attributes and the uncertainties affecting the data in the decision tables of the different scenarios.
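A sketch of such a PCA over the decision tables of all scenarios is given below; the tables are random placeholders, and standardising the attributes before the PCA is an assumption (the preprocessing actually used for Figure 4.20 may differ).

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Placeholder decision tables: 10 scenarios x 8 alternatives x 15 attributes.
    rng = np.random.default_rng(4)
    tables = rng.random((10, 8, 15))

    # Stack all scenario tables, standardise the attributes and project everything
    # into the plane of the first two principal components; each alternative then
    # appears as a cloud of 10 points whose spread reflects the data uncertainty.
    Z = StandardScaler().fit_transform(tables.reshape(-1, 15))
    coords = PCA(n_components=2).fit_transform(Z).reshape(10, 8, 2)

    # Per-alternative spread in the PCA plane as a rough indicator of how strongly
    # the data uncertainty moves each alternative around.
    print(np.round(coords.std(axis=0), 3))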

[PCA biplot panels showing the alternatives of the different scenarios together with the attribute axes: no. of workers, public, trade and ind., affected prod., costs, size of aff. area, supplies, collective worker dose, max. ind. worker dose, food above … yr-1.]

Figure 4.20: PCA Visualising the Uncertainty of the Data in the Different Scenarios

The range of variation of an alternative due to the underlying data uncertainty is represented by the complete set of points in the plane corresponding to this alternative. For instance, Figure 4.20 shows that the alternatives “Rmov,T=0”, “Rmov,T>0” and “Rduc,T=0” are not clearly distinguishable as a result of the uncertainties. Furthermore, the sets of points corresponding to the alternatives “Proc” and “AddS+Proc” overlap to a large extent. However, if the overlapping alternatives are each considered as a group, the resulting groups are clearly distinguishable from each other.