Our perspective on risk is different. We adopt a perspective based on the following simple ideas or principles:
• Focus is placed on quantities expressing states of the “world”, i.e., quantities of a physical reality or nature, that are unknown at the time of the analysis but will, if the system being analysed is actually implemented, acquire some value in the future, and possibly become known. We refer to these quantities as observable quantities. In the above example, the cost C is such a quantity. Other examples are number of fatalities, the occurrence of specified events, etc.
• The observable quantities are predicted. We predict the cost C by C*.
• Uncertainty related to the observable quantities is assessed and expressed by means of probabilities. This uncertainty is epistemic, i.e., a result of lack of knowledge. In the example we assign probabilities P(C ≤ c), having a centre of gravity E[C].
The notion observable quantity is to be interpreted as a potentially observable quantity – for example, we may not actually observe the number of injuries (suitably defined) in a process plant although it clearly expresses a state of the “world”. The point is that a true number exists and, if sufficient resources were made available, that number could be found.
Focusing on the above principles gives risk analysis a unified structure that is simple and, in our view, provides a good basis for decision-making.
To explain this perspective in more detail, we have to understand how the probability concept, used in this way, expresses uncertainties.
Consider the following example. If the possible outcomes are 0, 5 and 100, we may assign probability figures, say 0.89, 0.10 and 0.01, respectively, corresponding to the degree of belief or confidence we have in the different values. We may also use odds; if the probability of an event A is 0.10, the odds against A are 9:1.
The reference is a certain standard such as drawing a ball from an urn. If we assign a probability of 0.10 for an event A, we compare our uncertainty of A occurring with drawing one specific ball from an urn containing 10 balls. The assignments are based on available information and knowledge; if we had sufficient information we would be able to predict with certainty the value of the quantities of interest. The quantities are unknown to us as we lack knowledge about how people would act, how machines would work, etc. System analysis and modelling would increase our knowledge and thus, we hope, reduce uncertainties. In some cases, however, the analysis and modelling could in fact increase our uncertainty about the future value of the unknown quantities. Think of a situation where the analyst is confident that a certain type of machine is to be used for future operation. However, a more detailed analysis may reveal that other types of machines are also being considered. As a consequence, the analyst’s uncertainty about the future performance of the system may increase. Normally we would be far from being able to see the future with certainty, but the principle is the important issue here – uncertainties related to the future observable quantities are epistemic, that is, a result of lack of knowledge.
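As a quick check of the arithmetic in the example above, the assignment over the outcomes 0, 5 and 100 and the odds computation can be sketched as follows (the numbers are the ones given in the text; the variable names are ours):

```python
# Outcomes and the assigned degrees of belief from the text.
outcomes = [0, 5, 100]
probs = [0.89, 0.10, 0.01]

# To be a valid uncertainty distribution the probabilities must sum to one.
assert abs(sum(probs) - 1.0) < 1e-12

# Centre of gravity (expected value) of the distribution.
expected = sum(x * p for x, p in zip(outcomes, probs))
print(expected)  # 0*0.89 + 5*0.10 + 100*0.01 = 1.5

# Odds against an event A with P(A) = 0.10 are (1 - P(A)) : P(A), i.e. 9:1.
p_a = 0.10
odds_against = (1 - p_a) / p_a
print(round(odds_against))  # 9
```

The expected value plays the role of E[C] in the principles above: it summarises the uncertainty distribution in a single number, while the full set of probabilities carries the assessor’s degree of belief in each outcome.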
In the example in Section A.1 a probability p equal to 0.5 × 10^-6 is introduced. This probability expresses the analyst’s assessment of uncertainty of D being equal to –1 million NOK, or in other words, the confidence the analyst has in this outcome.
Historical data are also a key source of information when we adopt this perspective. To illustrate, consider the following example. An analyst group wishes to express uncertainty related to the occurrence of an event. Suppose that the observations show three “successes” out of 10. Then we obtain a probability of 0.3. This is our (i.e. the analysts’) assessment of uncertainty related to the occurrence of the event.
This method is appropriate when the analyst judges the observational data to be relevant for the uncertainty assessment of the event, and the number of observations is large. What is considered sufficiently large depends on the setting. As a general guideline, we find that about 10 observations are typically enough to specify the probabilities at this level using this method, provided that not all observations are either “successes” or “failures”. In that case the classical statistical procedure would give a probability equal to 1 or 0, which we would normally not find adequate for expressing our uncertainty about the event. Other procedures then have to be used, either expert judgements or a full Bayesian analysis; see Aven (2003).
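The contrast between the two procedures can be sketched in a few lines. The relative-frequency estimate below is the one used in the text (3 successes out of 10 gives 0.3); the Beta-Binomial update shown as an alternative is one standard Bayesian choice (Laplace’s rule of succession), not necessarily the specific procedure of Aven (2003):

```python
def relative_frequency(successes, n):
    """Probability taken directly as the observed fraction of successes."""
    return successes / n

def laplace_rule(successes, n):
    """Posterior mean under a uniform Beta(1, 1) prior: (s + 1) / (n + 2).
    Never returns exactly 0 or 1 from a finite number of observations."""
    return (successes + 1) / (n + 2)

print(relative_frequency(3, 10))   # 0.3, as in the text
print(relative_frequency(10, 10))  # 1.0 -- normally not adequate
print(laplace_rule(10, 10))        # 11/12, stays strictly below 1
```

The point of the Bayesian alternative is exactly the one made in the text: when all observations are “successes” or all are “failures”, assigning probability 1 or 0 would express certainty we do not have.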
In the production system example above, we may use the experience data as a starting point for assessing the uncertainties about C, the cost of the production system being analysed. The mean of the observations could be used as a prediction of C, if found relevant, and as the expected value of our uncertainty distribution of C.
Note that the probability assigned following this procedure is not an estimate of an underlying true probability as in the classical setting.
All probabilities are conditioned on the background information (and knowledge) that we have at the time we quantify our uncertainty. This information covers historical system performance data, system performance characteristics (such as policies, goals and strategies of a company, type of equipment to be used, etc.), knowledge about the phenomena in question (such as fire and explosions, human behaviour, etc.), decisions made, as well as models used to describe the world. Assumptions are an important part of this information and knowledge. We may assume, for example, in an accident risk analysis, that no major changes in the safety regulations will take place for the time period considered, the plant will be built as planned, the capacity of an emergency preparedness system will be so and so, equipment of a certain type will be used, etc. These assumptions can be viewed as frame conditions of the analysis, and the produced probabilities must always be seen in relation to these conditions. If one or more assumptions are dropped, this would introduce new elements of uncertainty to be reflected in the probabilities. Note, however, that this does not mean that the probabilities are uncertain.
It is the observable quantities that are uncertain. For example, if we have established an uncertainty distribution p(c|d) over the investment cost c for a project, given a certain oil price d, it is not meaningful to talk about uncertainty of p(c|d) even though d is uncertain. A specific d gives one specific probability assignment, a procedure for determining the desired probability. By also assessing uncertainty about the oil price d, more uncertainty is reflected in our uncertainty distribution for c, using the law of total probability. The point is that in our framework uncertainty is related only to observable quantities, not assigned probabilities.
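The unconditioning step via the law of total probability, p(c) = Σ_d p(c|d)·P(d), can be sketched as follows. The oil-price scenarios, their probabilities and the conditional cost distributions are illustrative numbers of our own, not taken from the text:

```python
# Hypothetical oil-price scenarios with the assessor's probabilities P(d).
price_probs = {"low": 0.4, "high": 0.6}

# Conditional uncertainty distributions p(c | d) over investment cost c
# (cost figures in, say, million NOK; all numbers are illustrative).
cost_given_price = {
    "low":  {100: 0.7, 150: 0.3},
    "high": {100: 0.2, 150: 0.8},
}

# Law of total probability: p(c) = sum over d of p(c | d) * P(d).
costs = [100, 150]
p_c = {
    c: sum(cost_given_price[d][c] * p_d for d, p_d in price_probs.items())
    for c in costs
}

print(p_c)  # P(C=100) = 0.7*0.4 + 0.2*0.6 ≈ 0.40, P(C=150) ≈ 0.60
```

Each value of d yields one fixed conditional distribution; the spread in d shows up only as extra spread in the resulting unconditional distribution for c, never as “uncertainty about the probabilities” themselves.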
The basic idea that there is only one type of uncertainty is sometimes questioned. It is felt that some probabilities are easy to assign and feel sure about, while others are vague and it is doubtful that the single number means anything. Should not the vagueness be specified? To provide a basis for the reply, let us remember that a probability P(A) is in fact a short version of a conditional probability of A given the background information K, i.e. P(A) = P(A|K). This means that even if we assign the same probability to two events, they may be considered different as the background information is different. In some cases we may know a great deal about the process and phenomena leading to the event A, and in other cases very little, but still we assign a probability of say 0.50 in both cases. However, if we consider several similar events of the type A, i.e. we change the performance measure, the difference in the background information will often be revealed. An illustrative example of this is given by Lindley (1985, p. 112). Thus care has to be taken when defining the performance measures and when evaluating probabilities in a decision-making context. We always need to address the background information, as it provides a basis for the evaluation.
Next we offer some reflections on empirical control when probability is used as a measure of uncertainty, inspired by the discussion by Cooke (1992). A probability in our context is a measure of uncertainty related to an observable quantity C, as seen from the assessor’s point of view, based on his state of knowledge. There exists no true probability. In principle an observable quantity can be measured, thus probability assignments can to some extent be compared to observations. We write “in principle” as there may be practical difficulties in performing such measurements. Of course, one observation as a basis for comparison with the assigned probability is not very informative in general, but in some cases it is also possible to incorporate other relevant observations and thus give a stronger basis.
“Empirical control” does not, however, apply to the probability at the time of the assignment. When conducting a risk analysis we cannot “verify” an assigned probability, as it expresses the analyst’s uncertainty based on prior observations.
What can be done is to review the background information used as the rationale for the assignment, but in most cases it would not be possible to explicitly document all the transformation steps from this background information to the assigned probability. We conclude that a traditional scientific methodology based on empirical control cannot and should not be applied for evaluating such probabilities.
A risk analyst is an expert on uncertainty assessments and probability assignments, so ensuring coherence in the assessments and assignments should not be a serious problem. The rules of probability calculus apply. There are a number of potential pitfalls in such assignments, but risk analysts should be aware of them, and make use of tools to avoid them. Such tools include calibration procedures, use of reference probabilities, standardisation, and more detailed modelling. We refer to Aven (2003) for further details.
For probabilities of this kind, the term “subjective probability”, or related terms such as “personalistic”, are often used. In our presentation we have avoided such terms, as they seem to indicate that the results you present as an analyst are subjective while the results from others who adopt an alternative risk analysis approach are seen as objective. Why should we always focus on subjectivity? In fact all risk analysis approaches produce subjective risk results; the only reason for using the word “subjective” is that this is its original, historical name. We prefer the phrase “probability as a measure of uncertainty”, making clear who the assessor of the uncertainty is, since this is how we interpret a subjective probability, and we thereby avoid the word “subjective”.
Our perspective means that risk comprises the two dimensions: a) possible consequences and b) associated uncertainties. As there are many facets of these dimensions, the framework offers a broad perspective on risk, reflecting for example the fact that there may be different assessments of uncertainties, as well as different views on how these uncertainties should be handled. This extends the common definition that risk is the combination of possible consequences and associated probabilities, as probability is the way we measure the uncertainties.
However, we acknowledge that there are problems and limitations in using probabilities to measure uncertainties. This relates to our ability to express uncertainties in terms of probabilities, and acknowledges the fact that not all relevant factors for decision-making can be revealed through reference to probabilities. Our analysis and framework for risk management and decision-making in this book are based on this perspective.
Bibliographic Notes
This appendix is based on Aven (2003), Aven and Kristensen (2005) and Aven et al. (2004). The first reference provides a comprehensive presentation and discussion of different perspectives on risk and risk analysis, as well as a bibliographic review of relevant works on this topic.
Example: ALARP Demonstration
This appendix presents an illustration of how an ALARP demonstration should be conducted and documented. The ALARP demonstration is made for the theoretical case presented in Chapter 5 of the book, see page 101. It is associated with modification of a production installation in the operations phase, and the need to determine the extent to which protection of escape ways should be provided for that installation.
Chapter 5 presents background information for this case study, which may be used as reference for the ALARP demonstration. Some of the information in Chapter 5 is repeated here, in order to increase readability.