The Quantification of Uncertainty
2.3 OVERVIEW OF THE DIFFERENT INTERPRETATIONS OF PROBABILITY
2.3.2 The Different Kinds of Probability
In tracing the historical development of probability we have seen that it has had various interpretations, starting from Bernoulli’s notion of probability as a part of certainty, to De Moivre’s equally likely cases, Laplace’s degree of knowledge, the empiricists’ relative frequency and finally to the subjectivists’ degree of belief. This has prompted many, like Good (1950) and Savage (1954), to classify the different kinds of probability, and to discuss the nature of each.
The purpose of this section is to examine this classification, so that the relevance of each type to problems in reliability, risk and survival analysis can be judged; see, for example, Bement, Booker, Keller-McNulty and Singpurwalla (2003). Following Good (1965, p. 6), we shall broadly classify probabilities as being either logical, physical, or psychological, with some having further sub-classifications. The first two are called objective probabilities, and the third is called epistemological probability.
Logical Probability
A logical probability (or what is also known as credibility) is a rational intensity of conviction, implicit in the given information such that if a person does not agree with it, the person is
1. Though it was Kolmogorov (1963) who stated: ‘The frequency concept, based on the notion of limiting frequency as the number of trials increases to infinity, does not contribute anything to substantiate the applicability of the results of probability theory to real practical problems where we have always to deal with a finite number of trials. The frequency concept applied to a large but finite number of trials does not admit a rigorous formal exposition within the framework of pure mathematics’.
wrong. This notion of probability has its roots in antiquity (cf. Savage, 1972); its more recent leading proponents have been Carnap (1950), Jeffreys (1961) and Keynes (1921), all of whom formulated theories in which credibilities were central. Savage (1972) calls this the ‘necessary’ or the ‘symmetry’ concept of probability and does not subscribe to its existence. According to this concept, there is one and only one opinion justified by any body of evidence, so that probability is an objective logical relationship between an event and evidence. Savage and also Good claim that both Keynes and Carnap have, in the latter parts of their work, either renounced this notion or have tempered it. The logical probability concept has not been popular with statisticians, though Jeffreys has used it to address many practical problems. Other objections to this notion of probability have recently been raised by Shafer (1991). Cowell et al. (1999) claim that no satisfactory theory or method for the evaluation of logical probabilities has yet been devised.
Physical Probability
A physical probability (also called propensity, material probability, intrinsic probability or chance) is a probability that is an intrinsic property of the material world, just like density, mass or specific gravity, and it exists irrespective of minds and logic. Many people subscribe to the existence of such probabilities, especially those who, following Venn (1866) and von Mises (1939), interpret probability as a relative frequency. This happens to be the majority of statisticians; the influential Neyman–Pearson approach to statistical inference is based on a relative frequency interpretation of probability (Neyman and Pearson, 1967). The words ‘frequentist’ or
‘frequentist statistics’ are often used to describe such statisticians (and their procedures). Much of the current literature in reliability and survival analysis, including several government standards for acceptance sampling and drug approval, has been developed under the paradigm that probability is a relative frequency. Here, the probability of an event is the long-run frequency with which the event occurs in a certain experimental setup or a certain population. Specifically, suppose that a certain experiment is performed n times under ‘almost identical’ conditions, and suppose that an event of interest occurs in k of the n trials of the experiment. Then the relative frequency of the event is the ratio k/n. If, as n increases, this ratio converges to a number, say p, then p is defined to be the probability of the event.
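The limiting-ratio idea is easy to visualize with a small simulation. The sketch below is only an illustration, not part of the text; the function name, the choice p = 0.3 and the trial counts are our own assumptions. It repeats a Bernoulli trial n times and reports the ratio k/n as n grows.

```python
# Illustrative sketch: relative frequency k/n of an event over n repeated trials.
import random

def relative_frequency(p, n, seed=0):
    """Run n independent trials, each succeeding with chance p; return k/n."""
    rng = random.Random(seed)
    k = sum(1 for _ in range(n) if rng.random() < p)
    return k / n

for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(p=0.3, n=n))
# The printed ratios settle near 0.3 as n grows, mirroring the limiting
# behaviour described in the text.
```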
There have been several criticisms of this interpretation of probability. For one, the concept is applicable to only those situations for which we can conceive of a repeatable experiment.
This excludes many ordinary isolated events, such as the release of radioactivity at a nuclear power plant, the guilt or innocence of an accused individual, or the risk of commissioning a newly designed aircraft or a newly developed medical procedure. There are many events in our daily lives whose probabilities we would like to know, but these probabilities would not be available in the frequency sense. Another objection to this notion of probability is that the conditions under which the repeatable experiments are to be performed are not clear. What does it mean to say that the experiments are to be performed under almost identical conditions? If they are performed under exactly identical conditions we will always get the same outcome, and the ratio k/n will equal 1 or 0. How much deviation should we allow from the conduct of one experiment to the next? Finally, how large should n be allowed to get before the limit p is obtained, and how close to p should the ratio k/n get? Whereas the relative frequency view of probability makes Kolmogorov’s axioms easy to justify, there are still some concerns about the adequacy of this point of view for interpreting conditional probability and independence (cf. Shafer, 1991). Kolmogorov simply defined conditional probability as the ratio of two unconditional probabilities, but offered no interpretation that could be used to assess conditional probabilities directly. A consequence is our inability to interpret the law of total probability, mentioned in section 2.4. These and other concerns, such as inadmissibility (cf. Basu, 1975;
Cornfield, 1969; Efron, 1978), have recently caused many statisticians, and also others, such as economists, engineers, and operations research analysts, to rethink this empiricist view of
probability, and to explore other alternatives. In this book, we will not subscribe to the relative frequency interpretation of probability (footnote 2); this is a feature that distinguishes this work in reliability from that of the others.
Psychological Probability
A psychological probability is a degree of belief or intensity of conviction that is used for betting, or making decisions, not necessarily with an attempt at being consistent with our other opinions.
When a person uses a consistent set of psychological probabilities, then these probabilities are called subjective probabilities; see below. A consistent set of probabilities is one against which it is not possible for an opponent to make a selection of bets against which you are bound to lose, no matter what happens. Both de Finetti (1937) and Savage (1972) regard subjective probability as the only notion of probability that is defensible, and most Bayesian statisticians subscribe to this notion of probability. Savage (1972) calls subjective probability personal probability, and describes it as a certain kind of numerical measure of the opinions of somebody about something.
The point of view of probability that we strive to adopt here is subjective or personal; thus it behooves us to elaborate on this notion and to discuss its pros and cons.
Making Personal (or Subjective) Probability Operational
The notion of a personal probability of an event was made operational by de Finetti (1937), who defined it as the price that you would pay in return for a unit of payment to you in case the event actually occurs. Thus, for example, if you declared that your personal probability for some event, say E, is .75, then this means that you are willing to put $0.75 on the table on E, if the person you are betting with is willing to put $0.25 on the table against E. If the event E occurs, you win the other person’s $0.25; if E does not occur, you lose your $0.75 to the other person. Put another way, when you declare that your personal probability for an event E is .75, then you are de facto paying $0.75 for a ticket that returns $1 if E happens. It is important to recognize that in taking the above bet, you are also willing to take the other side of the bet. That is, you are willing to pay $0.25 for a ticket that returns $1 if E does not happen. Therefore, according to de Finetti, the probability of a proposition is the price at which you are neutral between buying and selling a ticket that is worth $1 if the proposition is true, and is worthless otherwise. Alternatively put, probability is a two-sided bet. Since there is no physically realizable way to exchange more than a finite number of dollars, de Finetti insisted that the number of transactions be limited to a finite number of sales and purchases. Finally, even though we used the US monetary unit of a dollar to illustrate the mechanics of betting, the basic requirement of de Finetti is that the betting be done in terms of any desirable monetary unit. A consequence is that probability is a unitless quantity. This feature is germane to the material of section 4.4.
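The accounting behind this two-sided bet can be written out explicitly. The following sketch is ours, not de Finetti’s; the function name and the use of a $1 ticket are illustrative assumptions. It simply settles the bet described above from your point of view.

```python
# Settling a $1-ticket bet on an event E, priced at your declared P(E).
def settle_bet(price, event_occurred, side="buy"):
    """Net gain to you from a $1 ticket on E priced at `price` (buy or sell)."""
    payoff = 1.0 if event_occurred else 0.0
    return (payoff - price) if side == "buy" else (price - payoff)

p = 0.75                                   # your declared personal probability
print(settle_bet(p, True, side="buy"))     #  0.25: E occurs, you win the other stake
print(settle_bet(p, False, side="buy"))    # -0.75: E does not occur, you lose your stake
print(settle_bet(p, False, side="sell"))   #  0.75: as seller, you keep the price paid to you
```

Neutrality between the buy and sell sides at the same price is what makes the declared number a probability in de Finetti’s sense.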
Probabilities and odds on (odds against) are related. Specifically, when we say that the odds on (against) an event E are x to y, we are implying that we are willing to pay an amount x (y) now, in exchange for an amount x + y should E occur (not occur). The odds of x to y on are the same as the odds of y to x against. Thus, probabilities and odds are related in the sense that the odds against an event E are (1 − P(E))/P(E), and, when the odds against E are x to y, P(E) = y/(x + y). Consequently, we can change from odds to probability and vice versa (Lindley, 1985).
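The conversion between odds and probabilities is simple arithmetic, and a short sketch (the helper names are ours) makes the two directions explicit.

```python
# Odds/probability conversions as described above.
def prob_from_odds_on(x, y):
    """P(E) when the odds on E are x to y."""
    return x / (x + y)

def odds_against(p):
    """Odds against E expressed as the single ratio (1 - P(E)) / P(E)."""
    return (1 - p) / p

p = prob_from_odds_on(3, 1)   # odds of 3 to 1 on E
print(p)                      # 0.75
print(odds_against(p))        # 0.333..., i.e. odds of 1 to 3 against E
```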
Conditional probabilities can also be made operational via the above betting scheme. Specifically, suppose that our knowledge of an event F were to precede our knowledge of an event E. Then the conditional probability P(E | F) is the degree to which you currently believe in E if, in addition to what you already know, you were also to learn that F has occurred. That is, it is the amount you are willing to pay now for a $1 ticket on E right after F happens, but with the provision that all bets are off if F does not happen. Finally, the notion of independent events can be easily explained in the context of bets. To say that any two events E and F are independent means that a knowledge of the occurrence or the non-occurrence of F will not change our bets on E, and
vice versa. That is, the probability we would assign to the event E will be the same, irrespective of our being informed of whether F has occurred or not occurred.
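A called-off bet of this kind is easy to spell out. In the sketch below, the symbols E and F, the price of 0.6 and the helper name are illustrative assumptions; the logic follows the provision that all bets are off if F does not happen.

```python
# A "called-off" bet: pay `price` now for a $1 ticket on E, refunded if F fails.
def settle_called_off_bet(price, F_occurred, E_occurred):
    """Net gain from a $1 ticket on E given F, bought now at `price`."""
    if not F_occurred:
        return 0.0                           # all bets are off; the price is returned
    return (1.0 if E_occurred else 0.0) - price

print(settle_called_off_bet(0.6, F_occurred=False, E_occurred=True))   #  0.0
print(settle_called_off_bet(0.6, F_occurred=True,  E_occurred=True))   #  0.4
print(settle_called_off_bet(0.6, F_occurred=True,  E_occurred=False))  # -0.6
# Independence, in betting terms: the price you would quote for a ticket on E is
# the same whether or not you are told that F has occurred.
```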
Since there is no such thing as the right amount that one should put on the table, personal or subjective probabilities are not objective. However, employing personal probabilities requires one to be honest with oneself, in the sense that it takes a lot of self-discipline not to exaggerate the probabilities that one would have attached to any hypotheses before they were suggested.
Also, when we theorize about personal probabilities, we theorize about opinions generated by a ‘coherent person’, that is, a person who does not allow a book to be made against them. By making a book against a person we mean that if we offered this person various contingencies we can, by some sleight of hand, sell them a bill of goods such that the person will be paying us money no matter what happens. A coherent person is one who declares personal probabilities that are consistent, and personal probabilities will be consistent if they are coherent, as defined at the end of section 2.2.1.
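The arithmetic of having a book made against you is worth seeing once. In the hypothetical example below, a person quotes a price of 0.7 for a $1 ticket on an event E and 0.5 for a ticket on its complement; because the two prices sum to more than 1, an opponent can sell the person both tickets and guarantee them a loss, whichever outcome obtains.

```python
# A minimal Dutch-book example with hypothetical, incoherent prices.
def sure_loss(price_E, price_not_E):
    """Guaranteed loss to someone who buys $1 tickets on both E and not-E."""
    total_paid = price_E + price_not_E
    total_payoff = 1.0                 # exactly one of E, not-E occurs, so one ticket pays $1
    return total_paid - total_payoff

print(round(sure_loss(0.7, 0.5), 2))   # 0.2: a certain loss, no matter what happens
# Coherent prices satisfy price_E + price_not_E = 1, so no such selection of bets
# can be guaranteed to lose.
```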
There are other aspects of personal probability that are important to note. The first is that personal probabilities, all of them, are always relative to one’s state of knowledge. That is, their specification at any time depends on the background information that we have at that time. If that background information were to be different, our specified personal probability is also likely to be different. When the background information changes with the passage of time, either due to added knowledge or due to actual data, the personal probabilities may also change. Bayes’ law, which will be discussed later, gives us a prescription of precisely how to change our personal probability in the light of new information, including new data. Contrast this to probabilities based on relative frequencies; they, being independent of minds and logic, are always absolute. The second point to note is that the notion of personal probability encompasses the logical probability notion of symmetry (or equally likely cases) considered by De Moivre and others. The reason is that symmetry is a judgment, and is therefore personal or subjective. It is physically impossible to make coins and dice that are perfectly symmetrical, so the judgment of symmetry is one of practical convenience, and thus personal. Furthermore, to say that something is symmetrical or equally likely implies that it is equally probable, and to define probability in terms of its likeliness could be viewed as circular reasoning. Thirdly, the notion of personal probability does not exclude from consideration information pertaining to relative frequency; it simply incorporates frequencies into the background information. Our final point pertains to the issue of consistency and coherence in the specification of personal probabilities.
This may be difficult to enforce in real life, and psychologists such as Tversky and Kahneman (1986) have given many examples showing that people do not conform to the rules of probability in their actual behavior. Consequently, some regard the theory of personal probability and the rules of probability (i.e. the Kolmogorov axioms) as being primarily a normative theory – that is, a theory which prescribes rules by which we ought to behave or strive to behave, and not rules by which we actually behave. For a discourse on the elicitation of personal probabilities, see Savage (1971).
2.4 EXTENDING THE RULES OF PROBABILITY: LAW OF TOTAL PROBABILITY AND BAYES’ LAW