2.6 Value of information
2.6.1 Basic theory: coin-flipping example
Value of information, also known as value of clairvoyance, is commonly defined as the expected value that a decision maker would be willing to pay for information that may change a decision (Howard, 1966).
It involves comparing the expected value of a decision policy without the information against the expected value if the information were to be obtained. The comparison is usually based on the prior and posterior PDFs from Bayes’ theorem, but here the conditioning information is itself uncertain, so a probabilistic prediction of it must also be employed.
In general, let $G$ be a gain function, defined both for a decision policy $\pi_I$ with information $I$ and for a decision policy $\pi$ without information $I$. Then the value of information ($VoI$) can be written as:

$$
VoI = E[G(\pi_I)] - E[G(\pi)] = \int E[G(\pi_I) \mid I]\, p(I)\, \mathrm{d}I - E[G(\pi)] \tag{2.13}
$$

The main step in Equation 2.13 is to find the expected value of $G$ based on $\pi_I$ by marginalizing out the information $I$. To do so, some knowledge of the PDF of $I$ is essential.
In other words, $VoI$ is a prediction of the influence of the information $I$ on the final outcome of a decision (represented by the gain function $G$). The following simple example, a coin-flipping gamble, demonstrates the concept.
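When the information $I$ takes finitely many values, as in the example below, the integral in Equation 2.13 reduces to a sum over the possible observations. The following minimal Python sketch shows this discrete form; all names and the example numbers are illustrative assumptions, not from the text:

```python
# Discrete-information form of Equation 2.13:
#   VoI = sum_i E[G(pi_I) | I = i] P(I = i)  -  E[G(pi)]
# All names below are illustrative; the text itself defines no code.
def value_of_information(gains_with_info, info_probs, gain_without_info):
    expected_with_info = sum(g * p for g, p in zip(gains_with_info, info_probs))
    return expected_with_info - gain_without_info

# Two equally likely observations with conditional expected gains 0.2 and 0.0,
# versus an expected gain of 0.0 when acting without the information:
print(value_of_information([0.2, 0.0], [0.5, 0.5], 0.0))  # 0.1
```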
Let there be a coin with unknown probability $x$ of landing heads on a flip. You are about to bet an amount of money $c$ on getting a head on the next flip. If the next flip is indeed a head, you obtain a reward of $r \cdot c$; otherwise, you get nothing back. Under a decision policy $\pi_0$ in which one always makes the bet, the expected gain $G(\pi_0)$ in this game, given $x$, is:
$$
E[G(\pi_0) \mid x] = r \cdot c \cdot x + 0 \cdot (1 - x) - c = c \cdot (r \cdot x - 1) \tag{2.14}
$$

Without any prior information on $x$, we assume a uniform prior PDF for $x$ between 0 and 1. Hence, the prior expected gain is:

$$
E[G(\pi_0)] = \int_0^1 E[G(\pi_0) \mid x]\, p(x)\, \mathrm{d}x = \int_0^1 c \cdot (r \cdot x - 1) \cdot 1 \, \mathrm{d}x = c \left( \frac{r}{2} - 1 \right) \tag{2.15}
$$
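Equation 2.15 can be verified symbolically; the following sympy sketch is illustrative and not part of the original derivation:

```python
# Symbolic check of Equation 2.15 with sympy (illustrative, not from the text).
import sympy as sp

x, r, c = sp.symbols("x r c", positive=True)

# Equation 2.14: expected gain given x under the always-bet policy pi_0.
gain_given_x = c * (r * x - 1)

# Uniform prior p(x) = 1 on [0, 1]; marginalize out x as in Equation 2.15.
prior_gain = sp.integrate(gain_given_x * 1, (x, 0, 1))
print(sp.simplify(prior_gain))  # c*(r - 2)/2, i.e. c*(r/2 - 1)
```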
Now, before you decide whether to make the bet, you are offered a chance to know the result of a previous flip. How much would you pay for this piece of information $I$? How would you then decide whether to bet? By Bayes’ theorem, the posterior PDF of $x$ given that the previous flip was a head ($I = H$) is:

$$
p(x \mid I = H) = \frac{P(I = H \mid x)\, p(x)}{P(I = H)} = \frac{x \cdot 1}{\int_0^1 x \cdot 1 \, \mathrm{d}x} = 2x \tag{2.16}
$$

Similarly, the posterior PDF of $x$ given that the previous flip was a tail ($I = T$) is:

$$
p(x \mid I = T) = \frac{P(I = T \mid x)\, p(x)}{P(I = T)} = \frac{(1 - x) \cdot 1}{\int_0^1 (1 - x) \cdot 1 \, \mathrm{d}x} = 2(1 - x) \tag{2.17}
$$

The prior probabilities of heads and tails, $P(I = H)$ and $P(I = T)$, in the denominators of Equations 2.16 and 2.17 are both equal to $1/2$ by the total probability theorem. In the context of machine learning, these two probabilities are also known as the evidence for the posterior PDF of $x$ (they will be used when calculating $VoI$).
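The evidences and posteriors in Equations 2.16 and 2.17 can be checked in the same way; again, the sympy sketch below is illustrative:

```python
# Check of Equations 2.16-2.17: uniform prior, Bernoulli likelihood for one flip.
import sympy as sp

x = sp.symbols("x", positive=True)
prior = sp.Integer(1)                # uniform prior PDF on [0, 1]

evidence_H = sp.integrate(x * prior, (x, 0, 1))        # P(I = H) = 1/2
evidence_T = sp.integrate((1 - x) * prior, (x, 0, 1))  # P(I = T) = 1/2

posterior_H = x * prior / evidence_H                   # 2*x
posterior_T = (1 - x) * prior / evidence_T             # 2*(1 - x)
print(evidence_H, evidence_T, posterior_H, posterior_T)
```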
The posterior expected gain $G(\pi_0)$, conditional on the possible information that you may receive, is:

$$
E[G(\pi_0) \mid I = H] = \int_0^1 E[G(\pi_0) \mid x]\, p(x \mid I = H)\, \mathrm{d}x = \int_0^1 c \cdot (r \cdot x - 1) \cdot 2x \, \mathrm{d}x = c \left( \frac{2r}{3} - 1 \right) \tag{2.18}
$$

$$
E[G(\pi_0) \mid I = T] = \int_0^1 E[G(\pi_0) \mid x]\, p(x \mid I = T)\, \mathrm{d}x = \int_0^1 c \cdot (r \cdot x - 1) \cdot 2(1 - x) \, \mathrm{d}x = c \left( \frac{r}{3} - 1 \right) \tag{2.19}
$$
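The integrals in Equations 2.18 and 2.19 admit the same kind of symbolic check (again an illustrative sketch):

```python
# Check of Equations 2.18-2.19: integrate the conditional gain of Equation 2.14
# against the posteriors 2x and 2(1 - x) from Equations 2.16-2.17.
import sympy as sp

x, r, c = sp.symbols("x r c", positive=True)
gain_given_x = c * (r * x - 1)

gain_H = sp.integrate(gain_given_x * 2 * x, (x, 0, 1))        # c*(2*r/3 - 1)
gain_T = sp.integrate(gain_given_x * 2 * (1 - x), (x, 0, 1))  # c*(r/3 - 1)
print(sp.simplify(gain_H), sp.simplify(gain_T))
```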
At this stage, you do not know which information you will receive. Let us find the expected gain under a new policy $\pi$, which always decides on the bet so as to maximize the expected gain, and the $VoI$ under four cases: (1) $r \leq 1.5$; (2) $1.5 < r \leq 2$; (3) $2 < r < 3$; (4) $r \geq 3$. Let $\pi_I$ be the same decision policy, but with the possibility of having information $I$; a numeric cross-check of all four cases follows the list.
1. $r \leq 1.5$:
Based on Equation 2.15, $E[G(\pi_0)] \leq -0.25c < 0$, so you will not make the bet under policy $\pi$ if no information is given. Hence, $E[G(\pi)] = 0$. Also, since $E[G(\pi_0) \mid I = H] \leq 0$ and $E[G(\pi_0) \mid I = T] \leq -0.5c < 0$, you will not make the bet under policy $\pi_I$ no matter what information you receive. Hence, $E[G(\pi_I)] = 0$. Therefore, $VoI = E[G(\pi_I)] - E[G(\pi)] = 0$.

2. $1.5 < r \leq 2$:
Since $E[G(\pi_0)] \leq 0$, you will not make the bet if no information is given. Hence, $E[G(\pi)] = 0$. On the other hand, $E[G(\pi_0) \mid I = H] > 0$ but $E[G(\pi_0) \mid I = T] < 0$. This means that you will make the bet if $I = H$ but not if $I = T$. Since you do not know which information you will receive, your expected gain is:

$$
E[G(\pi_I)] = E[G(\pi_I) \mid I = H]\, P(I = H) + 0 \cdot P(I = T) = c \left( \frac{r}{3} - \frac{1}{2} \right) \tag{2.20}
$$

Therefore, $VoI = E[G(\pi_I)] - E[G(\pi)] = c(r/3 - 1/2) > 0$. Note that $E[G(\pi_I)] \neq E[G(\pi)]$: the former is conditional on your receiving the information, even though you do not yet know whether the previous flip was a head or a tail.

3. $2 < r < 3$:
Since $E[G(\pi_0)] > 0$, you will make the bet if no information is given. Hence, $E[G(\pi)]$ takes the value in Equation 2.15. On the other hand, $E[G(\pi_0) \mid I = H] > 0$ but $E[G(\pi_0) \mid I = T] < 0$, so you will make the bet if $I = H$ but not if $I = T$. As in Case 2, $E[G(\pi_I)]$ takes the value in Equation 2.20. Therefore, $VoI = E[G(\pi_I)] - E[G(\pi)] = c(1/2 - r/6) > 0$.

4. $r \geq 3$:
Since $E[G(\pi_0)] > 0$, you will make the bet if no information is given. Hence, $E[G(\pi)]$ takes the value in Equation 2.15. Also, since $E[G(\pi_0) \mid I = H] > 0$ and $E[G(\pi_0) \mid I = T] \geq 0$, you will make the bet no matter what information is given:

$$
E[G(\pi_I)] = E[G(\pi_I) \mid I = H]\, P(I = H) + E[G(\pi_I) \mid I = T]\, P(I = T) = c \left( \frac{r}{2} - 1 \right) \tag{2.21}
$$

Therefore, $VoI = E[G(\pi_I)] - E[G(\pi)] = 0$.
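The four cases can be cross-checked numerically. The sketch below optimizes the bet decision with and without the information and takes the difference, exactly as in the case analysis above; the choice $c = 1$ and the sample values of $r$ are arbitrary illustrations:

```python
# Numeric cross-check of the four cases; c = 1 is an arbitrary illustrative choice.
def voi(r, c=1.0):
    prior_gain = c * (r / 2 - 1)      # Equation 2.15
    gain_H = c * (2 * r / 3 - 1)      # Equation 2.18
    gain_T = c * (r / 3 - 1)          # Equation 2.19

    # Policy pi: bet without information only if the prior expected gain is positive.
    without_info = max(prior_gain, 0.0)
    # Policy pi_I: decide separately for I = H and I = T, each with probability 1/2.
    with_info = 0.5 * max(gain_H, 0.0) + 0.5 * max(gain_T, 0.0)
    return with_info - without_info

for r in (1.2, 1.8, 2.5, 3.5):        # one representative r from each case
    print(f"r = {r}: VoI = {voi(r):.4f}")
# Expected: 0, c*(r/3 - 1/2) = 0.1, c*(1/2 - r/6) = 0.0833, 0
```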
Note that in both Cases 1 and 4, $VoI = 0$. This is consistent with the definition, because the information does not change the decision, and so it has no added value. In contrast, in Cases 2 and 3, $VoI > 0$. Hence, you should be willing to pay for the information up to the value of $VoI$ obtained from the calculation. To conclude, based on the theory of value of information, the decision of whether to make the bet, depending on the return rate $r$, is:

1. $r \leq 1.5$: You should not buy the information and should never make the bet.
2. $1.5 < r \leq 2$: You should buy the information at a price up to $VoI = c(r/3 - 1/2)$ and make the bet only if the previous flip was a head.
3. $2 < r < 3$: You should buy the information at a price up to $VoI = c(1/2 - r/6)$ and make the bet only if the previous flip was a head.
4. $r \geq 3$: You should not buy the information and should always make the bet.
Note that this conclusion rests on the chosen uniform prior on $x$. After the first bet, you will observe the result of that flip. A more appropriate betting strategy is then to decide based on the posterior of $x$ given the information you have just obtained, as sketched below.
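For concreteness, the uniform prior is a Beta(1, 1) distribution, so after observing $h$ heads and $t$ tails the posterior of $x$ is Beta($1 + h$, $1 + t$) with mean $(1 + h)/(2 + h + t)$; this Beta form is a standard conjugacy fact, not spelled out in the text (Equation 2.16, with posterior mean $2/3$ after one head, is its $h = 1$, $t = 0$ case). A minimal sketch of the resulting betting rule, with an illustrative helper name:

```python
# Bet iff the posterior expected gain c*(r*E[x | data] - 1) is positive.
# The helper below is illustrative; the text prescribes no specific code.
def should_bet(r, heads_seen, tails_seen):
    posterior_mean = (1 + heads_seen) / (2 + heads_seen + tails_seen)
    return r * posterior_mean > 1

print(should_bet(r=1.8, heads_seen=1, tails_seen=0))  # True:  E[x | H] = 2/3
print(should_bet(r=1.8, heads_seen=0, tails_seen=1))  # False: E[x | T] = 1/3
```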