Appendix B
APPENDIX TO CHAPTER 3
B.1 Binary distributions
Let us think in terms of the likelihood ratio. As we said before, $\pi_t$ is multiplied by $p/(1-p)$ $h$ times and by $(1-p)/p$ $\ell$ times. These factors do not depend on $\pi_t$; therefore it also does not matter in which order we observe these actions, and the final likelihood is equal to
$$\pi_t \cdot \left(\frac{p}{1-p}\right)^{h}\left(\frac{1-p}{p}\right)^{\ell} = \pi_t \cdot \left(\frac{p}{1-p}\right)^{h-\ell}.$$
Therefore, the public belief is a random walk on a fixed lattice, and its position is determined by the difference between the number of times we observed the $High$ and the $Low$ signals, with $\pi_0 = 1$.
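The order-independence of the likelihood ratio can be checked numerically. A minimal sketch (the precision value $p = 0.7$ and the signal counts are illustrative, not from the text):

```python
from itertools import permutations

p = 0.7  # signal precision; illustrative value, not from the text

def update(lr, signal):
    """Multiply the likelihood ratio by p/(1-p) on High and by (1-p)/p on Low."""
    return lr * (p / (1 - p) if signal == "H" else (1 - p) / p)

# Every ordering of h = 3 High and l = 2 Low signals yields the same final
# likelihood ratio, pi_0 * (p/(1-p))**(h - l), with pi_0 = 1.
finals = []
for order in set(permutations("HHHLL")):
    lr = 1.0  # pi_0 = 1
    for s in order:
        lr = update(lr, s)
    finals.append(lr)

target = (p / (1 - p)) ** (3 - 2)
assert all(abs(v - target) < 1e-9 for v in finals)
```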
Proof of Lemma 11. First we show that once we stop learning (no prices, $\rho_t = 1$) there is no reason to resume learning later. Notice that from the social planner's perspective this game is stationary. Therefore, if it is optimal to stop at time $t_0$ at levels $r$ and $-r$, then at any time $t' > t_0$ there is no incentive to choose $\rho_t$ different from 1, as we face the same problem as at time $t_0$, when it is optimal to stop.
Second, suppose we choose to stop upon arrival at either level $b > 0$ or $a < 0$. Then the expected utility at $1/2$ satisfies the relation
$$u(0.5) = c_a + \mathbb{E}_a(\delta^t)u(a) + c_b + \mathbb{E}_b(\delta^t)u(b),$$
where $c_a$ and $c_b$ correspond to the utility that we collect on the way to the corresponding boundary and do not depend on $u$. Sum the first two terms and the last two, and choose the larger sum (if they are equal, pick either one). Say it is the second one, which corresponds to level $b$. Then, due to the symmetry of $u$, it is profitable for the social planner to choose the lower level to be $-b$ instead of $a$. This concludes the proof.
Proof of Lemma 12. Suppose at time $t$ the public belief $\pi_t > q$ and we are still acquiring information, i.e., in the learning period. Depending on the private signal we get, our posterior will be equal to
$$\pi_{t+1} = \begin{cases} \dfrac{\pi_t p}{\pi_t p + (1-p)(1-\pi_t)} & \text{if we get the } High \text{ signal,}\\[2ex] \dfrac{\pi_t (1-p)}{\pi_t(1-p) + p(1-\pi_t)} & \text{if we get the } Low \text{ signal.} \end{cases}$$
Moreover, the former signal happens with probability $\pi_t p + (1-\pi_t)(1-p)$, as with probability $\pi_t$ we are in the $High$ state and with probability $1-\pi_t$ in the $Low$ state. The analogous calculation gives that the latter signal occurs with probability $\pi_t(1-p) + p(1-\pi_t)$. Also, remember that when we get the $Low$ private signal we take action 0, so our expected utility is $1-\pi_{t+1}$. Therefore, the expected utility today is equal to
$$\frac{\pi_t p}{\pi_t p + (1-p)(1-\pi_t)}\bigl(\pi_t p + (1-p)(1-\pi_t)\bigr) + \left(1 - \frac{\pi_t(1-p)}{\pi_t(1-p) + p(1-\pi_t)}\right)\bigl(\pi_t(1-p) + p(1-\pi_t)\bigr) = p.$$
Similarly, we get the same expected utility if $\pi_t < 1-q$.
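The identity behind this computation, namely that the expected one-step utility equals $p$ for any interior belief, can be verified numerically. A minimal sketch with illustrative parameter values:

```python
p = 0.8  # illustrative precision, not from the text

for pi in [0.3, 0.5, 0.62, 0.9]:
    prob_high = pi * p + (1 - pi) * (1 - p)   # probability of a High signal
    prob_low = pi * (1 - p) + p * (1 - pi)    # probability of a Low signal
    post_high = pi * p / prob_high            # posterior after High
    post_low = pi * (1 - p) / prob_low        # posterior after Low
    # Action 1 after High (utility = posterior), action 0 after Low
    # (utility = 1 - posterior).
    expected = prob_high * post_high + prob_low * (1 - post_low)
    assert abs(expected - p) < 1e-12
```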
Now we are ready to prove the main result.
Proof of Theorem 13. For notational convenience we write $u_n$ instead of $u_H(\pi(n))$. We are going to start by solving (3.6) and then explain why $u_L(0.5)$ from (3.7) is the same. We know that the solution of the recurrence equation
$$u_n = (1-\delta)p + \delta\bigl(p\,u_{n+1} + (1-p)\,u_{n-1}\bigr)$$
is
$$u_n = C_1\left(\frac{1}{2}\left(\frac{1}{\delta p} - \frac{\sqrt{1 - 4p\delta^2 + 4p^2\delta^2}}{\delta p}\right)\right)^{\!n} + C_2\left(\frac{1}{2}\left(\frac{1}{\delta p} + \frac{\sqrt{1 - 4p\delta^2 + 4p^2\delta^2}}{\delta p}\right)\right)^{\!n} + p,$$
for some constants $C_1$ and $C_2$. It is also straightforward to verify that these $u_n$'s indeed satisfy our recurrence equation. Now we need to solve for $C_1$ and $C_2$ using the boundary conditions $u_{-r} = (1-p)^r/\bigl((1-p)^r + p^r\bigr)$ and $u_r = p^r/\bigl((1-p)^r + p^r\bigr)$. This results in two equations
$$\begin{cases} \dfrac{(1-p)^r}{p^r + (1-p)^r} = C_1 x_1^{-r} + C_2 x_2^{-r} + p,\\[2ex] \dfrac{p^r}{p^r + (1-p)^r} = C_1 x_1^{r} + C_2 x_2^{r} + p, \end{cases}$$
where
$$x_{1,2} = \frac{1}{2}\left(\frac{1}{\delta p} \mp \frac{\sqrt{1 - 4p\delta^2 + 4p^2\delta^2}}{\delta p}\right).$$
Thus,
$$\begin{cases} C_1 = \left(\dfrac{(1-p)^r}{p^r+(1-p)^r} - p\right)x_1^r - C_2\, x_2^{-r} x_1^r,\\[2ex] C_2\left(x_2^r - x_1^{2r} x_2^{-r}\right) = \dfrac{p^r}{p^r+(1-p)^r} - p + \left(p - \dfrac{(1-p)^r}{p^r+(1-p)^r}\right)x_1^{2r}. \end{cases}$$
It follows that
$$\begin{cases} C_2 = \dfrac{\dfrac{p^r}{p^r+(1-p)^r} - p + \left(p - \dfrac{(1-p)^r}{p^r+(1-p)^r}\right)x_1^{2r}}{x_2^r - x_1^{2r}x_2^{-r}},\\[3ex] C_1 = \dfrac{\left(\dfrac{(1-p)^r}{p^r+(1-p)^r} - p\right)x_1^r x_2^r - \left(\dfrac{p^r}{p^r+(1-p)^r} - p\right)x_1^r x_2^{-r}}{x_2^r - x_1^{2r}x_2^{-r}}. \end{cases}$$
Now we can plug this into our solution:
$$u_0 = C_1 x_1^0 + C_2 x_2^0 + p = \frac{\left(\frac{(1-p)^r}{p^r+(1-p)^r}-p\right)x_1^r x_2^r - \left(\frac{p^r}{p^r+(1-p)^r}-p\right)x_1^r x_2^{-r} + \frac{p^r}{p^r+(1-p)^r} - p + \left(p-\frac{(1-p)^r}{p^r+(1-p)^r}\right)x_1^{2r}}{x_2^r - x_1^{2r}x_2^{-r}} + p$$
$$= \frac{\left(x_2^r - x_1^r\right)\left(\frac{p^r}{p^r+(1-p)^r} - p - \left(p-\frac{(1-p)^r}{p^r+(1-p)^r}\right)x_1^r x_2^r\right)}{x_2^{2r} - x_1^{2r}} + p = \frac{\frac{p^r}{p^r+(1-p)^r} - p - \left(p-\frac{(1-p)^r}{p^r+(1-p)^r}\right)\left(\frac{1-p}{p}\right)^r}{x_1^r + x_2^r} + p,$$
where in the third equality we multiplied the numerator and the denominator by $x_2^r \neq 0$, and in the last one we used the fact that
$$x_1 x_2 = \frac{1}{4}\left(\frac{1}{\delta^2 p^2} - \frac{1 - 4p\delta^2 + 4p^2\delta^2}{\delta^2 p^2}\right) = \frac{1-p}{p}.$$
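The closed form for $u_0$ can be sanity-checked against a direct solution of the recurrence with the stated boundary conditions. A minimal sketch (the parameter values $p$, $\delta$, $r$ are illustrative, not from the text):

```python
import math

# Parameter values are illustrative, not from the text.
p, delta, r = 0.6, 0.9, 3

# Roots of the characteristic equation delta*p*x^2 - x + delta*(1-p) = 0.
disc = math.sqrt(1 - 4 * p * delta**2 + 4 * p**2 * delta**2)
x1 = (1 - disc) / (2 * delta * p)
x2 = (1 + disc) / (2 * delta * p)

# Closed form for u_0 derived above.
P = p**r + (1 - p)**r
u0_closed = (p**r / P - p - (p - (1 - p)**r / P) * ((1 - p) / p)**r) / (x1**r + x2**r) + p

# Direct check: solve u_n = (1-delta)p + delta(p u_{n+1} + (1-p) u_{n-1})
# with boundary values u_{-r} = (1-p)^r/P and u_r = p^r/P by fixed-point
# iteration (a contraction, since delta < 1).
u = {n: 0.5 for n in range(-r, r + 1)}
u[-r], u[r] = (1 - p)**r / P, p**r / P
for _ in range(5000):
    for n in range(-r + 1, r):
        u[n] = (1 - delta) * p + delta * (p * u[n + 1] + (1 - p) * u[n - 1])

assert abs(u[0] - u0_closed) < 1e-9
# Product of the roots equals (1-p)/p, as used in the last step above.
assert abs(x1 * x2 - (1 - p) / p) < 1e-12
```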
What is left to explain is why $u_L(0.5)$ is the same as $u_H(0.5)$, which saves us from solving the second, analogous system. Before, in state $High$, we had a random walk with a drift $p$ towards the boundary with the higher utility, level $r$. When we condition on the state being $Low$, we have the same drift but in the opposite direction, towards level $-r$. But notice that the utility of level $-r$ in state $Low$ and the utility of level $r$ in state $High$ are equal to each other. The same is true for the other two boundary utilities. Moreover, in both states we have the same underlying lattice for the random walk. Therefore, problem (3.7) is the same problem as (3.6) up to renaming the levels. This concludes the proof.
Proof of Lemma 14. This is a classic Gambler's ruin problem (Feller, 1968).
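For intuition, the boundary values $u_{\pm r}$ used above coincide with the classic gambler's-ruin hitting probability: a $\pm 1$ walk with up-probability $p$ started midway between the barriers hits $+r$ before $-r$ with probability $p^r/(p^r + (1-p)^r)$. A Monte Carlo sketch (seed and parameter values illustrative, not from the text):

```python
import random

random.seed(0)
p, r = 0.6, 2  # illustrative values, not from the text

# A +1/-1 random walk with up-probability p started at 0 hits +r before -r
# with probability p^r / (p^r + (1-p)^r) -- the gambler's ruin formula.
predicted = p**r / (p**r + (1 - p)**r)

trials = 100_000
hits = 0
for _ in range(trials):
    pos = 0
    while -r < pos < r:
        pos += 1 if random.random() < p else -1
    hits += pos == r

assert abs(hits / trials - predicted) < 0.01
```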
Proof of Proposition 15. For notational simplicity we will write $u(n)$ instead of $u(\pi(n))$.
We know that the social planner adopts a symmetric strategy of stopping at levels $r$ or $-r$. Suppose that when we increase the stopping boundaries from $r$ to $r+1$ and, correspondingly, from $-r$ to $-r-1$, the utility at level $r$, $u(r)$, increases.
Notice that the utility at level 0, given this strategy, is equal to the utility at the boundary levels multiplied by the expected discount factor, plus some utility that we collect on the way. The randomness comes from the time of hitting the boundary levels. We divide this expression into two parts, for the levels $r$ and $-r$:
$$u(0.5) = c_{-r} + u(-r)\,\mathbb{E}_{-r}(\delta^t) + c_r + u(r)\,\mathbb{E}_r(\delta^t),$$
where $c_{-r}$ and $c_r$ are constants and $\mathbb{E}_{\pm r}(\delta^t)$ is the expected discount factor until we hit the corresponding boundary.
Recall that $u$ is symmetric, so $u(-r) = u(r)$; therefore, if we increase $u(r)$ then we also increase $u(0.5)$, and vice versa. This concludes the proof.
Patient planner
Now we would like to see what happens to the optimal $r^*$ and the expected utility when the social planner becomes more patient. In other words, how do the optimal $r^*$ and $u(0.5)$ behave as $\delta \to 1$? We first establish a condition for when $r^*$ does not go to $\infty$ as $\delta \to 1$ in Proposition 16: the precision also has to go to 1 at a certain speed in this case. Second, we look at our expected utility as $\delta$ increases but $p$ stays fixed in Proposition 17. There, we stumble upon the $(1-p)/p$ factor for the third time.
Proof of Proposition 16. Recall that the expected utility when we start with prior $1/2$ (at level $r$) and stop the random walk upon arrival at levels $2r$ or $0$ is
$$u(r) = \frac{-\left(p - \dfrac{(1-p)^r}{(1-p)^r+p^r}\right)\left(\dfrac{1-p}{p}\right)^r + \dfrac{p^r}{p^r+(1-p)^r} - p}{x_1^r + x_2^r} + p,$$
where $x_1 < x_2$ and
$$x_{1,2} = \frac{1}{2}\left(\frac{1}{\delta p} \mp \sqrt{\frac{1}{(\delta p)^2} - \frac{4}{p} + 4}\right).$$
In order to find bounds on the optimal $r \in \mathbb{R}$, we are going to focus on the first term of $u(r)$, as the second one is just a constant that does not affect the optimal $r$, and let $p = 1-\varepsilon$ and $\delta = 1-\gamma$, where $\varepsilon, \gamma \to 0$.
Notice that $u(r)$ is single-peaked in $r$; therefore, if $r$ maximizes it over $\mathbb{R}$ and $r^*$ maximizes it over $\mathbb{N}$, then $r^*$ is either $\lceil r \rceil$ or $\lfloor r \rfloor$. Thus, bounds on $r$ are going to give very tight bounds on $r^*$. First, consider $x_i$:
$$x_i = \frac{1}{2}\cdot\frac{1 \pm \sqrt{1 - 4\delta^2 p + 4p^2\delta^2 - 4p\delta + 4p\delta}}{\delta p} = \frac{1}{2}\cdot\frac{1 \pm \sqrt{(2p\delta - 1)^2 + 4p\delta(1-\delta)}}{\delta p} = \frac{1}{2}\cdot\frac{1 \pm \left(2p\delta - 1 + \dfrac{2p\delta\gamma}{2p\delta - 1}\right) + o(\gamma)}{\delta p},$$
where the third equality comes from the Taylor series expansion of $\sqrt{a^2 + x}$ around $x = 0$. Therefore,
$$x_1 = \frac{1}{\delta p} - 1 - O(\gamma), \qquad x_2 = 1 + O(\gamma).$$
Now we can take the derivative of $u$.
$$u'(r) = \frac{\left(-\dfrac{(1-p)^r p^r \ln\frac{p}{1-p}}{\left((1-p)^r+p^r\right)^2}\left(\dfrac{1-p}{p}\right)^r + F_1\left(\dfrac{1-p}{p}\right)^r\ln\dfrac{p}{1-p} + \dfrac{(1-p)^r p^r \ln\frac{p}{1-p}}{\left((1-p)^r+p^r\right)^2}\right)\left(x_1^r + x_2^r\right) - \left(x_1^r\ln x_1 + x_2^r\ln x_2\right)F_2}{\left(x_1^r + x_2^r\right)^2},$$
where
$$p \ge F_1 = -\frac{(1-p)^r}{(1-p)^r+p^r} + p \ge 2p-1, \qquad 1-p \ge F_2 = -F_1\left(\frac{1-p}{p}\right)^r + \frac{p^r}{p^r+(1-p)^r} - p \ge 0.^1$$
In order to satisfy the first-order condition, the numerator should be equal to zero:
$$\left(-\frac{(1-p)^r p^r \ln\frac{p}{1-p}}{\left((1-p)^r+p^r\right)^2}\left(\frac{1-p}{p}\right)^r + F_1\left(\frac{1-p}{p}\right)^r\ln\frac{p}{1-p} + \frac{p^r(1-p)^r \ln\frac{p}{1-p}}{\left((1-p)^r+p^r\right)^2}\right)\left(x_1^r + x_2^r\right) - \left(x_1^r\ln x_1 + x_2^r\ln x_2\right)F_2 = 0.$$
Notice that the left-hand side is smaller than
$$F_1\left(\frac{1-p}{p}\right)^r\ln\frac{p}{1-p}\,\bigl(c_1 x_2^r\bigr) - \bigl(x_2^r\ln(1+O(\gamma))\bigr)F_2 \le c_1 F_1\left(\frac{1-p}{p}\right)^r\ln\frac{p}{1-p}\,x_2^r - c\,x_2^r\,\gamma\,F_2,$$
where the constants $c, c_1 > 1$. In order for this to be non-negative, $r$ needs to satisfy the following constraint:
$$r \ge \ln\!\left(\frac{c_2\gamma}{c_1 F_1 \ln\bigl(p/(1-p)\bigr)}\right)\Big/\ln\frac{1-p}{p} \ge c_2\,\frac{\ln\gamma}{\ln\varepsilon},$$
where $c_2$ goes to $1$ as $\gamma$ and $\varepsilon$ go to $0$.
At the same time, notice that the left-hand side is bigger than
$$F_1\left(\frac{1-p}{p}\right)^r\ln\frac{p}{1-p}\,x_2^r - c\,x_2^r\,\gamma\,c_3 F_2,$$
where $c_3 > 1$. In order for this to be non-positive, $r$ should satisfy the following constraint:
$$r \le \ln\!\left(\frac{\gamma\,c_3 F_2}{c_1\ln\bigl(p/(1-p)\bigr)}\right)\Big/\ln\frac{1-p}{p}.$$
$^1$As $p$ increases, $F_1$ gets very close to $p$ and $F_2$ to $1-p$.
Notice that if $\ln(c_3\gamma)$ is smaller than, or proportional to, $\ln\bigl(-\ln((1-p)/p)\bigr)$, then $r$ is always finite and the lower bound is satisfied when $p, \delta \to 1$, as
$$\frac{\ln\left(-\ln\frac{1-p}{p}\right)}{\ln\frac{1-p}{p}} \to 0.$$
Otherwise, we get that
$$r \le \frac{\ln\gamma}{k_1\ln\varepsilon},$$
for some constant $k_1$. Moreover, if $r$ satisfies these constraints, then $r^*$ satisfies
$$\frac{\ln\gamma}{k_1\ln\varepsilon} \ge r^* \ge k_2\,\frac{\ln\gamma}{\ln\varepsilon}.$$
This concludes the proof.
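The root asymptotics $x_1 = 1/(\delta p) - 1 - O(\gamma)$ and $x_2 = 1 + O(\gamma)$ used in this proof can be checked numerically. A minimal sketch with an illustrative precision value:

```python
import math

p = 0.9  # illustrative precision, not from the text

for gamma in [1e-2, 1e-3, 1e-4]:
    delta = 1 - gamma
    # Roots of delta*p*x^2 - x + delta*(1-p) = 0.
    disc = math.sqrt(1 - 4 * p * delta**2 + 4 * p**2 * delta**2)
    x1 = (1 - disc) / (2 * delta * p)
    x2 = (1 + disc) / (2 * delta * p)
    # x_2 = 1 + O(gamma) and x_1 = 1/(delta p) - 1 - O(gamma).
    assert abs(x2 - 1) < 10 * gamma
    assert abs(x1 - (1 / (delta * p) - 1)) < 10 * gamma
```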
Proof of Proposition 17. Recall that
$$u(r) = \frac{-\left(p - \dfrac{(1-p)^r}{(1-p)^r+p^r}\right)\left(\dfrac{1-p}{p}\right)^r + \dfrac{p^r}{p^r+(1-p)^r} - p}{x_1^r + x_2^r} + p,$$
with
$$x_1 = \frac{1}{(1-\gamma)p} - 1 - O(\gamma), \qquad x_2 = 1 + O(\gamma).$$
Write $N$ for the numerator of the first term. After plugging the expressions for $x_1$ and $x_2$ into $u(r)$ we get
$$u(r) - p = \frac{N}{\left(\frac{1-p}{p} - O(\gamma)\right)^r + \bigl(1 + O(\gamma)\bigr)^r} = \frac{N}{\left(\frac{1-p}{p}\right)^r - O(\gamma) + 1 + O(\gamma r)} = \frac{N}{\left(\frac{1-p}{p}\right)^r + 1 - O(\gamma r)}.$$
From the proof of Proposition 16 we know that when $p$ is fixed, $r$ behaves as $O(\ln\gamma)$. Now let us transform the first term into a clearer form:
$$u(r) - p = \frac{p^r}{p^r+(1-p)^r} - p - \frac{p^r - (1-p)^r}{p^r+(1-p)^r}\cdot\frac{\left(\frac{1-p}{p}\right)^r}{\left(\frac{1-p}{p}\right)^r + 1} - O(\gamma r).$$
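The algebraic identity behind this transformation can be spot-checked numerically. A minimal sketch over a few illustrative $(p, r)$ pairs:

```python
# T = ((1-p)/p)^r, P = p^r + (1-p)^r, and N is the numerator of u(r) - p.
# Claim: N / (T + 1) = p^r/P - p - ((p^r - (1-p)^r)/P) * T/(T + 1).
for p in [0.55, 0.7, 0.9]:
    for r in [1, 2, 5, 10]:
        P = p**r + (1 - p)**r
        T = ((1 - p) / p)**r
        N = -(p - (1 - p)**r / P) * T + p**r / P - p
        lhs = N / (T + 1)
        rhs = p**r / P - p - ((p**r - (1 - p)**r) / P) * T / (T + 1)
        assert abs(lhs - rhs) < 1e-12
```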
As $r \to \infty$ and $p > \frac{1}{2}$, $\bigl((1-p)/p\bigr)^r \to 0$. Suppose $\bigl((1-p)/p\bigr)^r = c$; then
$$\frac{p^r}{p^r + (1-p)^r} \ge 1 - c$$
and
$$\frac{p^r - (1-p)^r}{p^r+(1-p)^r}\cdot\frac{\left(\frac{1-p}{p}\right)^r}{\left(\frac{1-p}{p}\right)^r + 1} \le \frac{c}{c+1} \le c.$$
From the proof of Proposition 16, $c = O(\gamma)$. Hence,
$$u(r) - p \ge 1 - p - 2c - O(\gamma r) = 1 - p - O(\gamma\ln\gamma).$$
It means that as $r \to \infty$, $u(r)$ goes to its maximal value as $C\bigl((1-p)/p\bigr)^r\cdot r \to 0$, which is quicker than $\bigl((1-p)/p\bigr)^{ar}$ for any $a < 1$. Also, as the optimal $r^*$ increases, the absorbing beliefs move further away from $1/2$.