Chapter IV: Weak and Strong Ties in Social Network
A.5 Distributions with polynomial tails
In this appendix we prove Theorem 6, showing that for private log-likelihood distributions with polynomial tails, the expected time to learn is finite.
As in the setting of Theorem 6, assume that the conditional distributions of the private log-likelihood ratio satisfy
$$G^+(x) = 1 - \frac{c}{x^p} \quad \text{for all } x > x_0 \tag{A.6}$$
$$G^-(x) = \frac{c}{(-x)^p} \quad \text{for all } x < -x_0 \tag{A.7}$$
for some $x_0 > 0$.

We remind the reader that we denote by $\ell^*_t$ the log-likelihood ratio of the public belief that results when the first $t-1$ agents take action $+1$. It follows from Theorem 3 that in this setting, $\ell^*_t$ behaves asymptotically as $t^{1/(p+1)}$. Notice also that, by the symmetry of the model, the log-likelihood ratio of the public belief that results when the first $t-1$ agents take action $-1$ is $-\ell^*_t$.
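To build intuition for this $t^{1/(p+1)}$ rate, the following is an illustrative numerical sketch, outside the formal argument, with all constants set to $1$ and $p = 2$ chosen arbitrarily. For large public log-likelihood ratio $\ell$, the probability of action $+1$ is $1 - G^+(-\ell) \approx 1$ under $\theta = +1$ and $1 - G^-(-\ell) = 1 - c\ell^{-p}$ under $\theta = -1$ by (A.7), so each correct action increases $\ell$ by roughly $c\ell^{-p}$; the resulting recursion grows like $((p+1)t)^{1/(p+1)}$.

```python
# Heuristic sketch (not part of the proof): for large public log-likelihood
# ratio l, the probability of action +1 is 1 - G^+(-l) ~ 1 under theta = +1
# and 1 - G^-(-l) = 1 - c/l^p under theta = -1 by (A.7), so a correct action
# moves l up by log[(1 - G^+(-l)) / (1 - G^-(-l))] ~ c * l^(-p).
# With constants set to 1, the recursion l_{t+1} = l_t + l_t^(-p) grows like
# ((p + 1) * t)^(1 / (p + 1)), matching the t^(1/(p+1)) rate quoted for l*_t.
p = 2
l = 1.0
T = 10**6
for _ in range(T):
    l += l ** (-p)

predicted = ((p + 1) * T) ** (1 / (p + 1))
ratio = l / predicted  # approaches 1 as T grows
```

After $10^6$ steps the iterate agrees with the predicted $((p+1)t)^{1/(p+1)}$ value to within a fraction of a percent, which is the behavior the continuum approximation $d\ell/dt = \ell^{-p}$ suggests.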
We begin with the simple observation that a strong enough bound on the probability of mistake is sufficient to show that the expected time to learn is finite. Formally, we have the following lemma. We remind the reader that $P^+(\cdot)$ is shorthand for $P(\cdot \mid \theta = +1)$.
Lemma 36. Suppose there exist $C, \varepsilon > 0$ such that for all $t \geq 1$, $P^+(a_t = -1) < C \cdot \frac{1}{t^{2+\varepsilon}}$. Then $E^+(t_L)$ is finite.

Proof. Since $t_L = t$ only if $a_{t-1} = -1$, we have $P^+(t_L = t) \leq P^+(a_{t-1} = -1)$. Thus
$$E^+(t_L) = \sum_{t=1}^{\infty} t \cdot P^+(t_L = t) \leq P^+(t_L = 1) + \sum_{t=2}^{\infty} t \cdot P^+(a_{t-1} = -1) \leq 1 + C \sum_{t=2}^{\infty} \frac{t}{(t-1)^{2+\varepsilon}} < \infty.$$
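As an illustrative numerical check of the final comparison, outside the formal argument: substituting $u = t - 1$ shows the series equals $\zeta(1+\varepsilon) + \zeta(2+\varepsilon)$, so for $\varepsilon = 0.1$ its partial sums increase toward a limit of roughly $12.14$.

```python
# Numerical sanity check (illustrative only) of the series in Lemma 36:
# sum_{t>=2} t / (t-1)^(2+eps) converges; substituting u = t - 1 gives
# zeta(1+eps) + zeta(2+eps), which for eps = 0.1 is about 10.58 + 1.56 = 12.14.
eps = 0.1

def partial_sum(n):
    """sum_{t=2}^{n} t / (t-1)^(2+eps)."""
    return sum(t / (t - 1) ** (2 + eps) for t in range(2, n + 1))

# Partial sums increase but stay below the limit ~12.14.
sums = [partial_sum(n) for n in (10**3, 10**4, 10**5)]
```

Convergence is slow for small $\varepsilon$ (the tail decays like $t^{-\varepsilon}$), but boundedness is all the lemma needs.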
Accordingly, this section will be primarily devoted to studying the rate of decay of the probability of mistake, $P^+(a_t = -1)$. In order to bound this probability, we will need to make use of the following lemmas, which give some control over how the public belief is updated following an upset.
Lemma 37. For $G^+$ and $G^-$ as in (A.6) and (A.7), $|\ell_{t+1}| \leq |\ell_t|$ whenever $|\ell_t|$ is sufficiently large and $a_t \neq a_{t+1}$.

Proof. Assume without loss of generality that $a_t = +1$ and $a_{t+1} = -1$, so that
$$\ell_{t+1} = \ell_t + D^-(\ell_t).$$
Thus, to prove the claim we compute a bound for $D^-$. To do so we first obtain a bound for the left tail of $G^+$. By assumption, for $x > x_0$ (with $x_0$ as in (A.6) and (A.7)),
$$g^-(-x) = (G^-)'(-x) = \frac{cp}{x^{p+1}},$$
and so by (A.1),
$$g^+(-x) = e^{-x} g^-(-x) = \frac{cp\, e^{-x}}{x^{p+1}}.$$
Hence,
$$G^+(-x) = \int_{-\infty}^{-x} g^+(y)\,dy = \int_{-\infty}^{-x} \frac{cp\, e^{y}}{(-y)^{p+1}}\,dy = cp \int_x^{\infty} y^{-p-1} e^{-y}\,dy.$$
For $y$ sufficiently large, $y^{-p-1}$ is at least, say, $e^{-0.1 y}$. Thus, for $x$ sufficiently large,
$$G^+(-x) \geq cp \int_x^{\infty} e^{-1.1 y}\,dy = \frac{cp}{1.1}\, e^{-1.1 x}.$$
It follows that for $x$ sufficiently large,
$$D^-(x) = \log \frac{G^+(-x)}{G^-(-x)} \geq \log \frac{p}{1.1} - 1.1 x + p \log x \geq -1.2 x.$$
Thus, for $\ell_t$ sufficiently large,
$$\ell_{t+1} = \ell_t + D^-(\ell_t) = \ell_t + \log \frac{G^+(-\ell_t)}{G^-(-\ell_t)} \geq \ell_t + 1.2\,(-\ell_t) = -0.2\,\ell_t,$$
so in particular $|\ell_{t+1}| < |\ell_t|$.
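As an illustrative numerical check of the tail bound above, outside the formal argument, one can verify $\int_x^\infty y^{-p-1} e^{-y}\,dy \geq \frac{1}{1.1} e^{-1.1x}$ for a concrete large $x$ (here $p = 2$ is an arbitrary choice and the common factor $cp$ is scaled out):

```python
import math

# Illustrative check (p = 2; the common factor cp is scaled out) of the
# bound in Lemma 37's proof: once x is large enough that
# y^(-p-1) >= exp(-0.1*y) holds on [x, infinity) -- for p = 2 this requires
# roughly y >= 151, and the gap only widens as y grows -- we should have
#     integral_x^infinity y^(-p-1) exp(-y) dy  >=  exp(-1.1*x) / 1.1.
p = 2
x = 160.0

def integrand(y):
    return y ** (-(p + 1)) * math.exp(-y)

# The hypothesis y^(-p-1) >= exp(-0.1*y), checked at the left endpoint.
endpoint_ok = x ** (-(p + 1)) >= math.exp(-0.1 * x)

# Trapezoidal rule on [x, x + 120]; the integrand decays like exp(-y), so the
# truncated tail beyond x + 120 is a negligible fraction of the total.
h = 0.001
n = int(120 / h)
integral = h * (integrand(x) / 2
                + sum(integrand(x + k * h) for k in range(1, n))
                + integrand(x + 120) / 2)

lower_bound = math.exp(-1.1 * x) / 1.1
```

At $x = 160$ the integral exceeds the claimed lower bound by roughly a factor of two, consistent with the slack in replacing $y^{-p-1}$ by $e^{-0.1y}$.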
We will make use of the following lemma, which bounds the range of possible values that $\ell_t$ can take.
Lemma 38. For $G^+$ and $G^-$ as in (A.6) and (A.7), there exists an $s > 0$ such that for all $t \geq 0$, $|\ell_i| \leq s \cdot \ell^*_t$ for all $i \leq t$.

Proof. For each $n \geq 0$, define
$$m_n = \max \frac{|\ell_n|}{\ell^*_n},$$
where the maximum is taken over all outcomes. Note that there are at most $2^n$ possible values for this expression, so $m_n$ is well-defined and finite. Put
$$s = \sup_{n \geq 0} m_n.$$
To establish the claim, we must show that $s$ is finite. To do this, it suffices to show that for $n$ sufficiently large, $m_{n+1} \leq m_n$.

Now, let $u^+(x) = x + D^+(x)$ and $u^-(x) = x + D^-(x)$. Then, as shown in the section about the model, whenever agent $n$ takes action $+1$, $\ell_{n+1} = u^+(\ell_n)$, and whenever agent $n$ takes action $-1$, $\ell_{n+1} = u^-(\ell_n)$.

By Lemma 31, $u^+$ and $u^-$ are eventually monotonic. Thus, there exists $x_0 > 0$ such that $u^+$ is monotone increasing on $(x_0, \infty)$ and $u^-$ is monotone decreasing on $(-\infty, -x_0)$.

For $n$ sufficiently large, $\ell^*_n > x_0$. Further, it follows from Lemma 37 that for $n$ sufficiently large, $|\ell_{n+1}| < |\ell_n|$ whenever $a_n \neq a_{n+1}$ and $|\ell_n| > |\ell^*_n|$.

Let $(a_i)$ be any sequence of actions with $|\ell_{n+1}| / \ell^*_{n+1} = m_{n+1}$. If $a_n \neq a_{n+1}$, then
$$m_{n+1} = \frac{|\ell_{n+1}|}{\ell^*_{n+1}} \leq \frac{|\ell_n|}{\ell^*_n} \leq m_n.$$
If $a_n = a_{n+1}$, then either $m_{n+1} = 1$, in which case $m_{n+1} \leq m_n$, or $m_{n+1} > 1$. If $m_{n+1} > 1$, then since $|D^+|$ and $|D^-|$ are decreasing on $(x_0, \infty)$ and $(-\infty, -x_0)$ respectively,
$$\frac{|\ell_{n+1} - \ell_n|}{|\ell_n|} \leq \frac{|\ell^*_{n+1} - \ell^*_n|}{\ell^*_n}.$$
So
$$m_{n+1} = \frac{|\ell_{n+1}|}{\ell^*_{n+1}} = \frac{|\ell_n| + |\ell_{n+1} - \ell_n|}{\ell^*_n + |\ell^*_{n+1} - \ell^*_n|},$$
where the second equality follows from the fact that $\ell_n$ and $\ell_{n+1}$ have the same sign. Finally,
$$m_{n+1} = \frac{|\ell_n|}{\ell^*_n} \cdot \frac{1 + |\ell_{n+1} - \ell_n| / |\ell_n|}{1 + |\ell^*_{n+1} - \ell^*_n| / \ell^*_n} \leq \frac{|\ell_n|}{\ell^*_n} \leq m_n.$$
Thus, for all sufficiently large $n$, $m_{n+1} \leq m_n$.
Proposition 39. There exists $C > 0$ such that $P^+(a_t = -1) < C t^{-2.1}$ for all $t > 0$.

Proof. Let $\beta = -2.1 / \log \gamma$, where $\gamma$ is as in Proposition 7, and recall that $\delta_t$ denotes the number of upsets among the first $t$ agents. To carry out our analysis, we will divide the event that $a_t = -1$ into three disjoint events and bound each of them separately:
$$A = (a_t = -1) \text{ and } (\delta_t > \beta \log t),$$
$$B_1 = (a_t = -1) \text{ and } (\delta_t \leq \beta \log t) \text{ and } \left(|\{s : s < t,\ a_s = +1\}| \geq \tfrac{1}{2} t\right),$$
$$B_2 = (a_t = -1) \text{ and } (\delta_t \leq \beta \log t) \text{ and } \left(|\{s : s < t,\ a_s = +1\}| < \tfrac{1}{2} t\right).$$
First, by Corollary 34 we have a bound for $P^+(A)$:
$$P^+(A) \leq C \cdot \frac{1}{t^{2.1}}.$$
Next, we bound $P^+(B_1)$. This is the event that the number of upsets so far is small and the majority of agents so far have taken the correct action.

Since there are at most $\beta \log t$ upsets, there are at most $\tfrac{1}{2} \beta \log t$ maximal good runs. Since, furthermore, there are at least $\tfrac{1}{2} t$ agents who take action $+1$, there is at least one maximal good run of length at least $t / (\beta \log t)$.

Thus, $P^+(B_1)$ is bounded from above by the probability that there are some $s_1 < s_2 < t$ such that there is a good run of length $s_2 - s_1 \geq t / (\beta \log t)$ from $s_1$ and $a_{s_2} = -1$.

For fixed $s_1, s_2$, denote by $E_{s_1, s_2}$ the event that there is a good run of length $s_2 - s_1$ from $s_1$. Denote by $\Phi_{s_1, s_2}$ the event $(E_{s_1, s_2},\ a_{s_2} = -1)$. Then
$$P^+(\Phi_{s_1, s_2}) = P^+(a_{s_2} = -1 \mid E_{s_1, s_2}) \cdot P^+(E_{s_1, s_2}) \leq P^+(a_{s_2} = -1 \mid E_{s_1, s_2}).$$
By Lemma 35, there exists a $z > 0$ such that $E_{s_1, s_2}$ implies that $\ell_{s_2} \geq \ell^*_{s_2 - s_1} - z$. Therefore,
$$P^+(\Phi_{s_1, s_2}) \leq G^+(-\ell^*_{s_2 - s_1} + z).$$
Since for $t$ sufficiently large $\ell^*_t > t^{1/(p+2)}$, and since $G^+(-x) \leq e^{-x}$ by (A.1), we have, for some $\alpha > 0$,
$$P^+(\Phi_{s_1, s_2}) \leq e^{-\alpha (s_2 - s_1 - z)^{1/(p+2)}} \leq e^{-\alpha (t / (\beta \log t) - z)^{1/(p+2)}}.$$
To simplify, we further bound this last expression to arrive at, for some $C > 0$,
$$P^+(\Phi_{s_1, s_2}) \leq C e^{-t^{1/(p+3)}}$$
for all $t$. Since $B_1$ is covered by fewer than $t^2$ events of the form $\Phi_{s_1, s_2}$ (as $s_1$ and $s_2$ are less than $t$), it follows that
$$P^+(B_1) < C t^2 e^{-t^{1/(p+3)}} < \frac{1}{t^{2.1}}$$
for all $t$ large enough.
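As an illustrative numerical check of the last comparison, outside the formal argument (with $C = 1$ and $p = 2$, both arbitrary choices): taking logarithms, $t^2 e^{-t^{1/(p+3)}} < t^{-2.1}$ amounts to $t^{1/5} > 4.1 \log t$, which indeed holds only once $t$ is quite large.

```python
import math

# Illustrative check (C = 1, p = 2) that t^2 * exp(-t^(1/(p+3))) eventually
# drops below t^(-2.1). In logarithms this reads
#     2*log(t) - t^(1/(p+3)) < -2.1*log(t),
# i.e. t^(1/5) > 4.1*log(t), which fails at t = 10^6 but holds at t = 10^10.
p = 2

def dominated(t):
    return 2 * math.log(t) - t ** (1 / (p + 3)) < -2.1 * math.log(t)

fails_at_small_t = not dominated(10**6)
holds_at_large_t = dominated(10**10)
```

This is why the bound on $P^+(B_1)$ is stated only "for all $t$ large enough": the stretched exponential dominates every power of $t$, but the crossover point can be far out.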
Finally, we bound $P^+(B_2)$. This is the event that the number of upsets so far is small and the majority of agents so far have taken the wrong action. As in $B_1$, there is a maximal bad run of length at least $t / (\beta \log t)$.

Denote by $R$ the event that there is at least one bad run of length $t / (\beta \log t)$ before time $t$, and by $R_j$ the event that agents $j$ through $j + t / (\beta \log t) - 1$ take action $-1$. Since $B_2$ is contained in $R$, and since $R$ is contained in the union $\cup_{j=1}^{t} R_j$, we have that
$$P^+(B_2) \leq P^+(R) \leq \sum_{j=1}^{t} P^+(R_j).$$
Taking the maximum of all the addends in the right hand side, we can further bound the probability of $B_2$:
$$P^+(B_2) \leq t \cdot \max_{1 \leq j \leq t} P^+(R_j).$$
Conditioned on $\ell_j$, the probability of $R_j$ is
$$P^+(R_j \mid \ell_j) = \prod_{i=j}^{j + t/(\beta \log t) - 1} G^+(-\ell_i).$$
By Lemma 38, there exists $s > 0$ such that $|\ell_i| \leq s\, \ell^*_t$ for all $i \leq t$. Therefore, since $G^+$ is monotone,
$$P^+(R_j) \leq G^+(s\, \ell^*_t)^{t / (\beta \log t)}.$$
It follows that
$$P^+(B_2) \leq t \cdot G^+(s\, \ell^*_t)^{t / (\beta \log t)}.$$
Since $G^+(x) = 1 - c \cdot x^{-p}$ for $x$ large enough, and since $\ell^*_t$ is asymptotically at most $t^{1/(p+0.5)}$, we have that
$$\log G^+(s\, \ell^*_t) \leq -c s^{-p} \cdot t^{-p/(p+0.5)}.$$
Thus
$$P^+(B_2) \leq t \cdot \exp\left(-c s^{-p} \cdot t^{1/(2p+1)} / (\beta \log t)\right) \leq t^{-2.1}$$
for all $t$ large enough. This concludes the proof, because
$$P^+(a_t = -1) = P^+(A) + P^+(B_1) + P^+(B_2) \leq C \frac{1}{t^{2.1}}$$
for some constant $C$.
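A small piece of exponent bookkeeping in the bound on $P^+(B_2)$, checked numerically as an illustration: raising $e^{-c s^{-p} t^{-p/(p+0.5)}}$ to the power $t / (\beta \log t)$ produces $t^{1 - p/(p+0.5)}$ in the exponent, and $1 - p/(p+0.5) = 1/(2p+1)$ identically in $p$.

```python
# Check of the exponent identity behind the bound on P+(B_2):
# t * t^(-p/(p+0.5)) = t^(1 - p/(p+0.5)) = t^(1/(2p+1)),
# since 1 - p/(p+0.5) = 0.5/(p+0.5) = 1/(2p+1) for every p > 0.
diffs = [abs((1 - p / (p + 0.5)) - 1 / (2 * p + 1))
         for p in (0.5, 1, 2, 3.7, 10, 100)]
```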
Given this bound on the probability of mistakes, the proof of the main theorem of this section follows easily from Lemma 36.
Proof of Theorem 6. By Proposition 39, there exists $C > 0$ such that $P(a_t = -1 \mid \theta = +1) < C \frac{1}{t^{2.1}}$ for all $t \geq 1$. Hence, by Lemma 36, $E(t_L \mid \theta = +1) < \infty$. By a symmetric argument, the same holds conditioned on $\theta = -1$. Thus, the expected time to learn is finite.
Appendix B