
A.5 Distributions with polynomial tails

In this appendix we prove Theorem 6, showing that for private log-likelihood distributions with polynomial tails, the expected time to learn is finite.

As in the setting of Theorem 6, assume that the conditional distributions of the private log-likelihood ratio satisfy

𝐺+(π‘₯) =1βˆ’ 𝑐

π‘₯π‘˜ for allπ‘₯ > π‘₯

0 (A.6)

πΊβˆ’(π‘₯) = 𝑐

(βˆ’π‘₯)π‘˜ for allπ‘₯ <βˆ’π‘₯

0 (A.7)

for someπ‘₯

0> 0.
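For concreteness, here is one family of signal distributions satisfying (A.1) together with (A.6) and (A.7); it is offered purely as an illustration and plays no role in the argument. Take the private log-likelihood ratio under $\theta = +1$ to have density
$$g_+(x) = f(|x|)\, e^{x/2}, \qquad \text{where } f(r) = \frac{ck}{r^{k+1}}\, e^{-r/2} \text{ for } r > x_0,$$
with $f$ extended to $[0, x_0]$ by any positive continuous choice, and set $g_-(x) = e^{-x} g_+(x)$. Then $g_+(x) = g_-(-x)$, so the model is symmetric; for $x > x_0$ one gets $g_+(x) = ck\, x^{-k-1}$ and hence $G_+(x) = 1 - c x^{-k}$, and for $x < -x_0$ one gets $g_-(x) = ck\, |x|^{-k-1}$ and hence $G_-(x) = c |x|^{-k}$. The constant $c$ and the values of $f$ on $[0, x_0]$ are chosen so that $g_+$ (equivalently $g_-$) integrates to one.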

We remind the reader that we denote by $\ell^*_t$ the log-likelihood ratio of the public belief that results when the first $t-1$ agents take action $+1$. It follows from Theorem 3 that in this setting, $\ell^*_t$ behaves asymptotically as $t^{1/(k+1)}$. Notice also that, by the symmetry of the model, the log-likelihood ratio of the public belief that results when the first $t-1$ agents take action $-1$ is $-\ell^*_t$.
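As a heuristic for where the $t^{1/(k+1)}$ rate comes from (this is only a sketch under the stated tail assumptions; the precise statement is Theorem 3): along a run of $+1$ actions the public belief evolves as $\ell^*_{t+1} = \ell^*_t + D_+(\ell^*_t)$, where we take $D_+(x) = \log\frac{1 - G_+(-x)}{1 - G_-(-x)}$, the Bayesian update after a correct action, by analogy with the expression for $D_-$ used in the proof of Lemma 37 below. For large $x$, $G_+(-x)$ vanishes while $1 - G_-(-x) = 1 - c x^{-k}$ by (A.7), so
$$D_+(x) \approx -\log\!\left(1 - c x^{-k}\right) \approx c x^{-k}.$$
Approximating the recursion by the differential equation $\mathrm{d}\ell/\mathrm{d}t = c\,\ell^{-k}$ yields $\ell^*_t \approx \big((k+1)\, c\, t\big)^{1/(k+1)}$, i.e., growth of order $t^{1/(k+1)}$.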

We begin with the simple observation that a strong enough bound on the probability of mistake is sufficient to show that the expected time to learn is finite. Formally, we have the following lemma. We remind the reader that $\mathbb{P}_+(\cdot)$ is shorthand for $\mathbb{P}(\cdot \mid \theta = +1)$.

Lemma 36. Suppose there exist $k, \varepsilon > 0$ such that for all $t \geq 1$, $\mathbb{P}_+(a_t = -1) < k \cdot \frac{1}{t^{2+\varepsilon}}$. Then $\mathbb{E}_+(T_L)$ is finite.

Proof. Since $T_L = t$ only if $a_{t-1} = -1$, we have $\mathbb{P}_+(T_L = t) \leq \mathbb{P}_+(a_{t-1} = -1)$. Thus
$$\mathbb{E}_+(T_L) = \sum_{t=1}^{\infty} t \cdot \mathbb{P}_+(T_L = t) \leq \mathbb{P}_+(T_L = 1) + \sum_{t=2}^{\infty} t \cdot \mathbb{P}_+(a_{t-1} = -1) \leq 1 + k \sum_{t=2}^{\infty} \frac{t}{(t-1)^{2+\varepsilon}} < \infty,$$
where the last series converges because $t/(t-1)^{2+\varepsilon} \leq 2 (t-1)^{-(1+\varepsilon)}$ for all $t \geq 2$.
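As a quick numerical sanity check of the last step (purely illustrative, with an arbitrary choice of $\varepsilon$), the partial sums of $\sum_{t \geq 2} t/(t-1)^{2+\varepsilon}$ indeed stabilize:

# Illustrative check: partial sums of sum_{t>=2} t/(t-1)^(2+eps) stabilize
# for a fixed eps > 0 (here eps = 1.0, an arbitrary choice), so the bound in
# Lemma 36 yields a finite expectation.
eps = 1.0
for cutoff in (10**2, 10**3, 10**4, 10**5):
    partial = sum(t / (t - 1) ** (2 + eps) for t in range(2, cutoff + 1))
    print(cutoff, round(partial, 6))
# The printed partial sums approach a finite limit (about 2.847 for eps = 1).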

Accordingly, this section will be primarily devoted to studying the rate of decay of the probability of mistake, $\mathbb{P}_+(a_t = -1)$. In order to bound this probability, we will need to make use of the following lemmas, which give some control over how the public belief is updated following an upset.

Lemma 37. For $G_+$ and $G_-$ as in (A.6) and (A.7), $|\ell_{t+1}| \leq |\ell_t|$ whenever $|\ell_t|$ is sufficiently large and $a_t \neq a_{t+1}$.

Proof. Assume without loss of generality that $a_t = +1$ and $a_{t+1} = -1$, so that
$$\ell_{t+1} = \ell_t + D_-(\ell_t).$$
Thus, to prove the claim we compute a bound for $D_-$. To do so we first obtain a bound for the left tail of $G_+$. By assumption, for $x > x_0$ (with $x_0$ as in (A.6) and (A.7)),
$$g_-(-x) = G_-'(-x) = \frac{c k}{x^{k+1}},$$
and so by (A.1),
$$g_+(-x) = e^{-x} g_-(-x) = \frac{c k\, e^{-x}}{x^{k+1}}.$$
Hence,
$$G_+(-x) = \int_{-\infty}^{-x} g_+(\zeta)\, \mathrm{d}\zeta = \int_{-\infty}^{-x} \frac{c k\, e^{\zeta}}{(-\zeta)^{k+1}}\, \mathrm{d}\zeta = c k \int_x^{\infty} \zeta^{-k-1} e^{-\zeta}\, \mathrm{d}\zeta.$$
For $\zeta$ sufficiently large, $\zeta^{-k-1}$ is at least, say, $e^{-0.1\zeta}$. Thus, for $x$ sufficiently large,
$$G_+(-x) \geq c k \int_x^{\infty} e^{-1.1\zeta}\, \mathrm{d}\zeta = \frac{c k}{1.1} e^{-1.1 x}.$$
It follows that for $x$ sufficiently large,
$$D_-(x) = \log\frac{G_+(-x)}{G_-(-x)} \geq \log\frac{k}{1.1} - 1.1 x + k \log x \geq -1.2 x.$$
Thus, for $\ell_t$ sufficiently large,
$$\ell_{t+1} = \ell_t + D_-(\ell_t) = \ell_t + \log\frac{G_+(-\ell_t)}{G_-(-\ell_t)} \geq \ell_t + 1.2(-\ell_t) = -0.2\,\ell_t,$$
so in particular $|\ell_{t+1}| < |\ell_t|$.
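The following is a small numerical illustration of the lemma (not part of the proof): for a hypothetical choice of tail parameters, say $c = 1$, $k = 2$, $x_0 = 1$, one can compute $G_+(-x) = ck\int_x^\infty \zeta^{-k-1} e^{-\zeta}\,\mathrm{d}\zeta$ by quadrature and check that after an upset against a large public belief $x$, the new belief $x + D_-(x)$ is far smaller in absolute value. The proof's constants $1.1$ and $1.2$ only matter asymptotically; the contraction is already visible at moderate $x$.

from math import exp, log
from scipy.integrate import quad

c, k = 1.0, 2.0  # hypothetical tail parameters; x_0 = 1 in (A.6)-(A.7)

def G_plus_left_tail(x):
    # G_+(-x) = c*k * integral_x^infinity zeta^(-k-1) * exp(-zeta) d zeta, for x > x_0.
    # The integrand decays like exp(-zeta), so truncating at x + 60 is harmless here.
    val, _ = quad(lambda z: z ** (-k - 1) * exp(-z), x, x + 60.0)
    return c * k * val

def D_minus(x):
    # D_-(x) = log(G_+(-x) / G_-(-x)), with G_-(-x) = c * x^(-k) for x > x_0.
    return log(G_plus_left_tail(x)) - log(c * x ** (-k))

for x in (10.0, 20.0, 30.0, 50.0):
    print(x, round(x + D_minus(x), 3))
# The values of x + D_-(x) stay small in absolute value, illustrating that
# |ell_{t+1}| < |ell_t| after an upset at a large public belief, as Lemma 37 asserts.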

We will make use of the following lemma, which bounds the range of possible values that $\ell_t$ can take.

Lemma 38. For $G_+$ and $G_-$ as in (A.6) and (A.7), there exists an $M > 0$ such that for all $t \geq 0$, $|\ell_s| \leq M \cdot \ell^*_t$ for all $s \leq t$.

Proof. For each $\tau \geq 0$, define
$$M_\tau = \max \frac{|\ell_\tau|}{\ell^*_\tau},$$
where the maximum is taken over all outcomes. Note that there are at most $2^\tau$ possible values for this expression, so $M_\tau$ is well-defined and finite. Put
$$M = \sup_{\tau \geq 0} M_\tau.$$
To establish the claim, we must show that $M$ is finite. To do this, it suffices to show that for $\tau$ sufficiently large, $M_{\tau+1} \leq M_\tau$.

Now, let $u_+(x) = x + D_+(x)$ and $u_-(x) = x + D_-(x)$. Then, as shown in the section about the model, whenever agent $\tau$ takes action $+1$, $\ell_{\tau+1} = u_+(\ell_\tau)$, and whenever agent $\tau$ takes action $-1$, $\ell_{\tau+1} = u_-(\ell_\tau)$.

By Lemma 31, $u_+$ and $u_-$ are eventually monotonic. Thus, there exists $x_0 > 0$ such that $u_+$ is monotone increasing on $(x_0, \infty)$ and $u_-$ is monotone decreasing on $(-\infty, -x_0)$.

For $\tau$ sufficiently large, $\ell^*_\tau > x_0$. Further, it follows from Lemma 37 that for $\tau$ sufficiently large, $|\ell_{\tau+1}| < |\ell_\tau|$ whenever $a_\tau \neq a_{\tau+1}$ and $|\ell_\tau| > |\ell^*_\tau|$.

Let $(a_\tau)$ be any sequence of actions with $\frac{|\ell_{\tau+1}|}{\ell^*_{\tau+1}} = M_{\tau+1}$. If $a_\tau \neq a_{\tau+1}$, then
$$M_{\tau+1} = \frac{|\ell_{\tau+1}|}{\ell^*_{\tau+1}} \leq \frac{|\ell_\tau|}{\ell^*_\tau} \leq M_\tau.$$
If $a_\tau = a_{\tau+1}$, then either $M_{\tau+1} = 1$, in which case $M_{\tau+1} \leq M_\tau$ (note that $M_\tau \geq 1$, since the outcome in which the first $\tau - 1$ agents all take action $+1$ gives $\ell_\tau = \ell^*_\tau$), or $M_{\tau+1} > 1$.

If $M_{\tau+1} > 1$, then since $|D_+|$ and $|D_-|$ are decreasing on $(x_0, \infty)$ and $(-\infty, -x_0)$ respectively,
$$\frac{|\ell_{\tau+1} - \ell_\tau|}{|\ell_\tau|} \leq \frac{|\ell^*_{\tau+1} - \ell^*_\tau|}{|\ell^*_\tau|}.$$
So
$$M_{\tau+1} = \frac{|\ell_{\tau+1}|}{\ell^*_{\tau+1}} = \frac{|\ell_\tau| + |\ell_{\tau+1} - \ell_\tau|}{\ell^*_\tau + |\ell^*_{\tau+1} - \ell^*_\tau|},$$
where the second equality follows from the fact that $\ell_\tau$ and $\ell_{\tau+1}$ have the same sign. Finally,
$$M_{\tau+1} = \frac{|\ell_\tau|}{\ell^*_\tau} \cdot \frac{1 + |\ell_{\tau+1} - \ell_\tau|/|\ell_\tau|}{1 + |\ell^*_{\tau+1} - \ell^*_\tau|/\ell^*_\tau} \leq \frac{|\ell_\tau|}{\ell^*_\tau} \leq M_\tau.$$
Thus, for all sufficiently large $\tau$, $M_{\tau+1} \leq M_\tau$.

Proposition 39. There exists $\kappa > 0$ such that $\mathbb{P}_+(a_t = -1) < \kappa\, t^{-2.1}$ for all $t > 0$.

Proof. Let $\beta = -2.1/\log\gamma$, where $\gamma$ is as in Proposition 7. To carry out our analysis, we will divide the event that $a_t = -1$ into three disjoint events and bound each of them separately:
$$A = (a_t = -1) \text{ and } (\Xi_t > \beta \log t),$$
$$B_1 = (a_t = -1) \text{ and } (\Xi_t \leq \beta \log t) \text{ and } \big(|\{s : s < t,\ a_s = +1\}| \geq \tfrac{1}{2} t\big),$$
$$B_2 = (a_t = -1) \text{ and } (\Xi_t \leq \beta \log t) \text{ and } \big(|\{s : s < t,\ a_s = +1\}| < \tfrac{1}{2} t\big).$$
First, by Corollary 34 we have a bound for $\mathbb{P}_+(A)$:
$$\mathbb{P}_+(A) \leq c \cdot \frac{1}{t^{2.1}}.$$

Next, we bound $\mathbb{P}_+(B_1)$. This is the event that the number of upsets so far is small and the majority of agents so far have taken the correct action.

Since there are at most $\beta \log t$ upsets, there are at most $\frac{1}{2}\beta \log t$ maximal good runs. Since, furthermore, there are at least $\frac{1}{2} t$ agents who take action $+1$, there is at least one maximal good run of length at least $t/(\beta \log t)$.

Thus, $\mathbb{P}_+(B_1)$ is bounded from above by the probability that there are some $s_1 < s_2 < t$ such that there is a good run of length $s_2 - s_1 \geq t/(\beta \log t)$ from $s_1$ and $a_{s_2} = -1$.

For fixed $s_1, s_2$, denote by $E_{s_1, s_2}$ the event that there is a good run of length $s_2 - s_1$ from $s_1$. Denote by $\Gamma_{s_1, s_2}$ the event $(E_{s_1, s_2},\ a_{s_2} = -1)$. Then
$$\mathbb{P}_+(\Gamma_{s_1, s_2}) = \mathbb{P}_+(a_{s_2} = -1 \mid E_{s_1, s_2}) \cdot \mathbb{P}_+(E_{s_1, s_2}) \leq \mathbb{P}_+(a_{s_2} = -1 \mid E_{s_1, s_2}).$$
By Lemma 35, there exists a $z > 0$ such that $E_{s_1, s_2}$ implies that $\ell_{s_2} \geq \ell^*_{s_2 - s_1 - z}$. Therefore,
$$\mathbb{P}_+(\Gamma_{s_1, s_2}) \leq G_+(-\ell^*_{s_2 - s_1 - z}).$$
Since for $t$ sufficiently large $\ell^*_t > t^{1/(k+2)}$, and since $G_+(-x) \leq e^{-x}$ by (A.1), there is a constant $\alpha > 0$ such that
$$\mathbb{P}_+(\Gamma_{s_1, s_2}) \leq e^{-\alpha (s_2 - s_1 - z)^{1/(k+2)}} \leq e^{-\alpha (t/(\beta \log t) - z)^{1/(k+2)}}.$$
To simplify, we further bound this last expression to arrive at, for some $c > 0$,
$$\mathbb{P}_+(\Gamma_{s_1, s_2}) \leq c\, e^{-t^{1/(k+3)}}$$
for all $t$. Since $B_1$ is covered by fewer than $t^2$ events of the form $\Gamma_{s_1, s_2}$ (as $s_1$ and $s_2$ are less than $t$), it follows that
$$\mathbb{P}_+(B_1) < c\, t^2 e^{-t^{1/(k+3)}} < \frac{1}{t^{2.1}}$$
for all $t$ large enough.
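(To see the last inequality, which the text leaves implicit: taking logarithms, $c\, t^2 e^{-t^{1/(k+3)}} < t^{-2.1}$ is equivalent to $\log c + 4.1 \log t < t^{1/(k+3)}$, which holds for all $t$ large enough because $t^{1/(k+3)}/\log t \to \infty$.)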

Finally, we bound $\mathbb{P}_+(B_2)$. This is the event that the number of upsets so far is small and the majority of agents so far have taken the wrong action. As in $B_1$, there is a maximal bad run of length at least $t/(\beta \log t)$.

Denote by $R$ the event that there is at least one bad run of length $t/(\beta \log t)$ before time $t$, and by $R_s$ the event that agents $s$ through $s + t/(\beta \log t) - 1$ take action $-1$.

Since $B_2$ is contained in $R$, and since $R$ is contained in the union $\cup_{s=1}^{t} R_s$, we have that
$$\mathbb{P}_+(B_2) \leq \mathbb{P}_+(R) \leq \sum_{s=1}^{t} \mathbb{P}_+(R_s).$$
Taking the maximum of all the addends on the right-hand side, we can further bound the probability of $B_2$:
$$\mathbb{P}_+(B_2) \leq t \cdot \max_{1 \leq s \leq t} \mathbb{P}_+(R_s).$$

Conditioned on $\ell_s$, the probability of $R_s$ is
$$\mathbb{P}_+(R_s \mid \ell_s) = \prod_{r=s}^{s + t/(\beta \log t) - 1} G_+(-\ell_r).$$
By Lemma 38, there exists $M > 0$ such that $|\ell_r| \leq M \ell^*_t$ for all $r \leq t$. Therefore, since $G_+$ is monotone,
$$\mathbb{P}_+(R_s) \leq G_+(M \ell^*_t)^{t/(\beta \log t)}.$$
It follows that
$$\mathbb{P}_+(B_2) \leq t \cdot G_+(M \ell^*_t)^{t/(\beta \log t)}.$$
Since $G_+(x) = 1 - c \cdot x^{-k}$ for $x$ large enough, and since $\ell^*_t$ is asymptotically at most $t^{1/(k+0.5)}$, we have (using $\log(1-y) \leq -y$) that
$$\log G_+(M \ell^*_t) \leq -c M^{-k} \cdot t^{-k/(k+0.5)}.$$
Thus
$$\mathbb{P}_+(B_2) \leq t \cdot \exp\!\left(-c M^{-k} \cdot t^{1/(2k+1)} / (\beta \log t)\right) \leq t^{-2.1}$$
for all $t$ large enough. This concludes the proof, because
$$\mathbb{P}_+(a_t = -1) = \mathbb{P}_+(A) + \mathbb{P}_+(B_1) + \mathbb{P}_+(B_2) \leq \kappa \frac{1}{t^{2.1}}$$
for some constant $\kappa$.

Given this bound on the probability of mistakes, the proof of the main theorem of this section follows easily from Lemma 36.

Proof of Theorem 6. By Proposition 39, there exists $\kappa > 0$ such that $\mathbb{P}(a_t = -1 \mid \theta = +1) < \kappa \frac{1}{t^{2.1}}$ for all $t \geq 1$. Hence, by Lemma 36, $\mathbb{E}(T_L \mid \theta = +1) < \infty$. By a symmetric argument the same holds conditioned on $\theta = -1$. Thus, the expected time to learn is finite.

Appendix B

APPENDIX TO CHAPTER 3
