Appendix B

APPENDIX TO CHAPTER 3

B.1 Binary distributions

Let us think in terms of the likelihood ratio. As we said before, l_t is going to be multiplied by q/(1-q) a total of n times and by (1-q)/q a total of k times. These factors do not depend on l_t; therefore it also does not matter in which order we observe these actions, and the final likelihood is going to be equal to

\[
l_t \cdot \left(\frac{q}{1-q}\right)^{n} \left(\frac{1-q}{q}\right)^{k} = l_t \cdot \left(\frac{q}{1-q}\right)^{n-k}.
\]

Therefore, the public belief is a random walk on a fixed lattice, and its position is determined by the difference between the number of times we observed the High and Low signals, with l_0 = 1.
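To illustrate this order-independence concretely, here is a minimal numerical sketch (my own illustration, not part of the original argument; the function name, the value of q, and the signal sequences are arbitrary):

```python
def likelihood_ratio(signals, q, l0=1.0):
    """Multiply l_t by q/(1-q) for each High signal and by (1-q)/q for each Low signal."""
    l = l0
    for s in signals:
        l *= q / (1 - q) if s == "High" else (1 - q) / q
    return l

q = 0.7
signals = ["High", "Low", "High", "High", "Low"]    # n = 3 High, k = 2 Low
shuffled = ["Low", "High", "High", "Low", "High"]   # same signals, different order

# Both orderings give l0 * (q/(1-q))**(n-k), up to floating-point rounding:
# the position of the walk depends only on n - k.
print(likelihood_ratio(signals, q), likelihood_ratio(shuffled, q), (q / (1 - q)) ** 1)
```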

Proof of Lemma 11. First we show that once we stop learning (no prices, k_t = 1) there is no reason to re-start the learning period (LP) later. Notice that from the social planner's perspective this game is stationary. Therefore, if it is optimal to stop at time t_0 at levels N and -N, then at any time t' > t_0 there is no incentive to choose k_t different from 1, as we face the same problem as at time t_0, when it was optimal to stop.

Second, suppose we choose to stop upon arrival at either level h > 0 or level l < 0. Then the expected utility at 1/2 satisfies the following relation:

\[
u(0.5) = a_l + \mathrm{E}_l(\delta^{t})\,u(l) + a_h + \mathrm{E}_h(\delta^{t})\,u(h),
\]

where a_h and a_l correspond to the utility that we collect on the way to the corresponding boundary and do not depend on u. Let us sum the first two terms and the last two terms and choose the larger sum; if they are equal, pick either one. Say it is the second one, which corresponds to level h. Then, due to the symmetry of u, it is profitable for the social planner to choose the lower level to be -h instead of l. This concludes the proof.

Proof of Lemma 12. Suppose that at time t the public belief p_t > q and we are still acquiring information (i.e., we are in the learning period). Depending on the private signal we get, our posterior will be equal to

πœ‡π‘‘ =





ο£²





ο£³

π‘π‘‘π‘ž

π‘π‘‘π‘ž+ (1βˆ’π‘ž) (1βˆ’ 𝑝𝑑) if we get the 𝐻𝑖𝑔 β„Žsignal 𝑝𝑑(1βˆ’π‘ž)

𝑝𝑑(1βˆ’π‘ž) +π‘ž(1βˆ’ 𝑝𝑑) if we get the 𝐿 π‘œπ‘€signal.

Moreover, the former signal happens with probability p_t q + (1 - p_t)(1 - q), since with probability p_t we are in the High state and with probability 1 - p_t in the Low state. An analogous calculation gives that the latter signal occurs with probability p_t(1 - q) + q(1 - p_t). Also, remember that when we get the Low private signal we take action 0, so our expected utility is 1 - μ_t. Therefore, the expected utility today is equal to

π‘π‘‘π‘ž

(π‘π‘‘π‘ž+ (1βˆ’π‘ž) (1βˆ’ 𝑝𝑑))(π‘π‘‘π‘ž+ (1βˆ’π‘ž) (1βˆ’ 𝑝𝑑))+

+

1βˆ’ 𝑝𝑑(1βˆ’π‘ž)

(𝑝𝑑(1βˆ’π‘ž) +π‘ž(1βˆ’π‘π‘‘))

(𝑝𝑑(1βˆ’π‘ž) +π‘ž(1βˆ’ 𝑝𝑑))=π‘ž . And similarly we get the same expected utility if 𝑝𝑑 < 1βˆ’π‘ž.
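As a quick sanity check of this computation, the following sketch (my own, with arbitrary values of p_t and q) evaluates the one-period expected utility numerically and confirms that it equals q regardless of the public belief:

```python
def expected_utility(p, q):
    """One-period expected utility when the public belief is p and the signal precision is q."""
    p_high = p * q + (1 - q) * (1 - p)   # probability of a High private signal
    p_low = p * (1 - q) + q * (1 - p)    # probability of a Low private signal
    mu_high = p * q / p_high             # posterior after High (take action 1)
    mu_low = p * (1 - q) / p_low         # posterior after Low (take action 0)
    return p_high * mu_high + p_low * (1 - mu_low)

q = 0.8
print([round(expected_utility(p, q), 12) for p in (0.55, 0.81, 0.95)])  # all equal q = 0.8
```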

Now we are ready to prove the main result.

Proof of Theorem 13. For notational convenience we write u_n instead of u_H(p(n)). We are going to start by solving (3.6) and then explain why u_L(0.5) from (3.7) is the same. We know that the solution of the recurrence equation

\[
u_n = (1-\delta)\,q + \delta\bigl(q\,u_{n+1} + (1-q)\,u_{n-1}\bigr)
\]

is

\[
u_n = c_1\left(\frac{1}{2}\left(\frac{1}{\delta q} - \frac{\sqrt{1 - 4q\delta^2 + 4q^2\delta^2}}{\delta q}\right)\right)^{n}
+ c_2\left(\frac{1}{2}\left(\frac{1}{\delta q} + \frac{\sqrt{1 - 4q\delta^2 + 4q^2\delta^2}}{\delta q}\right)\right)^{n} + q,
\]

for some constants c_1 and c_2. It is also straightforward to verify that these u_n indeed satisfy our recurrence equation. Now we need to solve for c_1 and c_2 using the boundary conditions u_{-N} = (1-q)^N/((1-q)^N + q^N) and u_N = q^N/((1-q)^N + q^N). This results in the two equations

\[
\begin{cases}
\dfrac{(1-q)^N}{q^N + (1-q)^N} = c_1 a_1^{-N} + c_2 a_2^{-N} + q,\\[2ex]
\dfrac{q^N}{q^N + (1-q)^N} = c_1 a_1^{N} + c_2 a_2^{N} + q,
\end{cases}
\]

where

\[
a_i = \frac{1}{2}\left(\frac{1}{\delta q} \pm \frac{\sqrt{1 - 4q\delta^2 + 4q^2\delta^2}}{\delta q}\right).
\]

Thus,

\[
\begin{cases}
c_1 = \left(\dfrac{(1-q)^N}{q^N + (1-q)^N} - q - c_2 a_2^{-N}\right) a_1^{N},\\[2ex]
c_2\left(a_2^{N} - a_1^{2N} a_2^{-N}\right) = \dfrac{q^N}{q^N + (1-q)^N} - q + \left(q - \dfrac{(1-q)^N}{q^N + (1-q)^N}\right) a_1^{2N}.
\end{cases}
\]

It follows that

\[
\begin{cases}
c_2 = \dfrac{\dfrac{q^N}{q^N + (1-q)^N} - q + \left(q - \dfrac{(1-q)^N}{q^N + (1-q)^N}\right) a_1^{2N}}{a_2^{N} - a_1^{2N} a_2^{-N}},\\[4ex]
c_1 = \dfrac{-\left(\dfrac{q^N}{q^N + (1-q)^N} - q\right) a_1^{N} a_2^{-N} - \left(q - \dfrac{(1-q)^N}{q^N + (1-q)^N}\right) a_1^{N} a_2^{N}}{a_2^{N} - a_1^{2N} a_2^{-N}}.
\end{cases}
\]

Now we can plug this into our solution:

\[
u_0 = c_1 a_1^{0} + c_2 a_2^{0} + q = c_1 + c_2 + q,
\]

so that

\begin{align*}
u_0 - q
&= \frac{-\left(\dfrac{q^N}{q^N+(1-q)^N} - q\right) a_1^{N} a_2^{-N} - \left(q - \dfrac{(1-q)^N}{q^N+(1-q)^N}\right) a_1^{N} a_2^{N}
+ \dfrac{q^N}{q^N+(1-q)^N} - q + \left(q - \dfrac{(1-q)^N}{q^N+(1-q)^N}\right) a_1^{2N}}{a_2^{N} - a_1^{2N} a_2^{-N}}\\[1ex]
&= \bigl(a_2^{N} - a_1^{N}\bigr)\,\frac{\dfrac{q^N}{q^N+(1-q)^N} - q - \left(q - \dfrac{(1-q)^N}{q^N+(1-q)^N}\right) a_1^{N} a_2^{N}}{a_2^{2N} - a_1^{2N}}\\[1ex]
&= \frac{\dfrac{q^N}{q^N+(1-q)^N} - q - \left(q - \dfrac{(1-q)^N}{q^N+(1-q)^N}\right)\left(\dfrac{1-q}{q}\right)^{N}}{a_1^{N} + a_2^{N}},
\end{align*}

where in the second equality we multiplied the numerator and the denominator by a_2^N (note a_2 ≠ 0), and in the last one we used the fact that

\[
a_1 a_2 = \frac{1}{4}\left(\frac{1}{\delta^2 q^2} - \frac{1 - 4q\delta^2 + 4q^2\delta^2}{q^2\delta^2}\right) = \frac{1-q}{q}.
\]

What is left to explain is why u_L(0.5) is the same as u_H(0.5), which saves us from solving the second, analogous system. Before, in state High, we had a random walk with a drift q towards the boundary with the higher utility, level N. When we condition on the state being Low, we have the same drift but in the opposite direction, towards level -N. But notice that the utility of level -N in state Low and the utility of level N in state High are equal to each other. The same is true for the other two boundary utilities. Moreover, in both states we have the same underlying lattice for the random walk. Therefore, problem (3.7) is the same problem as (3.6) up to relabeling the levels. This concludes the proof.
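As an informal cross-check of the closed form for u_0 derived above (a sketch of mine; the helper names and the parameter values q = 0.7, δ = 0.95, N = 5 are arbitrary), one can also solve the boundary-value recurrence directly as a linear system and compare the value at lattice position 0:

```python
import math
import numpy as np

def u0_direct(q, d, N):
    """Solve u_n = (1-d)q + d(q u_{n+1} + (1-q) u_{n-1}) for n = -N+1..N-1
    with the boundary values used above, and return u_0."""
    size = 2 * N + 1                                  # lattice points -N..N
    A, b = np.eye(size), np.zeros(size)
    b[0] = (1 - q) ** N / ((1 - q) ** N + q ** N)     # u_{-N}
    b[-1] = q ** N / ((1 - q) ** N + q ** N)          # u_{N}
    for i in range(1, size - 1):
        A[i, i + 1] -= d * q
        A[i, i - 1] -= d * (1 - q)
        b[i] = (1 - d) * q
    return np.linalg.solve(A, b)[N]                   # value at lattice position 0

def u0_closed_form(q, d, N):
    s = math.sqrt(1 - 4 * q * d ** 2 + 4 * q ** 2 * d ** 2)
    a1, a2 = (1 - s) / (2 * d * q), (1 + s) / (2 * d * q)
    D = q ** N + (1 - q) ** N
    num = (q ** N / D - q) - (q - (1 - q) ** N / D) * ((1 - q) / q) ** N
    return num / (a1 ** N + a2 ** N) + q

print(u0_direct(0.7, 0.95, 5), u0_closed_form(0.7, 0.95, 5))  # should agree up to rounding
```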

Proof of Lemma 14. This is a classic Gambler's ruin problem (Feller, 1968).
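For concreteness, here is an illustrative sketch (mine, not a restatement of the lemma) of the classic gambler's-ruin computation cited here: for a walk that moves up with probability q and down with probability 1 - q, started at level 0 with absorbing barriers at N and -N, the standard formula gives the probability q^N/(q^N + (1-q)^N) of hitting N first.

```python
import random

def hits_upper_first(q, N, rng):
    """Run one biased random walk from 0 until it reaches N or -N."""
    pos = 0
    while abs(pos) < N:
        pos += 1 if rng.random() < q else -1
    return pos == N

q, N, trials = 0.7, 4, 200_000
rng = random.Random(0)
estimate = sum(hits_upper_first(q, N, rng) for _ in range(trials)) / trials
print(estimate, q ** N / (q ** N + (1 - q) ** N))  # the two numbers should be close
```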

Proof of Proposition 15. For notational simplicity we write u(n) instead of u(p(n)).

We know that the social planner adopts a symmetric strategy of stopping at levels N or -N. Suppose that increasing the stopping boundaries from N to N + 1 and from -N to -N - 1, correspondingly, increases the utility at level N, u(N).

Notice that the utility at level 0, given this strategy, is equal to the utility at the boundary levels multiplied by the expected discount factor, plus some utility that we collect on the way. The randomness comes from the time at which the boundary levels are hit. We split this expression into two parts, one for level N and one for -N:

𝑒(0.5) =π‘Žβˆ’π‘ +𝑒(βˆ’π‘)Eβˆ’π‘(𝛿𝑑) +π‘Žπ‘+𝑒(𝑁)E𝑁(𝛿𝑑),

whereπ‘Žβˆ’π‘,π‘Žπ‘ are constantsE±𝑁(𝛿𝑑) is the expected discounted factor until we hit the corresponding boundary.

Recall that 𝑒 is symmetric and so 𝑒(βˆ’π‘) = 𝑒(𝑁), therefore if we increase 𝑒(𝑁) then we increase𝑒(0.5)also and vice versa. This concludes the proof.

Patient planner

Now we would like to see what happens to the optimal N* and to the expected utility when the social planner becomes more patient. In other words, how do the optimal N* and u(0.5) behave as δ → 1? We first establish, in Proposition 16, a condition under which N* does not go to ∞ as δ → 1: the precision also has to go to 1 at a certain speed. Second, in Proposition 17 we look at the expected utility as δ increases while q stays fixed. There, we encounter the (1 - q)/q factor for the third time.

Proof of Proposition 16. Recall that the expected utility when we start with prior 1/2 (at level N) and stop the random walk upon arrival at levels 2N or 0 is

\[
u(N) = \frac{-\left(-\dfrac{(1-q)^N}{(1-q)^N + q^N} + q\right)\left(\dfrac{1-q}{q}\right)^{N} + \dfrac{q^N}{q^N + (1-q)^N} - q}{a_1^{N} + a_2^{N}} + q,
\]

where a_1 < a_2 and

\[
a_i = \frac{1}{2}\left(\frac{1}{\delta q} \pm \sqrt{\frac{1}{(\delta q)^2} - \frac{4}{q} + 4}\right).
\]

In order to find bounds on the optimal N ∈ ℝ we focus on the first term of u(N), call it g(N), as the second term is just a constant that does not affect the optimal N, and let q = 1 - ε, δ = 1 - γ, where ε, γ → 0.

Notice that u(N) is single-peaked in N; therefore, if N maximizes it over ℝ and N* maximizes it over ℕ, then N* is either ⌈N⌉ or ⌊N⌋. Thus, bounds on N are going to give very tight bounds on N*. First, consider a_i:

π‘Žπ‘– = 1 2

1Β±p

1βˆ’4𝛿2π‘ž+4π‘ž2𝛿2βˆ’4π‘ž 𝛿+4π‘ž 𝛿 π›Ώπ‘ž

= 1 2

1Β±p

(2π‘ž π›Ώβˆ’1)2+4π‘ž 𝛿(1βˆ’π›Ώ) π›Ώπ‘ž

= 1 2

1Β± (2π‘ž π›Ώβˆ’1+2π‘ž 𝛿𝛾 𝑐

1) +π‘œ(𝛾) π›Ώπ‘ž

, where the thirds equality comes from Taylor series expansion of

√

π‘Ž2+π‘₯around 0.

Therefore,

π‘Ž1= 1 π›Ώπ‘ž

βˆ’1βˆ’π‘‚(𝛾) π‘Ž2=1+𝑂(𝛾) Now we can take derivative of𝑔.

\[
g'(N) = \frac{\left(\dfrac{(1-q)^N\ln(1-q)\bigl((1-q)^N + q^N\bigr) - (1-q)^N\bigl((1-q)^N\ln(1-q) + q^N\ln q\bigr)}{\bigl((1-q)^N + q^N\bigr)^2}\cdot\dfrac{(1-q)^N}{q^N}
+ m_1\left(\dfrac{1-q}{q}\right)^{N}\ln\dfrac{q}{1-q}
+ \dfrac{q^N\ln q\bigl((1-q)^N + q^N\bigr) - q^N\bigl((1-q)^N\ln(1-q) + q^N\ln q\bigr)}{\bigl((1-q)^N + q^N\bigr)^2}\right)\bigl(a_1^{N} + a_2^{N}\bigr)
- \bigl(a_1^{N}\ln a_1 + a_2^{N}\ln a_2\bigr)m_2}{\bigl(a_1^{N} + a_2^{N}\bigr)^{2}},
\]

where π‘ž β‰₯ π‘š

1=

βˆ’ (1βˆ’π‘ž)𝑁 (1βˆ’π‘ž)𝑁 +π‘žπ‘

+π‘ž

β‰₯ 2π‘žβˆ’1 1βˆ’π‘ž β‰₯ π‘š

2=βˆ’

βˆ’ (1βˆ’π‘ž)𝑁 (1βˆ’π‘ž)𝑁+π‘žπ‘

+π‘ž 1βˆ’π‘ž π‘ž

𝑁 +

π‘žπ‘

π‘žπ‘ + (1βˆ’π‘ž)𝑁 βˆ’π‘ž

β‰₯ 01

In order to satisfy the first-order condition, the numerator should be equal to 0:

\[
\left(-\frac{(1-q)^N q^N \ln\frac{q}{1-q}}{\bigl((1-q)^N + q^N\bigr)^2}\cdot\frac{(1-q)^N}{q^N}
+ m_1\left(\frac{1-q}{q}\right)^{N}\ln\frac{q}{1-q}
+ \frac{q^N (1-q)^N \ln\frac{q}{1-q}}{\bigl((1-q)^N + q^N\bigr)^2}\right)\bigl(a_1^{N} + a_2^{N}\bigr)
- \bigl(a_1^{N}\ln a_1 + a_2^{N}\ln a_2\bigr)m_2 = 0.
\]

Notice that the left-hand side is smaller than

≀ π‘š

1

1βˆ’π‘ž π‘ž

𝑁 ln

π‘ž 1βˆ’π‘ž

! (𝑐

1π‘Žπ‘

2) βˆ’ (π‘Žπ‘

2 ln(1+𝑂(𝛾)))π‘š

2

≀ π‘š

1𝑐

1

1βˆ’π‘ž π‘ž

𝑁 ln

π‘ž 1βˆ’π‘ž

! π‘Žπ‘

2 βˆ’π‘Žπ‘

2𝛾 π‘š

2, where𝑐𝑖 > 1.

In order for this to be non-negative, N needs to satisfy the following constraint:

\[
N \ge \frac{\ln\!\left(\dfrac{m_2\,\gamma}{c_1 m_1 \ln(q/(1-q))}\right)}{\ln\dfrac{1-q}{q}} \ge r_2\,\frac{\ln\gamma}{\ln\varepsilon},
\]

where r_2 goes to 1 as γ and ε go to 0. At the same time, notice that the left-hand side is bigger than

β‰₯ π‘š

1

1βˆ’π‘ž π‘ž

𝑁 ln

π‘ž 1βˆ’π‘ž

π‘Žπ‘

2 βˆ’π‘Žπ‘

2𝛾 𝑐

3π‘š

2, where 𝑐

3 > 1. In order for this to be non positive 𝑁 should satisfy the following constraint

\[
N \le \frac{\ln\!\left(\dfrac{\gamma\,c_3 m_2}{m_1 \ln(q/(1-q))}\right)}{\ln\dfrac{1-q}{q}}.
\]

¹ As N increases, m_1 gets very close to q and m_2 to 1 - q.

Notice that if ln(c_3 γ) is smaller than or proportional to ln(-ln((1 - q)/q)), then N is always finite and the lower bound is satisfied when q, δ → 1, as

\[
\frac{\ln\!\left(-\ln\dfrac{1-q}{q}\right)}{\ln\dfrac{1-q}{q}} \to 0.
\]

Otherwise, we get that

\[
N \le \frac{\ln\gamma}{r_1\ln\varepsilon},
\]

for some constant r_1. Moreover, if N satisfies these constraints, then N* satisfies

\[
\frac{\ln\gamma}{r_1\ln\varepsilon} \ge N^{*} \ge r_2\,\frac{\ln\gamma}{\ln\varepsilon}.
\]

This concludes the proof.
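The following rough numerical sketch (my own; the grid of N values and the choices of ε and γ are arbitrary) illustrates the conclusion: when q = 1 - ε and δ = 1 - γ are both close to 1, the integer maximizer N* of u(N) is of the same order as ln γ / ln ε:

```python
import math

def u_of_N(N, q, d):
    """Expected utility of the symmetric stopping rule with boundary N (closed form above)."""
    s = math.sqrt(1 - 4 * q * d ** 2 + 4 * q ** 2 * d ** 2)
    a1, a2 = (1 - s) / (2 * d * q), (1 + s) / (2 * d * q)
    D = q ** N + (1 - q) ** N
    num = (q ** N / D - q) - (q - (1 - q) ** N / D) * ((1 - q) / q) ** N
    return num / (a1 ** N + a2 ** N) + q

eps, gamma = 1e-2, 1e-6
q, d = 1 - eps, 1 - gamma
N_star = max(range(1, 200), key=lambda N: u_of_N(N, q, d))
print(N_star, math.log(gamma) / math.log(eps))  # same order of magnitude
```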

Proof of Proposition 17. Recall that

\[
u(N) = \frac{-\left(-\dfrac{(1-q)^N}{(1-q)^N + q^N} + q\right)\left(\dfrac{1-q}{q}\right)^{N} + \dfrac{q^N}{q^N + (1-q)^N} - q}{a_1^{N} + a_2^{N}} + q
\]

and

\[
a_1 = \frac{1}{(1-\gamma)q} - 1 - O(\gamma), \qquad a_2 = 1 + O(\gamma).
\]

After plugging the expressions for a_1 and a_2 into u(N), we get

\begin{align*}
u(N) - q
&= \frac{-\left(-\dfrac{(1-q)^N}{(1-q)^N + q^N} + q\right)\left(\dfrac{1-q}{q}\right)^{N} + \dfrac{q^N}{q^N + (1-q)^N} - q}
{\left(\dfrac{1-q}{q} - O(\gamma)\right)^{N} + \bigl(1 + O(\gamma)\bigr)^{N}}\\[1ex]
&= \frac{-\left(-\dfrac{(1-q)^N}{(1-q)^N + q^N} + q\right)\left(\dfrac{1-q}{q}\right)^{N} + \dfrac{q^N}{q^N + (1-q)^N} - q}
{\left(\dfrac{1-q}{q}\right)^{N} - o(\gamma) + 1 + O(\gamma N)}\\[1ex]
&= \frac{-\left(-\dfrac{(1-q)^N}{(1-q)^N + q^N} + q\right)\left(\dfrac{1-q}{q}\right)^{N} + \dfrac{q^N}{q^N + (1-q)^N} - q}
{\left(\dfrac{1-q}{q}\right)^{N} + 1} - O(\gamma N).
\end{align*}

From the proof of Proposition 16 we know that when q is fixed, N behaves as O(ln γ). Now let us transform the first term into a clearer form:

\[
u(N) - q = \left(\frac{q^N}{q^N + (1-q)^N} - q\right)
- \frac{q^N - (1-q)^N}{q^N + (1-q)^N}\cdot\frac{\left(\frac{1-q}{q}\right)^{N}}{\left(\frac{1-q}{q}\right)^{N} + 1}
- O(\gamma N).
\]

As N → ∞ and q > 1/2, ((1 - q)/q)^N → 0. Suppose ((1 - q)/q)^N = ν; then

\[
\frac{q^N}{q^N + (1-q)^N} \ge 1 - \nu
\]

and

\[
\frac{q^N - (1-q)^N}{q^N + (1-q)^N}\cdot\frac{\left(\frac{1-q}{q}\right)^{N}}{\left(\frac{1-q}{q}\right)^{N} + 1} \le \frac{\nu}{\nu + 1} \le \nu.
\]

From the proof of Proposition 16, ν = O(γ). Hence,

\[
u(N) - q \ge 1 - q - 2\nu - O(\gamma N) = 1 - q - O(\gamma\ln\gamma).
\]

This means that, as N → ∞, u(N) approaches its maximal value at rate C((1 - q)/q)^N · N → 0, which is faster than (((1 - q)/q)^N)^k for any k < 1. Also, as the optimal N* increases, the absorbing beliefs move further away from 1/2.
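A final illustrative sketch (mine; q and the values of γ are arbitrary) of this patient-planner limit: with the precision q fixed and δ = 1 - γ → 1, the optimal expected utility max_N u(N) approaches 1, and the gap vanishes roughly on the order of γ ln(1/γ):

```python
import math

def u_of_N(N, q, d):
    """Expected utility of the symmetric stopping rule with boundary N (closed form above)."""
    s = math.sqrt(1 - 4 * q * d ** 2 + 4 * q ** 2 * d ** 2)
    a1, a2 = (1 - s) / (2 * d * q), (1 + s) / (2 * d * q)
    D = q ** N + (1 - q) ** N
    num = (q ** N / D - q) - (q - (1 - q) ** N / D) * ((1 - q) / q) ** N
    return num / (a1 ** N + a2 ** N) + q

q = 0.7
for gamma in (1e-2, 1e-4, 1e-6):
    best = max(u_of_N(N, q, 1 - gamma) for N in range(1, 400))
    print(gamma, 1 - best, gamma * math.log(1 / gamma))  # gap and gamma*ln(1/gamma) shrink together
```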
