4.2. Independence

The concept of independence of two or more events (or trials) occupies, in a certain sense, a central place in probability theory. From the mathematical point of view, this concept defines the specific character that distinguishes probability theory within the general theory of measure spaces. We should also note that one of the founders of probability theory, the outstanding scientist A. N. Kolmogorov, already in the thirties of the last century paid special attention to the fundamental nature of the concept of independence in probability theory (see [12]: A. N. Kolmogorov, Basic Concepts of Probability Theory. – Moscow, «Nauka», 1974).

Below we first dwell on the independence of events, then extend this notion to partitions and algebras of sets, and in conclusion we consider the independence of trials and σ-algebras.

4.2.1. Independence of events

Let a probability space (Ω, ℱ, P) and events A, B ∈ ℱ be given. If the conditional probability of the event A under the condition that an event B with P(B) > 0 occurred is equal to the (unconditional) probability of the event A, i.e.

P(A/B) = P(A), (3′)

then it is natural to say that the event A does not depend on the event B. If this is so (i.e., (3′) takes place), then from the formula of conditional probability (3) we obtain the formula

P(AB) = P(A)P(B). (4)


Now let P(A) > 0 and condition (3′) be satisfied. Then, in view of the fact that (4) holds, we obtain

P(B/A) = P(BA)/P(A) = P(A)P(B)/P(A) = P(B), (5)

i.e. the event B does not depend on the event A.

From what has been said, we come to the following conclusion: the independence of two events is a symmetric concept – if the event A does not depend on the event B, then the event B does not depend on the event A. But the above formulas (3′), (5) have one drawback – they require strict positivity of the probabilities standing in the denominators: P(B) > 0 or P(A) > 0. But, as we noted earlier (§1, point 1.2), an event can have probability 0 (zero) and still happen. Therefore the requirements P(A) > 0 or P(B) > 0 restrict the domains of applicability of formulas (3′) and (5) and of the corresponding concept of independence. For this reason relation (4), which is a consequence of definitions (3′) and (5) but does not require the conditions P(A) > 0 or P(B) > 0, is taken as the definition of independence.

Definition 2. If the probability of the product of events A and B is equal to the product of the probabilities of events A and B, i.e. if relation (4) is satisfied, then the events A and B are called independent events.

We obtain from the definition that if P(A) = 0, then for any B, P(AB) = 0 = P(A)P(B) (because AB ⊆ A, therefore 0 ≤ P(AB) ≤ P(A) = 0, i.e. P(AB) = 0), i.e. (4) takes place. In other words, if P(A) = 0, then A and any event B are independent.

We now formulate several assertions related to independence in the form of a theorem.

Theorem 2. a) If P(B) > 0, then the independence of events A and B, i.e. relation (4), is equivalent to the condition P(A/B) = P(A).

b) If A and B are independent events, then the pairs of events A and B̄, Ā and B, Ā and B̄ are also independent;

c) If P(A) = 0 or P(A) = 1, then A and any event B are independent;

d) If A and B₁ are independent events, A and B₂ are independent events, and B₁B₂ = ∅, then A and B₁ + B₂ are independent events.

Proof. a) In this case (3′) implies (4) (we saw this above). If, however, (4) holds, then

P(A/B) = P(AB)/P(B) = P(A)P(B)/P(B) = P(A),

i.e. formula (3′) is correct.

b) It suffices to show that condition (4) implies the relations

P(ĀB) = P(Ā)P(B),  P(ĀB̄) = P(Ā)P(B̄)

(the pair A and B̄ is treated symmetrically). Indeed, since B = AB + ĀB, by the properties of probabilities and definition (4) we can write

P(ĀB) = P(B) − P(AB) = P(B) − P(A)P(B) = (1 − P(A))P(B) = P(Ā)P(B).

Further, since ĀB̄ is the complement of A ∪ B,

P(ĀB̄) = 1 − P(A ∪ B) = 1 − P(A) − P(B) + P(AB) = 1 − P(A) − P(B) + P(A)P(B) = (1 − P(A))(1 − P(B)) = P(Ā)P(B̄).

c) The case P(A) = 0 was proved above (immediately after the definition). The validity of the assertion in the case P(A) = 1 is a consequence of the assertion for the case P(A) = 0 and of assertion b): if P(A) = 1, then P(Ā) = 0, and vice versa.

As a consequence of this statement (which is proved very simply), we get the following: an impossible event and any other event are independent; a certain event and any other event are independent.

d) This property follows from the chain of equalities:

P(A(B₁ + B₂)) = P(AB₁ + AB₂) = P(AB₁) + P(AB₂) = P(A)P(B₁) + P(A)P(B₂) = P(A)(P(B₁) + P(B₂)) = P(A)P(B₁ + B₂). ∎

Remark 2. If the condition B₁B₂ = ∅ is not satisfied, then assertion d) of Theorem 2 may turn out to be false (give an example).

The concept of independence of two events introduced by Definition 2 is called statistical or stochastic independence (these terms are synonyms).

Usually the independence of A and B is not established by means of equality (4), but is postulated on the basis of some considerations. Using equality (4), we calculate the probability P(AB), knowing the probabilities P(A) and P(B) of two independent events.

When establishing the independence of events A and B, the following principle is often used: «Events A and B whose real pre-images Ã and B̃ are causally independent are independent (stochastically independent)».

The real meaning of this principle can be related to the property of frequency stability. Suppose that in n observations the events A, B and AB occurred n(A), n(B) and n(AB) times, respectively.

The stability of frequencies means that

n(A)/n ≈ P(A),  n(B)/n ≈ P(B),  n(AB)/n ≈ P(AB),

hence

P(A/B) = P(AB)/P(B) ≈ n(AB)/n(B).

Then from the independence of the events A and B, i.e. from P(A/B) = P(A), it follows that

n(AB)/n(B) ≈ n(A)/n,

or, equivalently,

n(AB)/n ≈ (n(A)/n)·(n(B)/n). (6)

The property (6) for causally independent real events Ã and B̃ is established by the centuries-old practice of humanity. This allows us to formulate the above principle.

It should be noted that this principle is by no means a theorem. Since it is not formulated in terms of a mathematical model, it cannot be a theorem. Note also that the stochastic independence of events A and B does not imply the causal independence of their real prototypes Ã and B̃. If the probability model is slightly modified, then independence can disappear.

Examples

5. Two dice are tossed. Consider the events: A = {«1» occurs on the 1st die}, B = {«2» occurs on the 2nd die}, C = {the sum of the dropped points is less than or equal to 3}.

Then:

Ω = {(i, j): i, j = 1, …, 6},

A = {(1, j): j = 1, …, 6},  B = {(i, 2): i = 1, …, 6},

C = {(i, j): i + j ≤ 3} = {(1,1), (1,2), (2,1)},

therefore

P(A) = P(B) = 1/6,  P(AB) = 1/36 = (1/6)·(1/6) = P(A)P(B),  P(C) = 1/12,

P(AC) = 1/18 ≠ (1/6)·(1/12) = P(A)P(C).

Thus, A and B are stochastically independent events, but A and C are not independent. Also B and C are dependent events (Prove!)
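These computations are easy to verify by direct enumeration. The following minimal Python sketch (our own illustration; the function and event names are ours, not from the text) lists all 36 equally likely outcomes and tests the three product conditions, which also settles the «Prove!» above:

from fractions import Fraction

# All 36 equally likely outcomes of tossing two dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    # Classical probability: |favourable outcomes| / |omega|.
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 1          # «1» on the 1st die
B = lambda w: w[1] == 2          # «2» on the 2nd die
C = lambda w: w[0] + w[1] <= 3   # sum of points <= 3

print(prob(lambda w: A(w) and B(w)) == prob(A) * prob(B))  # True:  A, B independent
print(prob(lambda w: A(w) and C(w)) == prob(A) * prob(C))  # False: A, C dependent
print(prob(lambda w: B(w) and C(w)) == prob(B) * prob(C))  # False: B, C dependent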

4. Consider families with three children and assume that all eight possible outcomes «bbb», «bbg», «bgb», …, «ggg» are equally probable («b» is a boy, «g» is a girl; «bgb» means that the oldest and youngest children are boys and the middle child is a girl, etc.). We introduce the events: A = {there are both boys and girls in the family}, B = {there is no more than one daughter in the family}. Then

P(A) = 6/8 = 3/4,  P(B) = 4/8 = 1/2,  P(AB) = 3/8,

therefore

P(AB) = P(A)P(B).

The last equality means that A and B are independent events. But it turns out that for a family with two or four children these events will already be dependent (Check!)
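The suggested check is immediate with the same enumeration idea. Here is a small sketch (our own code, assuming that for each family size all 2ⁿ birth orders are equally likely):

from fractions import Fraction
from itertools import product

def independent(n):
    # All 2**n equally likely birth orders ('b' = boy, 'g' = girl).
    omega = list(product("bg", repeat=n))
    N = Fraction(len(omega))
    A = [w for w in omega if "b" in w and "g" in w]   # both sexes present
    B = [w for w in omega if w.count("g") <= 1]       # at most one girl
    AB = [w for w in A if w.count("g") <= 1]
    return Fraction(len(AB)) / N == (len(A) / N) * (len(B) / N)

for n in (2, 3, 4):
    print(n, independent(n))   # 2 False, 3 True, 4 False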

Assume now that pairwise independent events A, B, C are given:

P(AB) = P(A)P(B),  P(BC) = P(B)P(C),  P(AC) = P(A)P(C). (7)

We pose the following question: does the mutual independence of the events A, B and C follow from their pairwise independence (i.e. from (7)), i.e., is the formula

P(ABC) = P(AB)P(C) = P(A)P(B)P(C) (8)

correct?

The answer is negative. Let us give an example.

6. An example of Bernstein. Suppose we have a tetrahedron made of a homogeneous material, three faces of which are painted in three different colors – red (event A), blue (event B) and green (event C) – while the fourth face is colored in all three colors (the event ABC). The experiment consists in a single throw of this tetrahedron onto the plane, and we will assume that if the tetrahedron fell on a face containing some color, then the event signified by this color occurred.

Then

P(A) = 2/4 = 1/2,

because the red color is present on two faces of the tetrahedron. Similarly, P(B) = P(C) = 1/2. Since any two different colors occur together only on one face,

P(AB) = P(BC) = P(AC) = 1/4 = (1/2)·(1/2),

and these last relations mean the pairwise independence of the events A, B, C (conditions (7) are satisfied).

Further, since only one face of the tetrahedron is colored in all three colors, P(ABC) = 1/4. So

P(ABC) = 1/4 ≠ P(A)P(B)P(C) = (1/2)·(1/2)·(1/2) = 1/8,

i.e. the condition (8) is not satisfied.


6. Let Ω = {ω₀, ω₁, ω₂, ω₃} and P(ωᵢ) = 1/4, i = 0, 1, 2, 3. We introduce the events Aᵢ = {ω₀, ωᵢ}, i = 1, 2, 3. Then

P(AᵢAⱼ) = 1/4 = (1/2)·(1/2) = P(Aᵢ)P(Aⱼ), i, j = 1, 2, 3; i ≠ j,

P(A₁A₂A₃) = 1/4 ≠ P(A₁)P(A₂)P(A₃) = 1/8.

Therefore, the events A₁, A₂, A₃ are pairwise independent, but the events AᵢAⱼ and Aₖ (i, j, k = 1, 2, 3 are different indices) are dependent.

Note that practically examples 5 and 6 refer to the same probabilistic model.
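Both examples can therefore be checked at once on the common four-point model. A minimal sketch (our own illustration; the names are ours):

from fractions import Fraction
from itertools import combinations

# Four equally likely points; A_i = {w0, w_i}, i = 1, 2, 3.
# (In Bernstein's example w0 corresponds to the face with all three colors.)
omega = {0, 1, 2, 3}
events = {i: {0, i} for i in (1, 2, 3)}
P = lambda E: Fraction(len(E), len(omega))

# Pairwise independence: P(A_i A_j) = 1/4 = P(A_i) P(A_j).
for i, j in combinations((1, 2, 3), 2):
    assert P(events[i] & events[j]) == P(events[i]) * P(events[j])

# But not mutual independence: P(A_1 A_2 A_3) = 1/4 != 1/8.
print(P(events[1] & events[2] & events[3]),
      P(events[1]) * P(events[2]) * P(events[3]))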

If events A, B, C are pairwise independent and, moreover, the probability of their product is equal to the product of their probabilities (P(ABC) = P(A)P(B)P(C), see formula (8)), then they are called mutually independent (or simply independent) events.

By analogy, the following definition generalizes the definition of independence to the general case.

Definition 3. Let A₁, A₂, …, Aₙ be events defined on the same probability space (Ω, ℱ, P).

If for any indices 1 ≤ i₁ < i₂ < … < iᵣ ≤ n, r = 2, 3, …, n, the equalities

P(Aᵢ₁Aᵢ₂…Aᵢᵣ) = P(Aᵢ₁)P(Aᵢ₂)…P(Aᵢᵣ) (9)

hold, then the events A₁, A₂, …, Aₙ are called mutually independent (or simply independent) events.

In formula (9), which gives the condition of independence of n events, there are 2ⁿ − n − 1 conditions.

Indeed:

if r = 2, then Cₙ² conditions correspond to this case in (9):

P(AᵢAⱼ) = P(Aᵢ)P(Aⱼ), 1 ≤ i < j ≤ n; (9₂)

if r = 3, then Cₙ³ conditions correspond to this case in (9):

P(AᵢAⱼAₖ) = P(Aᵢ)P(Aⱼ)P(Aₖ), 1 ≤ i < j < k ≤ n; (9₃)

etc.; if r = n, then Cₙⁿ = 1 condition corresponds to this case in (9):

P(A₁A₂…Aₙ) = P(A₁)P(A₂)…P(Aₙ). (9ₙ)

Thus, the total number of conditions in (9) is

Cₙ² + Cₙ³ + … + Cₙⁿ = 2ⁿ − Cₙ⁰ − Cₙ¹ = 2ⁿ − n − 1.
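This count is easy to confirm numerically (a throwaway snippet of our own):

from math import comb

# Number of independence conditions in (9) for n events:
# C(n,2) + C(n,3) + ... + C(n,n) = 2**n - n - 1.
for n in range(2, 8):
    assert sum(comb(n, r) for r in range(2, n + 1)) == 2**n - n - 1
print("2**n - n - 1 verified for n = 2,...,7")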

The relations (9₂), by definition, are the conditions for the pairwise independence of the n events A₁, A₂, …, Aₙ.

It follows from the definition that if the events A₁, A₂, …, Aₙ are independent, then the events of any subset Aᵢ₁, Aᵢ₂, …, Aᵢₖ (2 ≤ k ≤ n − 1) are also independent.

Definition 3 also implies the following property of conditional probabilities.

Theorem 3. If the events A₁, A₂, …, Aₙ are independent; the indices i₁, i₂, …, iᵣ and j₁, j₂, …, jₖ, taken from the set 1, 2, …, n, are all different; and P(Aᵢ₁Aᵢ₂…Aᵢᵣ) > 0, then

P(Aⱼ₁Aⱼ₂…Aⱼₖ / Aᵢ₁Aᵢ₂…Aᵢᵣ) = P(Aⱼ₁Aⱼ₂…Aⱼₖ). (10)

Proof. Let the events A₁, A₂, …, Aₙ be independent. Then the events Aᵢ₁, Aᵢ₂, …, Aᵢᵣ and the events Aⱼ₁, Aⱼ₂, …, Aⱼₖ are independent, therefore

P(Aⱼ₁Aⱼ₂…Aⱼₖ) = P(Aⱼ₁)P(Aⱼ₂)…P(Aⱼₖ),

P(Aᵢ₁Aᵢ₂…Aᵢᵣ) = P(Aᵢ₁)P(Aᵢ₂)…P(Aᵢᵣ)

and

P(Aᵢ₁Aᵢ₂…AᵢᵣAⱼ₁Aⱼ₂…Aⱼₖ) = P(Aᵢ₁)…P(Aᵢᵣ)·P(Aⱼ₁)…P(Aⱼₖ).

Now it remains to expand the conditional probability on the left-hand side of (10) according to the conditional probability formula. ∎

The following definition generalizes the concept of independence of events to a sequence of events.

Definition 3′. Let A₁, A₂, … be a sequence of events of some probability space (Ω, ℱ, P).

If for any set of indices 1 ≤ i₁ < i₂ < … < iᵣ ≤ n; r = 2, 3, …, n; n = 2, 3, … the conditions (9) are satisfied, then such a sequence of events is called a sequence of independent events.

It is clear that this definition is equivalent to the fact that for any n = 2, 3, … any n events taken from the sequence A₁, A₂, … are independent.

Example 7. Let us show that if A₁, A₂, … is a sequence of independent events, then

P(⋂_{k=1}^∞ Aₖ) = ∏_{k=1}^∞ P(Aₖ).


Solution. For n = 1, 2, … introduce the events Bₙ = ⋂_{k=1}^n Aₖ. Then

Bₙ₊₁ ⊆ Bₙ,  Bₙ ↓ B = ⋂_{k=1}^∞ Bₖ = ⋂_{k=1}^∞ Aₖ;

consequently, by the continuity axiom (axiom P3″),

P(⋂_{k=1}^∞ Aₖ) = P(B) = lim_{n→∞} P(Bₙ) = lim_{n→∞} ∏_{k=1}^n P(Aₖ) = ∏_{k=1}^∞ P(Aₖ),

where at each finite step we used the independence of A₁, …, Aₙ.

4.2.2. Independence of partitions and algebras.

Independent trials. Independence of σ-algebras

Let Ω = {ω} be a sample space. In §2, p. 2.1 we introduced the concept of a partition.

Let us recall this definition and some other results connected with it.

If Dᵢ ⊆ Ω, DᵢDⱼ = ∅ (i ≠ j) and ∑ᵢ Dᵢ = Ω, then the system 𝒟 = {D₁, D₂, …, Dₙ, …} is called a partition of Ω, and the sets Dᵢ are called the atoms of this partition.

The system of sets

α(𝒟) = {∑_{j=1}^n Dᵢⱼ : Dᵢⱼ ∈ 𝒟, iⱼ ≠ iₗ (j ≠ l), n < ∞}

is called the algebra generated by the partition 𝒟.

Also in §2, p. 2.1 it was proved that «any finite algebra of sets is generated by some finite partition».

Definition 4. Let the partitions 𝒟₁, 𝒟₂, …, 𝒟ₙ be given.

If for any indices 1 ≤ j₁ < j₂ < … < jₛ ≤ n, s = 2, 3, …, n, any atoms Dⱼ₁ ∈ 𝒟ⱼ₁, Dⱼ₂ ∈ 𝒟ⱼ₂, …, Dⱼₛ ∈ 𝒟ⱼₛ are independent, i.e. the following conditions take place:

P(Dⱼ₁Dⱼ₂…Dⱼₛ) = P(Dⱼ₁)P(Dⱼ₂)…P(Dⱼₛ),

then the partitions 𝒟₁, 𝒟₂, …, 𝒟ₙ are called independent partitions.


Definition 5. Let algebras 𝒜₁, 𝒜₂, …, 𝒜ₙ be given.

If for any indices 1 ≤ i₁ < i₂ < … < iᵣ ≤ n, r = 2, …, n, and any events Aᵢⱼ ∈ 𝒜ᵢⱼ, j = 1, …, r, the conditions

P(Aᵢ₁Aᵢ₂…Aᵢᵣ) = P(Aᵢ₁)P(Aᵢ₂)…P(Aᵢᵣ)

are satisfied, then the algebras 𝒜₁, 𝒜₂, …, 𝒜ₙ are called independent algebras.

The concepts of a sequence of independent partitions and of a sequence of independent algebras are defined by analogy with the concept of a sequence of independent events (see Definition 3′).

From Definitions 4, 5 we obtain that any k (2 ≤ k ≤ n) partitions (algebras) out of n (n ≥ 2) independent partitions (algebras) are also independent partitions (algebras). Similarly, any subsequence of a sequence of independent partitions (algebras) is also a sequence of independent partitions (algebras).

Theorem 4. In order for the partitions 𝒟₁, 𝒟₂, …, 𝒟ₙ to be independent, it is necessary and sufficient that the algebras 𝒜₁ = α(𝒟₁), 𝒜₂ = α(𝒟₂), …, 𝒜ₙ = α(𝒟ₙ) generated by them be independent algebras.

Proof. Since 𝒟ᵢ ⊆ α(𝒟ᵢ) = 𝒜ᵢ, the independence of the algebras 𝒜₁, 𝒜₂, …, 𝒜ₙ implies the independence of the partitions 𝒟₁, 𝒟₂, …, 𝒟ₙ. On the other hand, by the above-mentioned Theorem 1 of §2, each event A ∈ 𝒜ᵢ can be written as a sum of some atoms of the partition 𝒟ᵢ:

A = Dᵢ₁ + … + Dᵢₛ.

From the independence of the partitions it follows that all the events represented as sums of atoms of the corresponding partitions are also independent. For example, if 𝒟₁ = {D₁₁, D₁₂, D₁₃} and 𝒟₂ = {D₂₁, D₂₂, D₂₃} are independent, then D₁₁ + D₁₂ and D₂₂ + D₂₃ are independent events, because

P((D₁₁ + D₁₂)(D₂₂ + D₂₃)) = P(D₁₁D₂₂ + D₁₁D₂₃ + D₁₂D₂₂ + D₁₂D₂₃) =
= P(D₁₁)P(D₂₂) + P(D₁₁)P(D₂₃) + P(D₁₂)P(D₂₂) + P(D₁₂)P(D₂₃) =
= (P(D₁₁) + P(D₁₂))(P(D₂₂) + P(D₂₃)) = P(D₁₁ + D₁₂)P(D₂₂ + D₂₃).

But this, by Definition 5, gives the independence of the algebras 𝒜₁, 𝒜₂, …, 𝒜ₙ. ∎

Any event A ≠ ∅ generates a partition 𝒟_A = {A, Ā}, and this partition in its turn generates the algebra α(𝒟_A) = 𝒜_A = {∅, A, Ā, Ω}. Since the independence of the events A, B implies the independence of A and B̄, Ā and B, Ā and B̄ (§4, p. 4.2, Theorem 2), we obtain the validity of the following theorem.

Theorem 5. The independence of the events A₁, A₂, …, Aₙ is equivalent to the independence of the algebras 𝒜_{A₁}, 𝒜_{A₂}, …, 𝒜_{Aₙ} generated by them.

Now we will briefly consider the concepts of independent trials and of a sequence of independent trials.

Recall that by a trial we understand an experiment whose outcomes are certain (random) events. In our axiomatics, a trial is a probability space.

For simplicity, first consider the case of two trials G₁ and G₂.

Let (Ω₁, ℱ₁, P₁) and (Ω₂, ℱ₂, P₂) be the probability spaces corresponding to the trials G₁ and G₂.

If these probability spaces are models of causally independent trials, then the σ-algebras ℱ₁ and ℱ₂ must be independent. It is natural to define stochastic independence as follows:

if any event of the probability space corresponding to the trial G₁ does not depend on any event of the probability space corresponding to the trial G₂, then the trials G₁ and G₂ are called independent trials.

What has just been said requires clarification, because to speak of the (stochastic) independence of events it is necessary that these events be defined on the same probability space. In other words, we must represent the σ-algebras ℱ₁ and ℱ₂ as σ-subalgebras of some σ-algebra of a common probability space (Ω, ℱ, P).

Such a probability space can always be constructed. To do this, we construct the probability space (Ω, ℱ, P) needed for the «compound» experiment G as a direct product of the probability spaces corresponding to the experiments G₁ and G₂.

More precisely, in the constructed new probability space (Ω, ℱ, P) we define the sample space Ω as the direct product of Ω₁ and Ω₂: Ω = Ω₁ × Ω₂; we construct the σ-algebra ℱ as the direct product of the σ-algebras ℱ₁ and ℱ₂: ℱ = ℱ₁ ⊗ ℱ₂ = {A₁ × A₂ : A₁ ∈ ℱ₁, A₂ ∈ ℱ₂}; and we define the function P as a probability function on (Ω, ℱ).

Definition 6. If for any event A = A₁ × A₂, A₁ ∈ ℱ₁, A₂ ∈ ℱ₂, the following condition is satisfied:

P(A₁ × A₂) = P₁(A₁)P₂(A₂) = P(A₁ × Ω₂)·P(Ω₁ × A₂), (11)

then the trials G₁ and G₂ are called independent trials.

We will show that this definition is indeed a definition of the independence of the trials G₁ and G₂, i.e. that it gives the independence of any event of the trial G₁ from any event of the trial G₂.

If we introduce the events A′₁ = A₁ × Ω₂ and A′₂ = Ω₁ × A₂, then for the event A₁ (A₂) to occur it is necessary and sufficient that the event A′₁ (A′₂) occurs, since Ω₁ (Ω₂) is a certain event of the trial G₁ (G₂). This establishes a one-to-one correspondence Aᵢ ↔ A′ᵢ between the events Aᵢ and their images A′ᵢ in the space (Ω, ℱ); i.e., as events of the probability space (Ω, ℱ, P), the events A′₁ and A′₂ (and therefore the event A₁ of the trial G₁ and the event A₂ of the trial G₂) are independent by (11). And this, in turn, means the independence of the σ-algebras corresponding to the trials G₁ and G₂.

The concept of independence of the trials G₁, G₂, …, Gₙ is defined similarly: we assign to each trial Gᵢ the probability space (Ωᵢ, ℱᵢ, Pᵢ) and form a «composite» probability space (Ω, ℱ, P) as follows:

Ω = Ω₁ × Ω₂ × … × Ωₙ,  ℱ = ℱ₁ ⊗ ℱ₂ ⊗ … ⊗ ℱₙ = {A₁ × A₂ × … × Aₙ : A₁ ∈ ℱ₁, A₂ ∈ ℱ₂, …, Aₙ ∈ ℱₙ},

and the function P is defined as a certain probability function on (Ω, ℱ).

If for any events A₁ × A₂ × … × Aₙ ∈ ℱ the following condition takes place:

P(A₁ × A₂ × … × Aₙ) = P₁(A₁)P₂(A₂)…Pₙ(Aₙ), (11′)

then we will call the trials G₁, G₂, …, Gₙ independent trials.

Furthermore, if for any indices 1 ≤ i₁ < i₂ < … < iₖ ≤ n, k = 2, 3, …, n, n = 2, 3, …, and any events Aᵢⱼ ∈ ℱᵢⱼ

P(Aᵢ₁ × Aᵢ₂ × … × Aᵢₖ) = Pᵢ₁(Aᵢ₁)Pᵢ₂(Aᵢ₂)…Pᵢₖ(Aᵢₖ),

then the sequence of trials G₁, G₂, … will be called a sequence of independent trials.
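To make the construction concrete, here is a small sketch (our own illustration, finite sample spaces only) of the «compound» space: P is built by the product rule, and then the independence (11) of the cylinder events A₁ × Ω₂ and Ω₁ × A₂ holds automatically:

from fractions import Fraction
from itertools import product

def product_space(spaces):
    # spaces: a list of dicts {outcome: probability}, one per trial G_i.
    # Returns the compound space with P(w_1,...,w_n) = P_1(w_1)...P_n(w_n).
    compound = {}
    for combo in product(*spaces):
        p = Fraction(1)
        for space, w in zip(spaces, combo):
            p *= space[w]
        compound[combo] = p
    return compound

# Two biased coins as the trials G1 and G2.
G1 = {"H": Fraction(1, 3), "T": Fraction(2, 3)}
G2 = {"H": Fraction(1, 4), "T": Fraction(3, 4)}
G = product_space([G1, G2])

P = lambda E: sum(pr for w, pr in G.items() if E(w))
A1 = lambda w: w[0] == "H"   # the event A1 x Omega2
A2 = lambda w: w[1] == "H"   # the event Omega1 x A2
print(P(lambda w: A1(w) and A2(w)) == P(A1) * P(A2))   # True, as in (11)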

Examples 8. Bernoulli scheme. In this model

Ω = {ω = (ω₁, ω₂, …, ωₙ): ωᵢ = 0, 1},  𝒜 = {A: A ⊆ Ω},

P(ω) = p^{ω₁+…+ωₙ} q^{n−(ω₁+…+ωₙ)}, (12)

where ω₁ + … + ωₙ is the number of successes.

Let A ∈ 𝒜. If this event is determined only by the value ωₖ, then we will say that this event depends on the trial at the k-th moment of time. Examples of such events are the events

Aₖ = {ω: ωₖ = 1},  Āₖ = {ω: ωₖ = 0}.

Now we consider the partition 𝒟ₖ and the algebra 𝒜ₖ generated by the event Aₖ (k = 1, 2, …, n):

𝒟ₖ = {Aₖ, Āₖ},  𝒜ₖ = α(𝒟ₖ) = {∅, Ω, Aₖ, Āₖ}.

Then it is not difficult to show that

P(Aₖ) = p,  P(AₖAₗ) = P(Aₖ)P(Aₗ) = p² (k ≠ l),

etc.; in general, for k = 2, 3, …, n and iⱼ ≠ iₗ (j ≠ l),

P(Aᵢ₁Aᵢ₂…Aᵢₖ) = P(Aᵢ₁)P(Aᵢ₂)…P(Aᵢₖ).

The last relation shows the independence of the events A₁, A₂, …, Aₙ, and this in its turn (in view of the assertions proved above) means the independence of the partitions 𝒟₁, 𝒟₂, …, 𝒟ₙ and of the algebras 𝒜₁, 𝒜₂, …, 𝒜ₙ generated by them, and thereby the independence of the corresponding trials.

J. Bernoulli was the first scientist to study this model; he proved for it the so-called Law of Large Numbers (this law will be considered in Chapter V, §1).

This model can be called a «model of independent trials, in which each trial has only two outcomes, with probability of success p». In the literature this model is usually called a sequence of independent Bernoulli trials or simply the Bernoulli scheme (Ch. I, §2, p. 2.1).
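A direct check of formula (12) and of the independence of the events Aₖ for a small n; the code below is our own sketch, not part of the original text:

from fractions import Fraction
from itertools import product, combinations

n, p = 4, Fraction(1, 3)
q = 1 - p

# Bernoulli scheme (12): P(w) = p**k * q**(n - k), k = number of successes.
omega = {w: p**sum(w) * q**(n - sum(w)) for w in product((0, 1), repeat=n)}
assert sum(omega.values()) == 1

P = lambda E: sum(pr for w, pr in omega.items() if E(w))
A = [lambda w, k=k: w[k] == 1 for k in range(n)]   # A_k: success at moment k

# Every subfamily of the A_k multiplies, so A_1, ..., A_n are independent.
for r in range(2, n + 1):
    for idx in combinations(range(n), r):
        assert P(lambda w: all(A[k](w) for k in idx)) == p**r
print("each P(A_k) =", P(A[0]))   # p = 1/3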

9. Polynomial scheme. If each trial has r outcomes with probabilities of occurrence p₁, …, pᵣ (p₁ + p₂ + … + pᵣ = 1, r ≥ 3), then the probability that as a result of n independent trials (in a predetermined order) the 1st outcome occurs n₁ times, the 2nd – n₂ times, …, the r-th – nᵣ times, where n₁ + n₂ + … + nᵣ = n, nⱼ ≥ 0, is equal to

p₁^{n₁} p₂^{n₂} … pᵣ^{nᵣ}

(because of the independence of the trials the probability of the product of events is equal to the product of their probabilities; for example, if the 1st outcome occurs n₁ times, then the factor p₁ occurs n₁ times in this product, etc.).

In its turn, the number of ways the outcomes can occur in an arbitrary order consistent with the given numbers, i.e. the number of non-negative solutions of the equation

n₁ + n₂ + … + nᵣ = n, nⱼ ≥ 0,

is equal to (Ch. I, §1, p. 1.1, Theorem 3)

n! / (n₁! n₂! … nᵣ!).

Therefore, the probability that, as a result of n independent trials, the outcome No. i occurs (in an arbitrary order) nᵢ times, where

i = 1, 2, …, r;  n₁ + n₂ + … + nᵣ = n,

is equal to

Pₙ(n₁, n₂, …, nᵣ) = (n! / (n₁! n₂! … nᵣ!)) · p₁^{n₁} p₂^{n₂} … pᵣ^{nᵣ}. (13)

The described sequence of independent trials (model) is called a polynomial scheme, and distribution (13) – a polynomial (multinomial) distribution (see Ch. I, §2, p. 2.1).
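Formula (13) translates directly into code. A small sketch of our own, which also confirms that the probabilities (13) sum to 1 over all admissible (n₁, …, nᵣ):

from fractions import Fraction
from math import factorial
from itertools import product

def multinomial_pmf(counts, probs):
    # Formula (13): n!/(n_1!...n_r!) * p_1**n_1 * ... * p_r**n_r.
    n = sum(counts)
    coef = factorial(n)
    for k in counts:
        coef //= factorial(k)
    pr = Fraction(1)
    for k, p in zip(counts, probs):
        pr *= p**k
    return coef * pr

probs = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]   # r = 3 outcomes
print(multinomial_pmf((2, 1, 1), probs))                   # n = 4 trials -> 1/6

n = 4
total = sum(multinomial_pmf(c, probs)
            for c in product(range(n + 1), repeat=3) if sum(c) == n)
assert total == 1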

10. Negative binomial distribution. Let’s find the probability that in the
