PDF Transformation: f_Y(y) = f_X(g⁻¹(y)) · |d g⁻¹(y)/dy|, i.e. the pdf of X evaluated in terms of y, times the absolute Jacobian.

TOPIC 1: ESTIMATION
Methods for Generating Estimators

Method of Moments
Use the sample raw moments (from the data) to estimate the population raw moments. There are k parameters to estimate: θ_1, …, θ_k.

The r-th raw (population) moment is μ'_r = E(X^r), which is a function of the k parameters.

The r-th raw (sample) moment is m'_r = (1/n) Σ_{i=1}^n X_i^r, which is a function of the n variables and hence of the data only.

The sample moments can be used to estimate the population moments by solving the equations obtained by setting μ'_r = m'_r for r = 1, 2, …, k.
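A minimal numpy sketch (the normal model, parameter values, and sample size are illustrative choices, not from the notes): for X ~ N(μ, σ²) the first two raw population moments are μ'_1 = μ and μ'_2 = σ² + μ², so equating them to the sample moments m'_1, m'_2 and solving gives the estimators below.

```python
import numpy as np

def method_of_moments_normal(x):
    """Method-of-moments estimates for N(mu, sigma^2).

    Population raw moments: mu'_1 = mu, mu'_2 = sigma^2 + mu^2.
    Equate them to the sample raw moments m'_1, m'_2 and solve.
    """
    m1 = np.mean(x)            # m'_1 = (1/n) sum X_i
    m2 = np.mean(x ** 2)       # m'_2 = (1/n) sum X_i^2
    mu_hat = m1                # from mu'_1 = m'_1
    sigma2_hat = m2 - m1 ** 2  # from mu'_2 = m'_2
    return mu_hat, sigma2_hat

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)  # illustrative data
mu_hat, sigma2_hat = method_of_moments_normal(x)
```

With k = 2 parameters, exactly the first two moment equations are used, matching the r = 1, …, k rule above.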
Maximum Likelihood Estimator
Suppose X_1, X_2, …, X_n have joint pdf/pf f(x; θ) (not necessarily i.i.d.). The likelihood function is thought of as a function of θ. It is a random function, as it depends on the random vector X:

L(θ; X) = f_X(x; θ) |_{x = X}

Essentially, the likelihood function L is the joint density function written in terms of the rvs themselves.

If the X_i are i.i.d. with common pdf/pf, then

L(θ; X) = Π_{i=1}^n f(x_i; θ) |_{x_i = X_i}

Then maximise L, or the log-likelihood, with respect to θ_1, …, θ_k, either by logic or by calculus (maximum turning point).
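A minimal sketch of the i.i.d. case (the Exponential(λ) model and all numbers are illustrative assumptions): the log-likelihood is ℓ(λ) = n log λ − λ Σ x_i, and setting dℓ/dλ = 0 gives λ̂ = 1/X̄, the calculus route; a grid search over ℓ confirms the same maximiser.

```python
import numpy as np

def exp_loglik(lam, x):
    """Log-likelihood of i.i.d. Exponential(rate=lam) data:
    l(lam) = n*log(lam) - lam*sum(x)."""
    return len(x) * np.log(lam) - lam * np.sum(x)

rng = np.random.default_rng(1)
x = rng.exponential(scale=1 / 2.5, size=50_000)  # true rate 2.5 (illustrative)

# Calculus route: dl/dlam = n/lam - sum(x) = 0  =>  lambda_hat = 1/xbar.
lam_mle = 1.0 / np.mean(x)

# Grid search over the log-likelihood should agree with the calculus answer.
grid = np.linspace(0.5, 5.0, 1001)
lam_grid = grid[np.argmax([exp_loglik(g, x) for g in grid])]
```

The grid search stands in for the "by logic" route when no closed form exists; in practice a numerical optimiser would replace it.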
Properties of Estimators

1. Unbiasedness
An unbiased estimator is one whose expected value equals the value of the parameter (a constant) we are trying to estimate. General result: the expected value of a function of X equals the function of the expected value of X only if the function is LINEAR. bias(T) = E(T) − τ(θ), i.e. the difference between E(T) and the target τ(θ).
2. Variance

Choose the estimator with the smallest variance.

3. Mean Squared Error
Let T be an estimator of τ(θ). The Mean Squared Error (mse) of T is:

mse(T) = E[(T − τ(θ))²] = var(T) + [bias(T)]²

4. Relative Efficiency
Define the relative efficiency of T_1 relative to T_2 in estimating τ(θ) as the ratio:

RE = mse(T_2) / mse(T_1)

If RE < 1, then T_2 is said to be a better estimator than T_1. However, this may depend on θ or n.
5. Consistency
A consistent estimator narrows in on the target value as n increases (bias and variance both tend to zero). A biased estimator can still be consistent, but it must have bias → 0 as n → ∞.
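A quick simulation sketch of the identity mse(T) = var(T) + bias(T)² (the estimators, model, and sample sizes are illustrative assumptions): compare the unbiased sample variance with the biased divide-by-n version; the biased one can still win on mse.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2, reps = 10, 4.0, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

# Two estimators of sigma^2: the unbiased S^2 (divide by n-1) and the
# biased divide-by-n version.
s2_unbiased = samples.var(axis=1, ddof=1)
s2_biased = samples.var(axis=1, ddof=0)

def mse_and_decomposition(t, target):
    """Return (direct mse, var + bias^2) so the identity can be checked."""
    bias = t.mean() - target
    return np.mean((t - target) ** 2), t.var() + bias ** 2

mse_u, decomp_u = mse_and_decomposition(s2_unbiased, sigma2)
mse_b, decomp_b = mse_and_decomposition(s2_biased, sigma2)
```

For normal data the divide-by-n estimator has nonzero bias but smaller mse than S², a concrete example of the unbiasedness/variance trade-off above.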
TOPIC 2: SAMPLING DISTRIBUTIONS

Distributions Derived from the Normal
1. Square of a Standard Normal (and sum of independent chi-squares)

If Z ~ N(0,1), then X = Z² ~ χ²_1, so that if Z_1, Z_2, …, Z_n are i.i.d. N(0,1), then

Y = Σ_{i=1}^n Z_i² ~ χ²_n

And if X_1 ~ χ²_{v_1} independently of X_2 ~ χ²_{v_2}, then X_1 + X_2 ~ χ²_{v_1+v_2}.

Summary: N(0,1)² ~ χ²_1; χ²_v ~ Gamma(v/2, 2); χ²_m + χ²_n ~ χ²_{m+n} if the two chi-squares are independent.

2. Student's t Distribution
Let Z ~ N(0,1), independently of U ~ χ²_v. Let T be the rv given by

T = Z / √(U/v)

Then we say that T ~ t_v (T is distributed as Student's t with v degrees of freedom).
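Both constructions above can be checked by simulation; a minimal numpy sketch (v, the seed, and the sample count are arbitrary illustrative choices): build U as a sum of v squared standard normals, form T = Z/√(U/v), and compare moments with the known χ²_v and t_v values.

```python
import numpy as np

rng = np.random.default_rng(3)
v, N = 8, 200_000

# Sum of v independent squared N(0,1)'s ~ chi^2_v (mean v, variance 2v).
u = (rng.standard_normal((N, v)) ** 2).sum(axis=1)

# Independent numerator Z ~ N(0,1); then T = Z / sqrt(U/v) ~ t_v.
z = rng.standard_normal(N)
t = z / np.sqrt(u / v)
# t_v has mean 0 (for v > 1) and variance v/(v - 2) (for v > 2).
```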
3. The F Distribution
Suppose U ~ χ²_{v_1} and V ~ χ²_{v_2} are independent. Define F by

F = (U/v_1) / (V/v_2)

Then F ~ F_{v_1,v_2} (we say that F is distributed as F with v_1 and v_2 degrees of freedom).
4. The Beta Distribution
Let X ~ Γ(α_1, β) independently of Y ~ Γ(α_2, β). Let

B = X / (X + Y)

Then B ~ Beta(α_1, α_2), and X + Y ~ Γ(α_1 + α_2, β). Informally: Γ(α_1, β) / [Γ(α_1, β) + Γ(α_2, β)] ~ Beta(α_1, α_2).

5. The Cauchy Distribution
If Z_1 ~ N(0,1) independently of Z_2 ~ N(0,1), then Z_1 / Z_2 ~ Cauchy. Since −Z ~ N(0,1) whenever Z ~ N(0,1), also −Z_1 / Z_2 ~ Cauchy.

Related symmetry fact: if Z ~ N(0,1) and W equals Z with probability 1/2 and −Z with probability 1/2, then W ~ N(0,1). Thus, W has the same pdf as Z, although it is obviously not equal to Z.
Sampling Distributions

Sample Mean: X̄ = (1/n) Σ_{i=1}^n X_i

Sample Variance: S² = (1/(n−1)) Σ_{i=1}^n (X_i − X̄)²

Let X_1, …, X_n be i.i.d. N(μ, σ²). Then

(n−1)S²/σ² ~ χ²_{n−1} and X̄ ~ N(μ, σ²/n)
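A simulation sketch of these two results (μ, σ², n, and the replication count are illustrative assumptions): draw many samples, then check that X̄ behaves as N(μ, σ²/n) and that (n−1)S²/σ² has the χ²_{n−1} mean n−1 and variance 2(n−1).

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma2, n, reps = 3.0, 2.0, 12, 100_000
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

xbar = x.mean(axis=1)          # should behave as N(mu, sigma2/n)
s2 = x.var(axis=1, ddof=1)     # sample variance S^2
q = (n - 1) * s2 / sigma2      # should behave as chi^2_{n-1}
```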
1. Properties of X̄ and S² when sampling from the normal
If the X_i are i.i.d. N(μ, σ²), then X̄ and S² are independent random variables. This is true even though S² is a function of X̄.
2. Distribution of the t statistic

H_0: μ = μ_0.  T = (X̄ − μ_0) / (S/√n). When H_0 is true,

T ~ N(0,1) / √(χ²_{n−1}/(n−1)) ~ t_{n−1}

3. Distribution of the F statistic
Suppose X_{11}, X_{12}, …, X_{1m} are i.i.d. N(μ_1, σ_1²) and X_{21}, X_{22}, …, X_{2n} are i.i.d. N(μ_2, σ_2²), independent of the first sample.
H_0: σ_1² = σ_2², equivalently H_0: σ_1²/σ_2² = 1. If H_0 is true, there is one common variance σ². Thus,

F = S_1²/S_2² = (S_1²/σ²) / (S_2²/σ²) ~ [χ²_{m−1}/(m−1)] / [χ²_{n−1}/(n−1)] ~ F_{m−1, n−1}
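A minimal sketch of the F statistic under H_0 (sample sizes and the common variance are illustrative assumptions): with equal variances, S_1²/S_2² should behave like F_{m−1, n−1}, whose mean is v_2/(v_2 − 2) for v_2 > 2.

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, reps = 9, 13, 100_000

# Two independent normal samples with a common variance (H0 true).
x1 = rng.normal(0.0, 1.0, size=(reps, m))
x2 = rng.normal(0.0, 1.0, size=(reps, n))

# F = S1^2 / S2^2 ~ F_{m-1, n-1} under H0.
f = x1.var(axis=1, ddof=1) / x2.var(axis=1, ddof=1)

# F_{v1, v2} has mean v2/(v2 - 2) for v2 > 2; here 12/10 = 1.2.
mean_f = f.mean()
```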
TOPIC 3: INTERVAL ESTIMATION (pivotal quantities); ASYMPTOTIC DISTRIBUTION OF THE MLE

1a. Symmetric Confidence Intervals (σ² known)

The 100(1−α)% symmetric CI for μ is X̄ ± z_{α/2} × σ/√n.

1b. Non-Symmetric Confidence Intervals (σ² known)

For H_1: μ < μ_0, the 100(1−α)% CI for μ is (−∞, X̄ + z_α × σ/√n).

For H_1: μ > μ_0, the 100(1−α)% CI for μ is (X̄ − z_α × σ/√n, ∞).

σ² unknown:

The 100(1−α)% symmetric CI for μ is X̄ ± t_{n−1, α/2} × S/√n.
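A simulation sketch of the known-variance symmetric CI (all numbers are illustrative; z_{0.025} ≈ 1.96 is hardcoded to keep the sketch dependency-free): repeated 95% intervals X̄ ± z_{α/2} σ/√n should cover the true μ about 95% of the time.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n, reps = 10.0, 3.0, 25, 100_000
z_half = 1.959964  # z_{alpha/2} for alpha = 0.05 (hardcoded, avoids scipy)

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
half_width = z_half * sigma / np.sqrt(n)     # z_{alpha/2} * sigma / sqrt(n)

lo, hi = xbar - half_width, xbar + half_width
coverage = np.mean((lo <= mu) & (mu <= hi))  # long-run coverage, ~0.95
```

For the σ²-unknown case, replace z_{α/2} with t_{n−1, α/2} and σ with S; coverage remains 100(1−α)% exactly under normality.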