Vol. 31, 2012, pp. 53-61
Bayes Estimators of the Location Parameter of Lognormal Distribution Under Some Symmetric and Asymmetric Loss Functions

D. C. Nandi¹ and Dr. M. K. Roy²

¹Department of Statistics, Comilla University, Comilla
²Department of Statistics, Chittagong University, Chittagong

Abstract
This paper is concerned with Bayes estimators of the location parameter \mu of the lognormal distribution, developed under some symmetric and asymmetric loss functions. A comparative study of the Bayes estimators under these loss functions is also provided.

1. Introduction
The lognormal distribution was first introduced by Galton in 1879. It was then mainly used in problems of economics, biology and reliability theory. Around 1980-81, Cochran and Williams applied this distribution in agricultural, entomological and even literary research. In particular, it arises in the study of the dimensions of particles under pulverization.
If X is a continuous random variable, then the probability density function of the lognormal distribution is defined as

f(x; \mu, \sigma^2) = \frac{1}{x \sigma \sqrt{2\pi}} \, e^{-\frac{1}{2\sigma^2}(\log x - \mu)^2}; \quad x > 0,\ -\infty < \mu < \infty,\ \sigma > 0, \qquad (1.1)

where \mu and \sigma^2 are the two parameters of the distribution. In particular, if \sigma^2 = 1, the probability density function (1.1) reduces to
f(x; \mu) = \frac{1}{x \sqrt{2\pi}} \, e^{-\frac{1}{2}(\log x - \mu)^2}; \quad x > 0,\ -\infty < \mu < \infty. \qquad (1.2)
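As a quick numerical check of the density above, the formula can be coded directly. This is an illustrative sketch, not part of the paper; it only uses the standard fact that X = e^Y is lognormal when Y ~ N(\mu, \sigma^2), and the helper name `lognormal_pdf` is ours.

```python
import math
import random

def lognormal_pdf(x, mu, sigma2=1.0):
    """Density (1.1); sigma2 = 1 gives the special case (1.2)."""
    if x <= 0:
        return 0.0
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma2)) / (
        x * math.sqrt(sigma2) * math.sqrt(2 * math.pi))

# If Y ~ N(mu, 1) then X = exp(Y) is lognormal; check E[X] = exp(mu + 1/2).
random.seed(1)
mu = 2.0
sample = [math.exp(random.gauss(mu, 1.0)) for _ in range(200_000)]
mean_mc = sum(sample) / len(sample)
print(round(mean_mc, 2), round(math.exp(mu + 0.5), 2))
```

The Monte Carlo mean should agree with the theoretical mean up to simulation error.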
2. Preliminaries
We shall derive the Bayes estimators of the parameter of the lognormal distribution under the following types of loss functions.

The squared error (S.E.) loss function is the most commonly used loss function. It is defined as

L(\hat{\mu}, \mu) = a(\hat{\mu} - \mu)^2; \quad a > 0. \qquad (2.1)
The 0-1 type of loss function can be defined as

L(\hat{\mu}, \mu) = \begin{cases} 0 & \text{if } |\hat{\mu} - \mu| \le \varepsilon, \\ 1 & \text{if } |\hat{\mu} - \mu| > \varepsilon, \end{cases} \qquad (2.2)

where \varepsilon is a known small quantity.
The generalized 0-1,1 type of loss function can be defined as

L(\hat{\mu}, \mu) = \begin{cases} 0 & \text{if } -\varepsilon_1 \le \hat{\mu} - \mu \le \varepsilon_2, \\ 1 & \text{if } \hat{\mu} - \mu < -\varepsilon_1, \\ 1 & \text{if } \hat{\mu} - \mu > \varepsilon_2, \end{cases} \qquad (2.3)

where \varepsilon_1 and \varepsilon_2 are known small quantities.
The linear exponential (LINEX) loss function, introduced by Varian (1975), is one of the most useful asymmetric loss functions. Varian proposed the LINEX loss function in the form

L(\Delta) = e^{c\Delta} - c\Delta - 1; \quad \Delta = \hat{\mu} - \mu,\ c \neq 0, \qquad (2.4)

under which the Bayes estimator of \mu is \hat{\mu}_{BL} = -\frac{1}{c} \ln E(e^{-c\mu} \mid X).
The MLINEX (modified linear exponential) loss function, proposed by Wahed (1994), is defined as

L(\hat{\mu}, \mu) = w\left[\left(\frac{\hat{\mu}}{\mu}\right)^c - c \ln \frac{\hat{\mu}}{\mu} - 1\right]; \quad c \neq 0,\ w > 0. \qquad (2.5)
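For concreteness, the five loss functions (2.1)-(2.5) can be written out as small functions. This is an illustrative sketch; the function names and default parameter values are ours, not from the paper.

```python
import math

def squared_error_loss(est, true, a=1.0):              # (2.1)
    return a * (est - true) ** 2

def zero_one_loss(est, true, eps=0.1):                 # (2.2)
    return 0.0 if abs(est - true) <= eps else 1.0

def zero_one_one_loss(est, true, eps1=0.1, eps2=0.1):  # (2.3)
    return 0.0 if -eps1 <= est - true <= eps2 else 1.0

def linex_loss(est, true, c=1.0):                      # (2.4)
    d = est - true
    return math.exp(c * d) - c * d - 1.0

def mlinex_loss(est, true, c=1.0, w=1.0):              # (2.5)
    r = est / true
    return w * (r ** c - c * math.log(r) - 1.0)

# LINEX with c > 0 penalises over-estimation more than under-estimation:
print(linex_loss(1.5, 1.0), linex_loss(0.5, 1.0))
```

The printed pair illustrates the asymmetry that distinguishes (2.4) and (2.5) from the symmetric losses (2.1) and (2.2).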
3. Main Results

Let X = (X_1, X_2, \ldots, X_n) be a random sample of size n from the probability density function (1.2). We shall find the Bayes estimators of the parameter \mu under the loss functions (2.1), (2.2), (2.3), (2.4) and (2.5). The loss functions (2.1) and (2.2) are symmetric; the rest are asymmetric. The main results of the paper are contained in the following theorems, which depend on the likelihood function and the posterior density.

3.1 Bayes estimators of \mu for the lognormal distribution
Theorem 3.1.1. Let a random sample of n independent observations be drawn from the lognormal distribution with parameter \mu and \sigma^2 = 1. Then under the loss function (2.1) the Bayes estimator of \mu is

\hat{\mu}_{BSE} = \frac{\sum \log x_i}{n+1}.
Proof: For the given sample X = (X_1, X_2, \ldots, X_n), the likelihood function is

L(\mu; x) = \frac{1}{(2\pi)^{n/2} \prod_{i=1}^{n} x_i} \, e^{-\frac{1}{2} \sum_{i=1}^{n} (\log x_i - \mu)^2}; \quad x_i > 0. \qquad (3.1)
Assume the conjugate prior distribution of \mu to be normal with zero mean and unit variance, \mu \sim N(0, 1), so that the prior density of \mu is

g(\mu) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2}\mu^2}; \quad -\infty < \mu < \infty. \qquad (3.2)
Then the posterior density of \mu for the given sample is

f(\mu \mid x) = \frac{L(\mu; x)\, g(\mu)}{\int_{-\infty}^{\infty} L(\mu; x)\, g(\mu)\, d\mu}
= \frac{e^{-\frac{1}{2}\sum(\log x_i - \mu)^2 - \frac{1}{2}\mu^2}}{\int_{-\infty}^{\infty} e^{-\frac{1}{2}\sum(\log x_i - \mu)^2 - \frac{1}{2}\mu^2}\, d\mu}
= \frac{e^{-\frac{n+1}{2}\left(\mu - \frac{\sum \log x_i}{n+1}\right)^2}}{\int_{-\infty}^{\infty} e^{-\frac{n+1}{2}\left(\mu - \frac{\sum \log x_i}{n+1}\right)^2}\, d\mu},

which implies that

f(\mu \mid x) = \sqrt{\frac{n+1}{2\pi}}\, e^{-\frac{n+1}{2}\left(\mu - \frac{\sum \log x_i}{n+1}\right)^2}; \quad -\infty < \mu < \infty. \qquad (3.3)

That is,

\mu \mid x \sim N\left(\frac{\sum \log x_i}{n+1},\ \frac{1}{n+1}\right).
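The normal posterior update for \mu is easy to verify numerically. The following sketch (the helper name `posterior_params` is ours) computes the posterior mean and variance under the N(0, 1) prior from a simulated sample.

```python
import math
import random

def posterior_params(xs):
    """Posterior of mu under the N(0, 1) prior: N(sum(log x_i)/(n+1), 1/(n+1))."""
    n = len(xs)
    s = sum(math.log(x) for x in xs)
    return s / (n + 1), 1.0 / (n + 1)

random.seed(0)
mu_true = 2.0
xs = [math.exp(random.gauss(mu_true, 1.0)) for _ in range(50)]
post_mean, post_var = posterior_params(xs)
# The posterior concentrates around mu_true as n grows.
print(round(post_mean, 3), round(post_var, 4))
```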
Note that the mean and the mode of a normal distribution coincide, so the mean and mode of the posterior distribution are

E(\mu \mid x) = M_o = \frac{\sum \log x_i}{n+1}.
Since, under the squared error loss function, the Bayes estimator is the mean of the posterior density, the Bayes estimator under squared error loss is

\hat{\mu}_{BSE} = \frac{\sum \log x_i}{n+1},

which differs from the MLE, given by

\hat{\mu}_{MLE} = \frac{\sum \log x_i}{n}.
As n tends to infinity, the estimator \hat{\mu}_{BSE} tends to the estimator \hat{\mu}_{MLE}.

Theorem 3.1.2. Under the loss function (2.2), the Bayes estimator of \mu is

\hat{\mu}_{B(0,1)} = \frac{\sum \log x_i}{n+1}.
Proof: Under the 0-1 loss function, the Bayes estimator of \mu is the mode of the posterior distribution, which for the normal posterior (3.3) coincides with the mean. Hence

\hat{\mu}_{B(0,1)} = \frac{\sum \log x_i}{n+1}.
Theorem 3.1.3. Under the loss function (2.3), the Bayes estimator of \mu is

\hat{\mu}_{B(0,1,1)} = \frac{\sum \log x_i}{n+1} + \frac{\varepsilon_2 - \varepsilon_1}{2}.
Proof: Under the loss function (2.3), the Bayes estimator of \mu is the mode of the posterior distribution shifted by half the difference of the two known small quantities \varepsilon_1 and \varepsilon_2. Using the value of M_o we get

\hat{\mu}_{B(0,1,1)} = M_o + \frac{\varepsilon_2 - \varepsilon_1}{2} = \frac{\sum \log x_i}{n+1} + \frac{\varepsilon_2 - \varepsilon_1}{2}.
Theorem 3.1.4. For the LINEX loss function (2.4), the Bayes estimator is

\hat{\mu}_{BL} = \frac{\sum \log x_i}{n+1} - \frac{c}{2(n+1)}.
Proof: To prove the theorem we evaluate

E(e^{-c\mu} \mid X) = \int_{-\infty}^{\infty} e^{-c\mu} \sqrt{\frac{n+1}{2\pi}}\, e^{-\frac{n+1}{2}\left(\mu - \frac{\sum \log x_i}{n+1}\right)^2} d\mu
= e^{-c \frac{\sum \log x_i}{n+1} + \frac{c^2}{2(n+1)}}.
Thus the Bayes estimator under the LINEX loss function is

\hat{\mu}_{BL} = -\frac{1}{c} \ln E(e^{-c\mu} \mid X) = -\frac{1}{c} \ln\left[e^{-c \frac{\sum \log x_i}{n+1} + \frac{c^2}{2(n+1)}}\right] = \frac{\sum \log x_i}{n+1} - \frac{c}{2(n+1)}.
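Theorem 3.1.4 can be checked by Monte Carlo: draw from the normal posterior of \mu and compare -(1/c) ln E(e^{-c\mu} | x), estimated by simulation, against the closed form. An illustrative sketch (the helper name `linex_bayes` is ours):

```python
import math
import random

def linex_bayes(xs, c):
    """Closed form of Theorem 3.1.4: sum(log x_i)/(n+1) - c/(2(n+1))."""
    n = len(xs)
    return sum(math.log(x) for x in xs) / (n + 1) - c / (2 * (n + 1))

random.seed(3)
xs = [math.exp(random.gauss(1.0, 1.0)) for _ in range(20)]
c = 1.0
n = len(xs)
m = sum(math.log(x) for x in xs) / (n + 1)   # posterior mean
sd = 1.0 / math.sqrt(n + 1)                  # posterior standard deviation
draws = [random.gauss(m, sd) for _ in range(200_000)]
mc_estimate = -math.log(sum(math.exp(-c * t) for t in draws) / len(draws)) / c
print(round(linex_bayes(xs, c), 4), round(mc_estimate, 4))
```

The two printed values should agree up to Monte Carlo error.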
Theorem 3.1.5. If the loss function is (2.5), then the Bayes estimator can be defined as
1
( 2) ( 1)
ˆ ˆ 1 ( 1)ˆ 2 ( ˆ ˆ)
2
c
c c c
BML c c c .
Proof: Under the MLINEX loss function stated in (2.5), the Bayes estimator is defined by

\hat{\mu}_{BML} = \left[E(\mu^{-c} \mid X)\right]^{-1/c},

where

E(\mu^{-c} \mid X) = \int_{-\infty}^{\infty} \mu^{-c} \sqrt{\frac{n+1}{2\pi}}\, e^{-\frac{n+1}{2}\left(\mu - \frac{\sum \log x_i}{n+1}\right)^2} d\mu.

This expectation can be evaluated by Lindley's approximation, writing

E(u(\mu) \mid X) = \frac{\int u(\mu)\, e^{L(\mu) + \rho(\mu)}\, d\mu}{\int e^{L(\mu) + \rho(\mu)}\, d\mu},

where L(\mu) is the log-likelihood function and \rho(\mu) is the logarithm of the prior density.
According to Lindley's approximation, E(u(\mu) \mid X) can be evaluated as

E(u(\mu) \mid X) \approx u(\hat{\mu}) + \frac{1}{2}\left[u''(\hat{\mu}) + 2u'(\hat{\mu})\rho'(\hat{\mu})\right]\sigma^2 + \frac{1}{2} L'''(\hat{\mu})\, u'(\hat{\mu})\, \sigma^4,

where \hat{\mu} is the MLE of \mu and \sigma^2 = -1/L''(\hat{\mu}).
Here u(\mu) = \mu^{-c}, so that

u'(\hat{\mu}) = -c\hat{\mu}^{-c-1}, \quad u''(\hat{\mu}) = c(c+1)\hat{\mu}^{-c-2}, \quad u'''(\hat{\mu}) = -c(c+1)(c+2)\hat{\mu}^{-c-3}.

From the prior density (3.2), \rho(\mu) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\mu^2, so that \rho'(\hat{\mu}) = -\hat{\mu} and \rho''(\hat{\mu}) = -1. From the likelihood (3.1), the log-likelihood function is

L(\mu) = -\frac{n}{2}\log(2\pi) - \sum \log x_i - \frac{1}{2}\sum(\log x_i - \mu)^2,

so that

L'(\hat{\mu}) = \sum(\log x_i - \hat{\mu}) = 0, \quad L''(\hat{\mu}) = -n, \quad L'''(\hat{\mu}) = 0, \quad \sigma^2 = -\frac{1}{L''(\hat{\mu})} = \frac{1}{n}.
Substituting these quantities into Lindley's approximation, with L'''(\hat{\mu}) = 0, we get

E(\mu^{-c} \mid X) \approx u(\hat{\mu}) + \frac{1}{2}\left[u''(\hat{\mu}) + 2u'(\hat{\mu})\rho'(\hat{\mu})\right]\sigma^2 = \hat{\mu}^{-c}\left[1 + \frac{1}{2n}\left\{c(c+1)\hat{\mu}^{-2} + 2c\right\}\right].
Finally, the Bayes estimator under the MLINEX loss function is

\hat{\mu}_{BML} = \left[E(\mu^{-c} \mid X)\right]^{-1/c} = \hat{\mu}\left[1 + \frac{1}{2n}\left\{c(c+1)\hat{\mu}^{-2} + 2c\right\}\right]^{-1/c}.
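The Lindley-based MLINEX estimator is straightforward to code. The sketch below assumes the form \hat{\mu}_{BML} = \hat{\mu}[1 + (1/2n){c(c+1)\hat{\mu}^{-2} + 2c}]^{-1/c} with the N(0, 1) prior; the helper name is ours. It also shows the correction vanishing as n grows, so the estimator approaches the MLE.

```python
import math

def mlinex_bayes_lindley(xs, c):
    """MLINEX Bayes estimator of mu via Lindley's approximation."""
    n = len(xs)
    mu_hat = sum(math.log(x) for x in xs) / n            # MLE of mu
    bracket = 1.0 + (c * (c + 1) / mu_hat ** 2 + 2 * c) / (2 * n)
    return mu_hat * bracket ** (-1.0 / c)

# Toy samples with every log x_i equal to 1, so the MLE is exactly 1.
for n in (5, 50, 500):
    sample = [math.e] * n
    print(n, round(mlinex_bayes_lindley(sample, c=1.0), 4))
```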
From the above discussion we have obtained the Bayes estimates \hat{\mu}_{BSE}, \hat{\mu}_{B(0,1)}, \hat{\mu}_{B(0,1,1)}, \hat{\mu}_{BL} and \hat{\mu}_{BML} of the parameter \mu of the log-normal distribution under the different types of loss function, together with the MLE \hat{\mu}_{MLE}. To compare these estimators it is very complicated to evaluate the risk or mean squared error (MSE) of each one: by definition, the risk of an estimator is a function of the parameter alone and must be evaluated through multiple integration or numerical approximation. So we consider the biases of the estimators as the basis of our comparison. The bias of an estimator \hat{\mu} is defined as
Bias(\hat{\mu}) = E(\hat{\mu}) - \mu,

where \hat{\mu} is the estimate of the true parameter \mu. The MSE of \hat{\mu} can be written as

MSE(\hat{\mu}) = Var(\hat{\mu}) + \left[Bias(\hat{\mu})\right]^2.

From this equation we see that when the bias of an estimator is zero, its MSE reduces to the variance of the estimator.
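The decomposition MSE = Var + Bias² can be verified empirically, here for \hat{\mu}_{BSE}; an illustrative sketch (the identity holds exactly for the sample versions computed below):

```python
import random

# Simulate many replications of mu_hat_BSE and compare MSE with Var + Bias^2.
rng = random.Random(7)
mu, n, reps = 2.0, 10, 20_000
estimates = []
for _ in range(reps):
    logs = [rng.gauss(mu, 1.0) for _ in range(n)]   # log x_i ~ N(mu, 1)
    estimates.append(sum(logs) / (n + 1))           # mu_hat_BSE
mean_est = sum(estimates) / reps
var = sum((v - mean_est) ** 2 for v in estimates) / reps
bias = mean_est - mu
mse = sum((v - mu) ** 2 for v in estimates) / reps
print(round(mse, 4), round(var + bias ** 2, 4))
```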
4. Empirical Study
The estimators \hat{\mu}_{MLE}, \hat{\mu}_{BSE}, \hat{\mu}_{B(0,1)}, \hat{\mu}_{B(0,1,1)}, \hat{\mu}_{BL} and \hat{\mu}_{BML} of the parameter \mu of the log-normal distribution are numerically evaluated by the Monte Carlo method. Distinct values of the unknown parameter \mu are considered. The sample size has been varied to check the consistency of the estimators, and their biases with respect to changes in the sample size n and the loss function parameter c have been observed. To ensure uniformity of comparison, the value of c is kept the same for the LINEX and MLINEX loss functions. Numerical results are shown in the following tables.
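A simulation design of this kind can be sketched as follows. This is an assumed reconstruction, not the authors' original code: the closed-form estimator formulas follow Section 3, and the default values of \varepsilon_1, \varepsilon_2 and the number of replications are illustrative.

```python
import math
import random

def estimators(xs, c, eps1=0.0, eps2=0.05):
    """Closed-form estimators of mu from Section 3."""
    n = len(xs)
    s = sum(math.log(x) for x in xs)
    est = {}
    est["MLE"] = s / n
    est["BSE"] = est["B01"] = s / (n + 1)           # squared error and 0-1 losses
    est["B011"] = s / (n + 1) + (eps2 - eps1) / 2   # generalized 0-1,1 loss
    est["BL"] = s / (n + 1) - c / (2 * (n + 1))     # LINEX loss
    return est

def bias_table(mu, c, n, reps=5000, seed=42):
    """Monte Carlo estimate of the bias of each estimator."""
    rng = random.Random(seed)
    totals = {}
    for _ in range(reps):
        xs = [math.exp(rng.gauss(mu, 1.0)) for _ in range(n)]
        for name, val in estimators(xs, c).items():
            totals[name] = totals.get(name, 0.0) + val
    return {name: tot / reps - mu for name, tot in totals.items()}

for n in (5, 10, 20):
    biases = bias_table(mu=2.0, c=1.0, n=n)
    print(n, {k: round(v, 4) for k, v in biases.items()})
```

The biases shrink toward zero as n grows, the qualitative behaviour reported in the tables below.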
5. Conclusions

The estimated numerical values of the six estimators of the parameter \mu of the log-normal distribution are shown in tabular form; each table represents the performance of the different estimators. On the basis of the numerical study we may conclude that the biases of all the estimates tend to zero as the sample size n increases, i.e., all the estimators obey the large-sample law. In all cases the maximum likelihood estimator (MLE) gives the best result, i.e., the MLE shows no error in our study. For highly negative values of the loss function parameter c, the MLINEX loss function is superior to the other loss functions. In the estimation of the parameter of the log-normal distribution, the MLE and \hat{\mu}_{BML} give the best estimates; it is also found that the MLINEX loss function is better to use.

Table 1: Mean and bias for
\mu of the log-normal distribution when \mu = 2, c = 1, w = 1, for different values of the sample size n. Each cell shows the mean with the bias in parentheses.

n  | \hat{\mu}_{MLE}  | \hat{\mu}_{BSE}  | \hat{\mu}_{B(0,1)} | \hat{\mu}_{B(0,1,1)} | \hat{\mu}_{BL}   | \hat{\mu}_{BML}
5  | 2.0000 (0.0000)  | 1.2505 (-0.7495) | 1.2505 (-0.7495)   | 1.2755 (-0.7245)     | 1.1671 (-0.8329) | 1.1429 (-0.8571)
10 | 2.0000 (0.0000)  | 1.2860 (-0.7140) | 1.2860 (-0.7140)   | 1.3110 (-0.6890)     | 1.2405 (-0.7595) | 1.1429 (-0.8571)
15 | 2.0000 (0.0000)  | 1.3841 (-0.6159) | 1.3841 (-0.6159)   | 1.4091 (-0.5909)     | 1.3528 (-0.6472) | 1.1429 (-0.8571)
20 | 2.0000 (0.0000)  | 1.4150 (-0.5850) | 1.4150 (-0.5850)   | 1.4400 (-0.5600)     | 1.3912 (-0.6088) | 1.1429 (-0.8571)
Table 2: Mean and bias for \mu of the log-normal distribution when \mu = 2, c = -1, w = 1, for different values of the sample size n.

n  | \hat{\mu}_{MLE}  | \hat{\mu}_{BSE}  | \hat{\mu}_{B(0,1)} | \hat{\mu}_{B(0,1,1)} | \hat{\mu}_{BL}   | \hat{\mu}_{BML}
5  | 2.0000 (0.0000)  | 1.2505 (-0.7495) | 1.2505 (-0.7495)   | 1.1671 (-0.8329)     | 1.1672 (-0.8328) | 1.0000 (-1.0000)
10 | 2.0000 (0.0000)  | 1.2860 (-0.7140) | 1.2860 (-0.7140)   | 1.2405 (-0.7595)     | 1.2406 (-0.7594) | 1.0000 (-1.0000)
15 | 2.0000 (0.0000)  | 1.3841 (-0.6159) | 1.3841 (-0.6159)   | 1.3528 (-0.6472)     | 1.3529 (-0.6471) | 1.0000 (-1.0000)
20 | 2.0000 (0.0000)  | 1.4150 (-0.5850) | 1.4150 (-0.5850)   | 1.3912 (-0.6088)     | 1.3912 (-0.6088) | 1.0000 (-1.0000)

Table 3: Mean and bias for \mu of the log-normal distribution when \mu = 1, c = -1, w = 1, for different values of the sample size n.
n  | \hat{\mu}_{MLE}  | \hat{\mu}_{BSE}  | \hat{\mu}_{B(0,1)} | \hat{\mu}_{B(0,1,1)} | \hat{\mu}_{BL}   | \hat{\mu}_{BML}
5  | 1.0000 (0.0000)  | 0.3710 (-0.6290) | 0.3710 (-0.6290)   | 0.3959 (-0.6041)     | 0.2877 (-0.7123) | 1.0000 (0.0000)
10 | 1.0000 (0.0000)  | 0.4053 (-0.5947) | 0.4053 (-0.5947)   | 0.4303 (-0.5697)     | 0.3599 (-0.6401) | 1.0000 (0.0000)
15 | 1.0000 (0.0000)  | 0.4465 (-0.5535) | 0.4465 (-0.5535)   | 0.4778 (-0.5222)     | 0.4152 (-0.5848) | 1.0000 (0.0000)
20 | 1.0000 (0.0000)  | 0.4626 (-0.5374) | 0.4626 (-0.5374)   | 0.4876 (-0.5124)     | 0.4388 (-0.5612) | 1.0000 (0.0000)

Table 4: Mean and bias for \mu of the log-normal distribution when \mu = 1, c = -4, w = 1, for different values of the sample size n.
n  | \hat{\mu}_{MLE}  | \hat{\mu}_{BSE}  | \hat{\mu}_{B(0,1)} | \hat{\mu}_{B(0,1,1)} | \hat{\mu}_{BL}   | \hat{\mu}_{BML}
5  | 1.0000 (0.0000)  | 0.3710 (-0.6290) | 0.3710 (-0.6290)   | 0.3959 (-0.6041)     | 0.3076 (-0.6924) | 0.6148 (-0.3852)
10 | 1.0000 (0.0000)  | 0.4053 (-0.5947) | 0.4053 (-0.5947)   | 0.4303 (-0.5697)     | 0.7689 (-0.2311) | 0.6148 (-0.3852)
15 | 1.0000 (0.0000)  | 0.4465 (-0.5535) | 0.4465 (-0.5535)   | 0.4715 (-0.5285)     | 0.6965 (-0.3035) | 0.6148 (-0.3852)
20 | 1.0000 (0.0000)  | 0.4626 (-0.5374) | 0.4626 (-0.5374)   | 0.4876 (-0.5124)     | 0.6530 (-0.3470) | 0.6148 (-0.3852)
Table 5: Mean and bias for \mu of the log-normal distribution when \mu = 1.5, c = 1, w = 1, for different values of the sample size n.
n  | \hat{\mu}_{MLE}  | \hat{\mu}_{BSE}  | \hat{\mu}_{B(0,1)} | \hat{\mu}_{B(0,1,1)} | \hat{\mu}_{BL}   | \hat{\mu}_{BML}
5  | 2.0000 (0.5000)  | 0.4171 (-1.0829) | 0.4171 (-1.0829)   | 0.4421 (-1.0579)     | 0.3893 (-1.1107) | 0.6667 (-0.8333)
10 | 2.0000 (0.5000)  | 0.3769 (-1.1231) | 0.3769 (-1.1231)   | 0.4019 (-1.0981)     | 0.3617 (-1.1383) | 0.6667 (-0.8333)
15 | 2.0000 (0.5000)  | 0.4465 (-1.0535) | 0.4465 (-1.0535)   | 0.4715 (-1.0285)     | 0.4361 (-1.0639) | 0.6667 (-0.8333)
20 | 2.0000 (0.5000)  | 0.4626 (-1.0374) | 0.4626 (-1.0374)   | 0.4626 (-1.0374)     | 0.4626 (-1.0374) | 0.6667 (-0.8333)

References:
Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association, 81, 446-451.

Podder, C. K., Roy, M. K. and Sinha, S. K. (2001). Bayesian estimation of the parameter of Maxwell distribution. Bangladesh Journal of Science and Technology, 3(1), 143-150.

Podder, C. K. and Roy, M. K. (2000). Bayesian estimation of the parameter of Pareto distribution. Jahangirnagar University Journal of Science, 22-23, 271-280.

Fisz, M. Probability Theory and Mathematical Statistics, 3rd edition. John Wiley & Sons, Inc.

Podder, C. K., Roy, M. K. and Biswas (2002). Bayesian estimation of the parameter of Weibull distribution. Journal of Bangladesh Academy of Sciences, 26(2), 157-166.

Lindley, D. V. (1980). Approximate Bayesian methods. In Bayesian Statistics, Valencia.

Pandey, B. N. and Raj, O. Bayesian estimation of mean and square of mean of Normal distribution using LINEX loss function.

Rubinstein, R. Y. (1981). Simulation and the Monte Carlo Method. Wiley Series in Probability and Mathematical Statistics.

Varian, H. R. (1975). A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage, North-Holland, 195-208.

Wahed, A. S. F. and Uddin, M. B. (1994). Bayesian estimation under asymmetrical loss function. Journal of Statistical Research, 16(1-2).

Box, G. E. P. and Draper, N. R. (1965). A Bayesian estimation of common parameters from several responses. Biometrika, 52, 355-365.