On The Three-Parameter Weibull Distribution Shape Parameter Estimation
The probability density function and the cumulative distribution function of the three-parameter Weibull distribution are

f_X(x) = \frac{\alpha}{\beta}\left(\frac{x-\mu}{\beta}\right)^{\alpha-1} e^{-\left(\frac{x-\mu}{\beta}\right)^{\alpha}}, (1.1)

and

F_X(x) = 1 - e^{-\left(\frac{x-\mu}{\beta}\right)^{\alpha}}, (1.2)
404 Mahdi Teimouri and Arjun K. Gupta
respectively, for x > \mu, \alpha > 0, \beta > 0. The parameters \alpha, \beta and \mu are known as
the shape, scale and location parameters, respectively. The hazard rate function
corresponding to (1.1) and (1.2) is
H(x) = \frac{\alpha}{\beta}\left(\frac{x-\mu}{\beta}\right)^{\alpha-1}, (1.3)
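As a quick numerical illustration of the hazard rate (1.3), the sketch below (plain Python; the choices \beta = 1, \mu = 0 and the evaluation points are ours, purely for illustration) evaluates H(x) at two points for \alpha = 0.5, 1 and 2:

```python
def weibull_hazard(x, alpha, beta=1.0, mu=0.0):
    """Hazard rate (1.3): H(x) = (alpha/beta) * ((x - mu)/beta)**(alpha - 1)."""
    return (alpha / beta) * ((x - mu) / beta) ** (alpha - 1)

# alpha < 1: decreasing hazard; alpha = 1: constant (exponential case);
# alpha > 1: increasing hazard
for a in (0.5, 1.0, 2.0):
    print(a, weibull_hazard(1.0, a), weibull_hazard(2.0, a))
```

Comparing the two evaluation points for each \alpha exhibits the decreasing, constant and increasing regimes mentioned below.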
for x > \mu. So, the Weibull distribution can allow for decreasing, constant and
increasing hazard rates. This is one of the attractive properties that made the
Weibull distribution so applicable. The non-central moments corresponding to
(1.1) and (1.2) are given by
E(X^r) = \sum_{i=0}^{r} \binom{r}{i} \mu^{i}\, \beta^{r-i}\, \Gamma\left(1 + \frac{r-i}{\alpha}\right), (1.4)

where r is a positive integer and \Gamma(\cdot) denotes the gamma function.
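The moment formula (1.4) is easy to check numerically. The sketch below (plain Python; the function name and the parameter values are ours) verifies that for r = 1 the sum collapses to E(X) = \mu + \beta\Gamma(1 + 1/\alpha):

```python
import math

def weibull_raw_moment(r, alpha, beta, mu):
    """Non-central moment E(X^r) from (1.4)."""
    return sum(
        math.comb(r, i) * mu**i * beta**(r - i) * math.gamma(1 + (r - i) / alpha)
        for i in range(r + 1)
    )

# For r = 1 this reduces to mu + beta * Gamma(1 + 1/alpha)
m1 = weibull_raw_moment(1, alpha=2.0, beta=3.0, mu=1.0)
print(m1)  # equals 1 + 3 * Gamma(1.5)
```

The second moment minus the squared first moment also gives a positive variance, as it should.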
Many estimation methods have been proposed for estimating the parame-
ters of the Weibull distribution. We mention: maximum likelihood estimation
(Sirvanci and Yang, 1984), moments estimation (Cohen et al., 1984; Cran, 1988),
Bayesian estimation (Tsionas, 2000), quantile estimation (Wang and Keats, 1995),
logarithmic moment estimation (Johnson et al., 1994) and the probability weighted
moment estimation (Bartolucci et al., 1999). The most popular and the most
efficient of these is maximum likelihood estimation.
2. Main Results
Among the inference methods mentioned above, the most efficient, and the one
that has received the most attention in the literature, is maximum likelihood. The
MLE for the Weibull family is obtained by solving the following set of three
non-linear equations:
\frac{n}{\alpha} + \sum_{i=1}^{n}\log\left(\frac{x_i-\mu}{\beta}\right) - \sum_{i=1}^{n}\left(\frac{x_i-\mu}{\beta}\right)^{\alpha}\log\left(\frac{x_i-\mu}{\beta}\right) = 0, (2.1)

\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\beta}\right)^{\alpha} - n = 0, (2.2)

-(\alpha-1)\sum_{i=1}^{n}\frac{1}{x_i-\mu} + \frac{\alpha}{\beta}\sum_{i=1}^{n}\left(\frac{x_i-\mu}{\beta}\right)^{\alpha-1} = 0. (2.3)
These equations do not yield closed-form solutions for the parameters.
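Since no closed form exists, a numerical solution is needed. As a hedged sketch (assuming \mu is known and already subtracted from the data, which reduces the system to the two-parameter case), one can substitute the solution of (2.2) for \beta into (2.1) and bisect the resulting profile equation in \alpha; all names below are ours:

```python
import math, random

def weibull_mle_shape(y, lo=1e-3, hi=50.0):
    """Profile-likelihood MLE for the two-parameter Weibull shape.

    Substituting beta = (sum(y_i^alpha)/n)^(1/alpha) from (2.2) into (2.1)
    gives the monotone equation
        sum(y^a * log y)/sum(y^a) - 1/a - mean(log y) = 0,
    solved here by bisection."""
    logs = [math.log(v) for v in y]
    mean_log = sum(logs) / len(y)

    def g(a):
        w = [v ** a for v in y]
        return sum(wi * li for wi, li in zip(w, logs)) / sum(w) - 1.0 / a - mean_log

    for _ in range(200):          # g is increasing in a
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    alpha = 0.5 * (lo + hi)
    beta = (sum(v ** alpha for v in y) / len(y)) ** (1.0 / alpha)
    return alpha, beta

random.seed(1)
# stdlib weibullvariate(alpha, beta): first argument is the SCALE, second the SHAPE
data = [random.weibullvariate(2.0, 1.5) for _ in range(2000)]
a_hat, b_hat = weibull_mle_shape(data)
print(a_hat, b_hat)  # close to shape 1.5 and scale 2.0
```

The bisection exploits the fact that the profile equation is strictly increasing in \alpha, so no derivative is needed.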
Theorem 2.1 below is useful for constructing a simple, consistent, closed-form
estimator for \alpha. This estimator is independent of \beta.
Theorem 2.1. Suppose x_1, x_2, \ldots, x_n is a random sample from (1.1). Let \rho
denote the sample correlation between the x_i and their ranks. Let CV and
S, respectively, denote the sample coefficient of variation and sample standard
deviation. Then,

\rho = \left(\frac{\bar{X}-\mu}{S}\right)\left(\frac{1}{2} - \frac{1}{2^{1+1/\alpha}}\right)\sqrt{\frac{12(n-1)}{n+1}}.
Proof. From Stuart (1954), the correlation between the X_i and their ranks, say R_i,
is:

\rho = \mathrm{corr}(X_i, R_i) = \frac{\int x F_X(x)\,dF_X(x) - \mu_X/2}{\sigma_X}\,\sqrt{\frac{12(n-1)}{n+1}}, (2.4)

where \mu_X = E(X) and \sigma_X^2 = \mathrm{var}(X). Using (1.1) and (1.2), we have:
\int x F_X(x)\,dF_X(x)
= \int_{\mu}^{\infty} x\left[1-\exp\left\{-\left(\frac{x-\mu}{\beta}\right)^{\alpha}\right\}\right]\frac{\alpha}{\beta}\left(\frac{x-\mu}{\beta}\right)^{\alpha-1}\exp\left\{-\left(\frac{x-\mu}{\beta}\right)^{\alpha}\right\}dx
= \int_{0}^{\infty}(z+\mu)\left[1-\exp\left\{-\left(\frac{z}{\beta}\right)^{\alpha}\right\}\right]\frac{\alpha}{\beta}\left(\frac{z}{\beta}\right)^{\alpha-1}\exp\left\{-\left(\frac{z}{\beta}\right)^{\alpha}\right\}dz
= \int_{0}^{\infty}(z+\mu)\frac{\alpha}{\beta}\left(\frac{z}{\beta}\right)^{\alpha-1}\exp\left\{-\left(\frac{z}{\beta}\right)^{\alpha}\right\}dz - \int_{0}^{\infty}(z+\mu)\frac{\alpha}{\beta}\left(\frac{z}{\beta}\right)^{\alpha-1}\exp\left\{-2\left(\frac{z}{\beta}\right)^{\alpha}\right\}dz
= I_1 + I_2, (2.5)
where
I_1 = \int_{0}^{\infty}(z+\mu)\frac{\alpha}{\beta}\left(\frac{z}{\beta}\right)^{\alpha-1}\exp\left\{-\left(\frac{z}{\beta}\right)^{\alpha}\right\}dz = \mu + \beta\,\Gamma(1+1/\alpha), (2.6)
and
I_2 = -\int_{0}^{\infty}(z+\mu)\frac{\alpha}{\beta}\left(\frac{z}{\beta}\right)^{\alpha-1}\exp\left\{-2\left(\frac{z}{\beta}\right)^{\alpha}\right\}dz = -\frac{\beta\,\Gamma(1+1/\alpha)}{2^{1+1/\alpha}} - \frac{\mu}{2}. (2.7)
So,

\int x F_X(x)\,dF_X(x) - \frac{\mu_X}{2} = I_1 + I_2 - \frac{\mu_X}{2} = \beta\,\Gamma(1+1/\alpha)\left(\frac{1}{2} - \frac{1}{2^{1+1/\alpha}}\right) = (\mu_X - \mu)\left(\frac{1}{2} - \frac{1}{2^{1+1/\alpha}}\right). (2.8)
Now divide both sides of (2.8) by \sigma_X and multiply them by \sqrt{12(n-1)/(n+1)}
to see the result. □
Corollary 2.1. Suppose x_1, x_2, \ldots, x_n is a random sample from (1.1) with
known location parameter. Let \rho denote the sample correlation between the
x_i and their ranks. Let CV and S, respectively, denote the sample coefficient
of variation and sample standard deviation. Then an estimator for the shape
parameter \alpha is:

\hat{\alpha} = -\frac{\ln 2}{\ln\left[1 - \frac{\rho}{\sqrt{3}}\left(\frac{1}{CV} - \frac{\mu}{S}\right)^{-1}\sqrt{\frac{n+1}{n-1}}\right]}. (2.9)
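A minimal sketch of (2.9) in plain Python (the helper names are ours, and the data settings are illustrative; note that the stdlib sampler `random.weibullvariate(alpha, beta)` takes the scale first and the shape second):

```python
import math, random, statistics

def pearson(u, v):
    """Sample Pearson correlation coefficient."""
    n = len(u)
    mu_u, mu_v = sum(u) / n, sum(v) / n
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v))
    var_u = sum((a - mu_u) ** 2 for a in u)
    var_v = sum((b - mu_v) ** 2 for b in v)
    return cov / math.sqrt(var_u * var_v)

def shape_estimate_known_mu(x, mu):
    """Closed-form shape estimator (2.9); mu is the known location parameter."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0.0] * n
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    rho = pearson(x, ranks)          # correlation between values and their ranks
    s = statistics.stdev(x)
    cv = s / statistics.mean(x)
    arg = 1.0 - (rho / math.sqrt(3.0)) / (1.0 / cv - mu / s) \
              * math.sqrt((n + 1) / (n - 1))
    return -math.log(2.0) / math.log(arg)

random.seed(7)
x = [10.0 + random.weibullvariate(2.0, 2.0) for _ in range(2000)]
print(shape_estimate_known_mu(x, 10.0))  # should be near the true shape 2
```

Ties among the observations are ignored here, which is harmless for continuous data.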
From Johnson et al. (1994, p. 656), it is well known that in certain cases
X_{(1)} = \min\{X_1, X_2, \ldots, X_n\} is the MLE for \mu. Generally, this statistic is a consistent
estimator of the location parameter (see Kundu and Raqab, 2009). A better estimate
is X_{(1)} - 1/n (see Sirvanci and Yang, 1984, p. 74). We take the latter statistic as
an estimator of the unknown parameter \mu, i.e., we let \hat{\mu} = X_{(1)} - 1/n. It can be
used to construct a new, \mu-independent estimator of the shape parameter,
as follows.
Corollary 2.2. Suppose x_1, x_2, \ldots, x_n is a random sample from (1.1) with
unknown location parameter. Let \rho denote the sample correlation between the x_i
and their ranks. Let CV, S and X_{(1)}, respectively, denote the sample coefficient
of variation, sample standard deviation and minimum order statistic. Then a new
estimator for the shape parameter \alpha is:

\hat{\alpha} = -\frac{\ln 2}{\ln\left[1 - \frac{\rho}{\sqrt{3}}\left(\frac{1}{CV} - \frac{X_{(1)} - 1/n}{S}\right)^{-1}\sqrt{\frac{n+1}{n-1}}\right]}. (2.10)
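The estimator (2.10) differs from (2.9) only in replacing \mu by the plug-in value X_{(1)} - 1/n. A self-contained sketch (function name and data settings are ours):

```python
import math, random

def shape_estimate(x):
    """Shape estimator (2.10): the unknown location is replaced by X_(1) - 1/n."""
    n = len(x)
    mean = sum(x) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))
    # sample correlation between the observations and their ranks
    order = sorted(range(n), key=lambda i: x[i])
    ranks = [0.0] * n
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    mean_rank = (n + 1) / 2.0
    cov = sum((v - mean) * (rk - mean_rank) for v, rk in zip(x, ranks))
    rho = cov / (s * math.sqrt(n - 1)
                 * math.sqrt(sum((rk - mean_rank) ** 2 for rk in ranks)))
    mu_hat = min(x) - 1.0 / n        # plug-in location estimate
    # note 1/CV - mu_hat/S = (mean - mu_hat)/S
    arg = 1.0 - (rho / math.sqrt(3.0)) / ((mean - mu_hat) / s) \
              * math.sqrt((n + 1) / (n - 1))
    return -math.log(2.0) / math.log(arg)

random.seed(11)
# stdlib weibullvariate(alpha, beta): scale first, shape second
x = [10.0 + random.weibullvariate(2.0, 2.0) for _ in range(2000)]
print(shape_estimate(x))  # should be near the true shape 2
```

Unlike (2.9), no knowledge of \mu is needed here; the price is the extra sampling noise of X_{(1)}.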
3. Performance Analysis
Here we analyze the performance of the new estimator given in (2.10). For
the sake of simplicity, let us consider two cases: (1) \mu is known and (2) \mu is
unknown.

Case 1: In this case, after subtracting \mu from all the data points, the
problem reduces to estimating the shape parameter of a two-parameter Weibull
distribution. The new estimator (2.9) then depends only on the CV statistic and
becomes:

\hat{\alpha} = -\frac{\ln 2}{\ln\left[1 - \frac{\rho}{\sqrt{3}}\,CV\sqrt{\frac{n+1}{n-1}}\right]}. (3.1)
Therefore, the performance of the new estimator can be discussed by analyzing
the sample coefficient of variation. The use and applications of the sample
coefficient of variation have a long history (He and Oyadihi, 2001; Tian, 2005). For
example, the difference between two populations can be tested by comparing the
two sample coefficients of variation based on independent samples gathered from
the populations. Due to this importance, tests for the coefficient of variation
have been developed (Gupta and Ma, 1996). An approximate (1-\gamma)100\%
confidence interval for \tau^2 = \sigma^2/|\mu|^2 is

\left(0,\ \left[\frac{(1+b^2)\,\chi^2_{n-1,1-\gamma}}{n\,b^2} - 1\right]^{-1}\right), (3.2)

where b = S/\bar{X} and \chi^2_{n-1,1-\gamma} denotes the upper (1-\gamma) percentile of a chi-square
distribution with n-1 degrees of freedom. Under the normality assumption, we
have \mathrm{var}(b) = \tau^2(1+2\tau^2)/(2n). A similar result holds for two-sided inference.
So, because of asymptotic unbiasedness, the confidence interval in (3.2) becomes
degenerate for large sample sizes.
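A hedged illustration of computing the endpoint of (3.2): the Python standard library has no chi-square quantile function, so the sketch uses the Wilson-Hilferty approximation, and it reads the "upper (1-\gamma) percentile" as the point with probability 1-\gamma to its right (i.e., the \gamma-quantile) — both choices are our assumptions:

```python
import math
from statistics import NormalDist

def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    return k * (1 - 2 / (9 * k) + z * math.sqrt(2 / (9 * k))) ** 3

def cv2_upper_bound(b, n, gamma=0.05):
    """Upper end of the one-sided interval (3.2) for tau^2 = sigma^2 / mu^2.

    chi^2_{n-1,1-gamma} is read as the point with tail probability 1 - gamma
    to its right, i.e. the gamma-quantile of chi-square with n-1 df."""
    q = chi2_quantile(gamma, n - 1)
    return 1.0 / ((1 + b * b) * q / (n * b * b) - 1.0)

print(cv2_upper_bound(b=0.3, n=100))  # about 0.12
```

With this reading the upper bound exceeds the point estimate b^2, as an upper confidence limit should.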
Without the normality assumption, results like (3.2) are not possible. How-
ever, Cramer (1964) pointed out that the variance of b can be given as:

\mathrm{var}(b) = \frac{\mu_1^2(\mu_4 - \mu_2^2) - 4\mu_1\mu_2\mu_3 + 4\mu_2^3}{4n\,\mu_1^4\,\mu_2}, (3.3)

where \mu_1 = E(X) and \mu_r (r \ge 2) denotes the rth central moment of X. If the
observations are from (1.1), then b = S/\bar{X}. Johnson et al. (1994, pp. 633-634)
have shown that the variance of b is very small for \alpha \ge 0.1 and for all admissible
values of \beta, so the new estimator given by (3.1) can be expected to perform well.
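Formula (3.3) can be evaluated for Weibull data by deriving the central moments from the raw moments in (1.4). A sketch for the two-parameter case (\mu = 0; function names and parameter choices are ours):

```python
import math

def weibull_central_moments(alpha, beta=1.0):
    """Mean and central moments mu_2..mu_4 of a two-parameter Weibull."""
    m = [beta ** r * math.gamma(1 + r / alpha) for r in range(5)]  # raw moments
    m1 = m[1]
    mu2 = m[2] - m1 ** 2
    mu3 = m[3] - 3 * m1 * m[2] + 2 * m1 ** 3
    mu4 = m[4] - 4 * m1 * m[3] + 6 * m1 ** 2 * m[2] - 3 * m1 ** 4
    return m1, mu2, mu3, mu4

def var_cv(alpha, n, beta=1.0):
    """Asymptotic variance of b = S / X-bar from (3.3)."""
    m1, mu2, mu3, mu4 = weibull_central_moments(alpha, beta)
    num = m1 ** 2 * (mu4 - mu2 ** 2) - 4 * m1 * mu2 * mu3 + 4 * mu2 ** 3
    return num / (4 * n * m1 ** 4 * mu2)

print(var_cv(2.0, 100))  # about 0.0014
```

Since b is scale-free, the value of var_cv does not depend on \beta, which is a handy sanity check on (3.3).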
Case 2: Unfortunately, it is not straightforward to prove unbiasedness and
consistency of the new estimator (2.10) in this case. Let us instead investigate the
properties of the estimators X_{(1)} - 1/n and S. These estimators are consistent for \mu
and \sigma_X, respectively. Consequently, the ratios (X_{(1)} - 1/n)/S and S/\bar{X} converge
in probability to \mu/\sigma_X and the population coefficient of variation, respectively.
In other words, the \hat{\alpha} given in (2.10) converges in probability to \alpha. This
guarantees that the new estimator works well when the sample size is large.
3.1. Simulation Study
Here, we compare the performance of the maximum likelihood estimate and
the new estimate for the shape parameter of (1.1). The comparison is based on
the mean relative error (MRE) criterion, defined by

MRE = \frac{1}{k}\sum_{i=1}^{k}\frac{|\hat{\alpha}_i - \alpha|}{\alpha}, (3.4)

where \hat{\alpha}_i denotes the value of either the new estimator or the MLE in the ith
iteration. Note that hereafter we take \mu to be unknown and refer to (2.10) as the
new estimator.
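For concreteness, the MRE criterion — read here as the mean absolute relative error, i.e., assuming the summand in (3.4) is |\hat{\alpha}_i - \alpha|/\alpha — can be computed as:

```python
def mre(estimates, alpha):
    """Mean relative error (3.4) of a list of shape estimates against the true alpha."""
    return sum(abs(a_hat - alpha) for a_hat in estimates) / (len(estimates) * alpha)

# toy example: three estimates of a true shape alpha = 2
print(mre([2.1, 1.9, 2.2], 2.0))  # (0.1 + 0.1 + 0.2) / (3 * 2) ≈ 0.0667
```

In the simulations below, `estimates` would hold the k = 100 per-iteration values of either estimator.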
To establish a comprehensive simulation-based study measuring the efficiency
of the new estimator in comparison with the MLE, the MRE is computed for sample
sizes of 100, 500, 1000 and 2000. Larger values of the MRE
correspond to a less efficient estimator. Figure 1 displays the MREs for samples
of size n = 100, 500, 1000 and 2000 for the levels \beta = 0.5, 5 and \mu = 0. The
MREs for samples of size n = 100, 500, 1000 and 2000 for the levels \beta =
0.5, 5 and \mu = 10 are shown in Figure 2. It should be noted that, here, we set
the number of iterations k at 100.
The following observations can be made from Figures 1 and 2:
1. In all cases the difference between the MRE of the new estimator and that of
the MLE is not significant.
2. Because the new estimator is scale invariant, the difference between the
MREs of the new estimator and the MLE is, overall, not affected by the scale parameter.
3. In each row of the figures, when \beta increases from 0.5 to 5, the MREs show no
significant changes overall.
4. In each column of the figures, the MREs decrease as n increases from 100 to
2000.
5. Comparing Figures 1 and 2, when \mu increases from 0 to 10, the degree of
dependence of the MREs of the two estimators on \mu turns out to be negligible.
Further discussion shows that both the new estimator and the MLE behave the same
when bias is considered as the criterion. The modes of the bias distributions of the
two estimators occur at the origin. Although the MLE has a higher peak at the origin
than the new estimator, simulations show that the sample ranges of the bias of the
new estimator and the MLE are approximately equal. Also, normality of the new
estimator is verified even for small sample sizes (here n = 100). The bias frequency
histograms are depicted in Figure 3 for some selected levels of \alpha and \beta when the
location parameter \mu is set at 10. Each histogram is constructed from 500 points,
each point obtained via the MLE or the new estimator on the basis of a
sample of size 100 generated from (1.1).
3.2. Examples
In this subsection, we provide two data sets to show how well the new
estimator works. To this end, we refer the reader to data sets assumed to
follow the Weibull law (see Murthy et al., 2004, pp. 83, 100). The data sets are
given in Tables 1 and 2. The results of fitting
[Figure 1 here: eight panels plotting MRE against \alpha (0 to 15) for the new estimator and the MLE, for \beta = 0.5, 5 and n = 100, 500, 1000, 2000.]
Figure 1: MRE of the new estimator and the MLE for some levels of \alpha and \beta when \mu = 0
[Figure 2 here: eight panels plotting MRE against \alpha (0 to 15) for the new estimator and the MLE, for \beta = 0.5, 5 and n = 100, 500, 1000, 2000.]
Figure 2: MRE of the new estimator and the MLE for some levels of \alpha and \beta when \mu = 10
[Figure 3 here: bias histograms for the MLE and the new estimator, for the settings (\alpha = 1, \beta = 1) and (\alpha = 5, \beta = 5).]
Figure 3: Bias frequency of the new estimator and the MLE for some levels of \alpha and \beta
the three-parameter Weibull distribution to the data sets of Tables 1 and 2 are
given, respectively, in Tables 3 and 4. It should be noted that, after estimating \mu,