## Example

Assume that the random variable X follows the Poisson distribution with mean $\lambda$, and suppose $x_1, x_2, \ldots, x_n$ are independent Poisson random variables, each with mean $\lambda$. Suppose we want to find the ML estimator of $\lambda$. The likelihood function here is:

$$f(x_1, x_2, \ldots, x_n; \lambda) = \frac{e^{-\lambda}\lambda^{x_1}}{x_1!} \cdot \frac{e^{-\lambda}\lambda^{x_2}}{x_2!} \cdots \frac{e^{-\lambda}\lambda^{x_n}}{x_n!} = \frac{e^{-n\lambda}\lambda^{\sum x_i}}{x_1!\,x_2!\cdots x_n!}$$

This is a rather unwieldy expression, but if we take its log, it becomes

$$\log f(x_1, x_2, \ldots, x_n; \lambda) = -n\lambda + \sum x_i \log \lambda - \log c$$

where $\log c = \log(x_1!\,x_2!\cdots x_n!)$, which does not involve $\lambda$. Differentiating the preceding expression with respect to $\lambda$, we obtain $-n + (\sum x_i)/\lambda$. By setting this last expression to zero, we obtain $\hat{\lambda}_{ML} = (\sum x_i)/n = \bar{X}$, the sample mean, which is the ML estimator of the unknown $\lambda$.
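The derivation can be checked numerically. The sketch below (the true $\lambda$ of 3.5, the sample size, and the grid bounds are illustrative assumptions, not from the text) compares the closed-form estimator $\bar{X}$ with a brute-force grid maximization of the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
true_lam = 3.5                       # assumed true parameter, for illustration
x = rng.poisson(true_lam, size=1000)

def log_lik(lam, x):
    # -n*lam + (sum of x)*log(lam), dropping the constant -log c
    return -len(x) * lam + x.sum() * np.log(lam)

lam_ml = x.mean()  # the closed-form ML estimator derived above

# the log-likelihood should peak at (approximately) x-bar
grid = np.linspace(0.5, 8.0, 2000)
lam_grid = grid[np.argmax(log_lik(grid, x))]
print(lam_ml, lam_grid)  # grid maximizer agrees with the sample mean
```

Both numbers land close to the assumed true value of 3.5, and the grid maximizer matches $\bar{X}$ up to the grid spacing.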

The Method of Moments. We have given a glimpse of MOM in Exercise 3.4, in the so-called analogy principle, in which the sample moments try to duplicate the properties of their population counterparts. The generalized method of moments (GMM), which is a generalization of MOM, is now becoming more popular, but not at the introductory level. Hence we will not pursue it here.
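The analogy principle can be illustrated with a short simulation (the normal distribution and its parameter values are illustrative assumptions): replace the population moments $E(X)$ and $E(X^2)$ by their sample counterparts and solve for the parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=5000)  # assumed mu=2.0, sigma=1.5

# analogy principle: sample moments stand in for population moments
m1 = x.mean()        # estimates E(X) = mu
m2 = (x**2).mean()   # estimates E(X^2) = sigma^2 + mu^2

mu_mom = m1
sigma2_mom = m2 - m1**2
print(mu_mom, sigma2_mom)  # close to the assumed 2.0 and 1.5**2 = 2.25
```

With two unknown parameters, two moment conditions suffice; GMM generalizes this idea to more moment conditions than parameters.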

The desirable statistical properties fall into two categories: small-sample, or finite-sample, properties and large-sample, or asymptotic, properties. Underlying both these sets of properties is the notion that an estimator has a sampling, or probability, distribution.

Small-Sample Properties

Unbiasedness. An estimator $\hat{\theta}$ is said to be an unbiased estimator of $\theta$ if the expected value of $\hat{\theta}$ is equal to the true $\theta$; that is,

$$E(\hat{\theta}) = \theta$$

If this equality does not hold, then the estimator is said to be biased, and the bias is calculated as

$$\text{bias}(\hat{\theta}) = E(\hat{\theta}) - \theta$$

Of course, if $E(\hat{\theta}) = \theta$, that is, if $\hat{\theta}$ is an unbiased estimator, the bias is zero.

Geometrically, the situation is as depicted in Figure A.8. In passing, note that unbiasedness is a property of repeated sampling, not of any given sample: keeping the sample size fixed, we draw several samples, each time obtaining an estimate of the unknown parameter. The average value of these estimates is expected to be equal to the true value if the estimator is to be unbiased.

FIGURE A.8 Biased and unbiased estimators.
FIGURE A.9 Distribution of three estimators of $\theta$.
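The repeated-sampling idea can be made concrete with a small simulation (the distribution, true mean, and sample counts are illustrative assumptions): hold the sample size fixed, draw many samples, compute the sample mean each time, and average the estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mu, n, n_samples = 5.0, 25, 20000  # assumed values, for illustration

# each row is one sample of fixed size n; each row yields one estimate
estimates = rng.normal(true_mu, 2.0, size=(n_samples, n)).mean(axis=1)

print(estimates.mean())  # close to true_mu: the sample mean is unbiased
```

No single estimate equals 5.0, but their average across repeated samples does, which is exactly what $E(\hat{\theta}) = \theta$ asserts.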

Minimum Variance. $\hat{\theta}_1$ is said to be a minimum-variance estimator of $\theta$ if the variance of $\hat{\theta}_1$ is smaller than or at most equal to the variance of $\hat{\theta}_2$, which is any other estimator of $\theta$. Geometrically, we have Figure A.9, which shows three estimators of $\theta$, namely $\hat{\theta}_1$, $\hat{\theta}_2$, and $\hat{\theta}_3$, and their probability distributions. As shown, the variance of $\hat{\theta}_3$ is smaller than that of either $\hat{\theta}_1$ or $\hat{\theta}_2$. Hence, assuming only the three possible estimators, in this case $\hat{\theta}_3$ is a minimum-variance estimator. But note that $\hat{\theta}_3$ is a biased estimator (why?).
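A classic comparison of two unbiased estimators (a textbook standby, not an example from this text) is the sample mean versus the sample median of a normal sample: both center on the true mean, but the mean has the smaller variance. A simulation sketch, with illustrative sample sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_samples = 50, 20000
samples = rng.normal(0.0, 1.0, size=(n_samples, n))  # assumed mu=0, sigma=1

mean_est = samples.mean(axis=1)          # estimator 1: sample mean
median_est = np.median(samples, axis=1)  # estimator 2: sample median

# both are unbiased for mu = 0, but the mean has the smaller variance
print(mean_est.var(), median_est.var())  # ~1/50 versus roughly (pi/2)/50
```

For a normal population the median's sampling variance is larger by a factor of about $\pi/2$, so among these two the mean is the minimum-variance choice.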

Best Unbiased, or Efficient, Estimator. If $\hat{\theta}_1$ and $\hat{\theta}_2$ are two unbiased estimators of $\theta$, and the variance of $\hat{\theta}_1$ is smaller than or at most equal to the variance of $\hat{\theta}_2$, then $\hat{\theta}_1$ is a minimum-variance unbiased, or best unbiased, or efficient, estimator. Thus, in Figure A.9, of the two unbiased estimators $\hat{\theta}_1$ and $\hat{\theta}_2$, $\hat{\theta}_1$ is best unbiased, or efficient.

Linearity. An estimator $\hat{\theta}$ is said to be a linear estimator of $\theta$ if it is a linear function of the sample observations. Thus, the sample mean, defined as $\bar{X} = \frac{1}{n}\sum X_i$, is a linear estimator because it is a linear function of the $X$ values.

Best Linear Unbiased Estimator (BLUE). If $\hat{\theta}$ is linear, is unbiased, and has minimum variance in the class of all linear unbiased estimators of $\theta$, then it is called a best linear unbiased estimator, or BLUE for short.
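The BLUE idea can be sketched by comparing two linear unbiased estimators of a population mean (the weights and parameter values below are illustrative assumptions): any set of weights summing to one gives an unbiased linear estimator, but the equal weights of the sample mean minimize the variance.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_samples = 10, 20000
samples = rng.normal(3.0, 1.0, size=(n_samples, n))  # assumed mu=3.0

w_equal = np.full(n, 1.0 / n)      # weights of the sample mean
w_other = np.linspace(1, n, n)
w_other = w_other / w_other.sum()  # another linear unbiased estimator

est_mean = samples @ w_equal       # linear: a weighted sum of observations
est_other = samples @ w_other

# both unbiased (weights sum to one), but equal weights give less variance
print(est_mean.mean(), est_other.mean())  # both near the assumed 3.0
print(est_mean.var(), est_other.var())    # sample mean's variance is smaller
```

The variance of a weighted sum is $\sigma^2 \sum w_i^2$, which is minimized subject to $\sum w_i = 1$ at $w_i = 1/n$; this is why the sample mean is BLUE for the population mean.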

Minimum Mean-Square-Error (MSE) Estimator. The MSE of an estimator $\hat{\theta}$ is defined as

$$\text{MSE}(\hat{\theta}) = E(\hat{\theta} - \theta)^2$$

This is in contrast with the variance of $\hat{\theta}$, which is defined as

$$\text{var}(\hat{\theta}) = E[\hat{\theta} - E(\hat{\theta})]^2$$

The difference between the two is that $\text{var}(\hat{\theta})$ measures the dispersion of the distribution of $\hat{\theta}$ around its mean or expected value, whereas $\text{MSE}(\hat{\theta})$ measures dispersion around the true value of the parameter. The relationship between the two is as follows:

$$
\begin{aligned}
\text{MSE}(\hat{\theta}) &= E(\hat{\theta} - \theta)^2 \\
&= E[\hat{\theta} - E(\hat{\theta}) + E(\hat{\theta}) - \theta]^2 \\
&= E[\hat{\theta} - E(\hat{\theta})]^2 + E[E(\hat{\theta}) - \theta]^2 + 2E\{[\hat{\theta} - E(\hat{\theta})][E(\hat{\theta}) - \theta]\} \\
&= E[\hat{\theta} - E(\hat{\theta})]^2 + E[E(\hat{\theta}) - \theta]^2 \quad \text{since the last term is zero}^6 \\
&= \text{var}(\hat{\theta}) + \text{bias}(\hat{\theta})^2 \\
&= \text{variance of } \hat{\theta} \text{ plus squared bias}
\end{aligned}
$$

The minimum MSE criterion consists in choosing an estimator whose MSE is the least in a competing set of estimators. But notice that even if such an estimator is found, there is a tradeoff involved: to obtain minimum variance you may have to accept some bias. Geometrically, the situation is as shown in Figure A.10. In this figure, $\hat{\theta}_2$ is slightly biased, but its variance is smaller than that of the unbiased estimator $\hat{\theta}_1$. In practice, however, the minimum MSE criterion is used when the best unbiased criterion is incapable of producing estimators with smaller variances.

$^6$The last term can be written as $2\{[E(\hat{\theta})]^2 - [E(\hat{\theta})]^2 - \theta E(\hat{\theta}) + \theta E(\hat{\theta})\} = 0$. Also note that $E[E(\hat{\theta}) - \theta]^2 = [E(\hat{\theta}) - \theta]^2$, since the expected value of a constant is simply the constant itself.
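The decomposition $\text{MSE} = \text{variance} + \text{bias}^2$ can be verified by simulation. A sketch using the ML variance estimator, which divides by $n$ and is therefore biased downward (the parameter values and sample counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
true_var, n, n_samples = 4.0, 10, 200000  # assumed sigma^2 = 4, small n
samples = rng.normal(0.0, 2.0, size=(n_samples, n))

# ddof=0 divides by n: the biased ML estimator, E = (n-1)/n * sigma^2
sigma2_hat = samples.var(axis=1, ddof=0)

mse = ((sigma2_hat - true_var) ** 2).mean()  # dispersion around true value
var = sigma2_hat.var()                       # dispersion around its own mean
bias = sigma2_hat.mean() - true_var          # negative: underestimates

print(mse, var + bias**2)  # the two sides of the identity agree
```

The two printed numbers coincide, and the estimator's bias of roughly $-\sigma^2/n$ shows up as the gap between its average and the true 4.0.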

Gujarati, Basic Econometrics, Fourth Edition. Back Matter, Appendix A: A Review of Some Statistical Concepts. © The McGraw-Hill Companies, 2004.



FIGURE A.10 Tradeoff between bias and variance.