## Methods of Estimation

Broadly speaking, there are three methods of parameter estimation: (1) least squares (LS), (2) maximum likelihood (ML), and (3) the method of moments (MOM) and its extension, the generalized method of moments (GMM). We have devoted considerable time to illustrating the LS method. In Chapter 4 we introduced the ML method in the regression context, but the method has much broader application.

The key idea behind ML is the likelihood function. To illustrate this, suppose the random variable X has the PDF f(X; θ), which depends on a single parameter θ. We know the form of the PDF (e.g., Bernoulli or binomial) but do not know the value of the parameter. Suppose we obtain a random sample of n X values. The joint PDF of these n values is g(x1, x2, ..., xn; θ).

Because it is a random sample, we can write the preceding joint PDF as the product of the individual PDFs:

g(x1, x2, ..., xn; θ) = f(x1; θ) f(x2; θ) ··· f(xn; θ)

The joint PDF has a dual interpretation. If θ is known, we interpret it as the joint probability of observing the given sample values. Alternatively, we can treat it as a function of θ for given values of x1, x2, ..., xn. On the latter interpretation, we call the joint PDF the likelihood function (LF) and write it as

L(θ; x1, x2, ..., xn) = f(x1; θ) f(x2; θ) ··· f(xn; θ)

Observe the role reversal of θ between the joint probability density function and the likelihood function: in the former the data are random and θ is fixed; in the latter the data are fixed and θ varies.
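This role reversal can be made concrete with a small sketch. Assuming a hypothetical Bernoulli sample, the code below holds the data fixed and evaluates L as a function of θ (the function name and sample values are illustrative, not from the text):

```python
import math

def bernoulli_likelihood(theta, xs):
    """L(theta; x1, ..., xn) = product of f(xi; theta) for Bernoulli data,
    where f(x; theta) = theta**x * (1 - theta)**(1 - x)."""
    return math.prod(theta**x * (1 - theta)**(1 - x) for x in xs)

sample = [1, 0, 1, 1, 0]  # hypothetical sample, held fixed

# The same formula, now read as a function of theta:
L_at_03 = bernoulli_likelihood(0.3, sample)  # 0.3**3 * 0.7**2
L_at_06 = bernoulli_likelihood(0.6, sample)  # 0.6**3 * 0.4**2
```

Comparing L at different θ values for the same data is exactly the likelihood reading of the joint PDF.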

The ML estimator of θ is the value of θ that maximizes the (sample) likelihood function, L. For mathematical convenience, we often take the log of the likelihood, called the log-likelihood function (log L). Following the calculus rules of maximization, we differentiate the log-likelihood function with respect to the unknown parameter and set the resulting derivative equal to zero. The solution is the maximum-likelihood estimator. One can check the second-order condition of maximization to ensure that the value obtained is in fact a maximum.
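For the Bernoulli case this recipe has a closed-form answer: setting d log L/dθ = Σ xi/θ − (n − Σ xi)/(1 − θ) = 0 gives θ̂ = x̄, the sample mean. The sketch below (with a hypothetical sample) verifies numerically that no θ on a fine grid yields a higher log-likelihood than the closed-form estimator:

```python
import math

def log_likelihood(theta, xs):
    # log L(theta) = sum_i [ x_i * log(theta) + (1 - x_i) * log(1 - theta) ]
    return sum(x * math.log(theta) + (1 - x) * math.log(1 - theta) for x in xs)

xs = [1, 0, 1, 1, 0, 1]        # hypothetical Bernoulli sample
theta_hat = sum(xs) / len(xs)  # closed-form MLE: the sample mean

# Grid check: the best grid point should sit next to theta_hat.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda t: log_likelihood(t, xs))
```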

In case there is more than one unknown parameter, we differentiate the log-likelihood function with respect to each unknown, set the resulting expressions to zero, and solve them simultaneously to obtain the values of the unknown parameters. We have already shown this for the multiple regression model (see the appendix to Chapter 4).
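As a standard illustration of the multi-parameter case (not taken from the text), the normal log-likelihood in μ and σ² yields two first-order conditions whose simultaneous solution is the sample mean and the mean squared deviation:

```python
def normal_mle(xs):
    """Solve the two first-order conditions of the normal log-likelihood
    simultaneously: mu_hat is the sample mean, sigma2_hat is the mean
    squared deviation about it (divisor n, not n - 1)."""
    n = len(xs)
    mu_hat = sum(xs) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, sigma2_hat

data = [2.0, 4.0, 6.0, 8.0]  # hypothetical sample
mu, s2 = normal_mle(data)
```

Note that the ML variance estimator divides by n, so it is biased in small samples, unlike the usual divisor n − 1.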

Gujarati, *Basic Econometrics*, Fourth Edition, Back Matter, Appendix A: A Review of Some Statistical Concepts. © The McGraw-Hill Companies, 2004.