Notes:
Column (1): Actual Y values from Table 7.1
Column (2): Estimated Y values from the linear model (7.8.8)
Column (3): Estimated log Y values from the double-log model (7.8.9)
Column (4): Antilogs of the values in column (3)
Column (5): Logs of the Y values in column (1)
Column (6): Logs of the estimated Y values in column (2)



Allocating R2 among Regressors

Let us return to our child mortality example. We saw in (7.6.2) that the two regressors PGNP and FLR explain 0.7077, or 70.77 percent, of the variation in child mortality. But now consider the regression (7.7.2), where we dropped the FLR variable and as a result the r2 value dropped to 0.1662. Does that mean the difference in the r2 value of 0.5415 (0.7077 − 0.1662) is attributable to the dropped variable FLR? On the other hand, if you consider regression (7.7.3), where we dropped the PGNP variable, the r2 value drops to 0.6696. Does that mean the difference in the r2 value of 0.0381 (0.7077 − 0.6696) is due to the omitted variable PGNP?

The question then is: Can we allocate the multiple R2 of 0.7077 between the two regressors, PGNP and FLR, in this manner? Unfortunately, we cannot do so, for the allocation depends on the order in which the regressors are introduced, as we just illustrated. Part of the problem here is that the two regressors are correlated, the correlation coefficient between the two being 0.2685 (verify it from the data given in Table 6.4). In most applied work with several regressors, correlation among them is a common problem. Of course, the problem will be very serious if there is perfect collinearity among the regressors.
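A small simulation makes the order dependence concrete. The sketch below uses purely hypothetical data (the variables cm, pgnp, and flr only mimic the child mortality example; the coefficients and the correlation between the regressors are invented for illustration). It shows that the increment in R2 from adding FLR after PGNP differs from the r2 of FLR alone:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64

# Two correlated regressors (hypothetical stand-ins for PGNP and FLR).
pgnp = rng.normal(size=n)
flr = 0.27 * pgnp + rng.normal(size=n)       # modestly correlated with pgnp
cm = 5.0 - 0.6 * pgnp - 2.0 * flr + rng.normal(size=n)

def r_squared(y, *regressors):
    """R2 from an OLS fit of y on a constant plus the given regressors."""
    X = np.column_stack([np.ones_like(y), *regressors])
    fitted = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

r_full = r_squared(cm, pgnp, flr)            # both regressors
r_pgnp = r_squared(cm, pgnp)                 # PGNP alone
r_flr = r_squared(cm, flr)                   # FLR alone

# The "share" credited to FLR depends on the order of introduction:
print(f"increment from adding FLR after PGNP: {r_full - r_pgnp:.4f}")
print(f"r2 of FLR alone:                      {r_flr:.4f}")
```

Because the regressors are correlated, the two numbers do not agree, so there is no unique share of the multiple R2 to assign to FLR.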

The best practical advice is that there is little point in trying to allocate the R2 value to its constituent regressors.

The "Game" of Maximizing R2

In concluding this section, a warning is in order: Sometimes researchers play the game of maximizing R2, that is, choosing the model that gives the highest R2. But this may be dangerous, for in regression analysis our objective is not to obtain a high R2 per se but rather to obtain dependable estimates of the true population regression coefficients and draw statistical inferences about them. In empirical analysis it is not unusual to obtain a very high R2 but find that some of the regression coefficients either are statistically insignificant or have signs that are contrary to a priori expectations. Therefore, the researcher should be more concerned about the logical or theoretical relevance of the explanatory variables to the dependent variable and their statistical significance. If in this process we obtain a high R2, well and good; on the other hand, if R2 is low, it does not mean the model is necessarily bad.14

14Some authors would like to deemphasize the use of R2 as a measure of goodness of fit as well as its use for comparing two or more R2 values. See Christopher H. Achen, Interpreting and Using Regression, Sage Publications, Beverly Hills, Calif., 1982, pp. 58-67, and C. Granger and P. Newbold, "R2 and the Transformation of Regression Variables," Journal of Econometrics, vol. 4, 1976, pp. 205-210. Incidentally, the practice of choosing a model on the basis of the highest R2, a kind of data mining, introduces what is known as pretest bias, which might destroy some of the properties of OLS estimators of the classical linear regression model. On this topic, the reader may want to consult George G. Judge, R. Carter Hill, William E. Griffiths, Helmut Lütkepohl, and Tsoung-Chao Lee, Introduction to the Theory and Practice of Econometrics, John Wiley, New York, 1982, Chap. 21.
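The mechanical side of this game is easy to demonstrate: because OLS minimizes the residual sum of squares, adding a regressor can never lower R2, even when the added regressor is pure noise. A minimal sketch, using entirely simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
y = rng.normal(size=n)                  # the "dependent variable" is pure noise
X_full = rng.normal(size=(n, 20))       # 20 regressors of pure noise

def r_squared(y, X):
    """R2 from an OLS fit of y on a constant plus the columns of X."""
    Z = np.column_stack([np.ones(len(y)), X])
    fitted = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

# Using the first k noise columns: R2 climbs steadily even though
# no regressor has any genuine relation to y.
for k in (1, 5, 10, 20):
    print(f"{k:2d} noise regressors: R2 = {r_squared(y, X_full[:, :k]):.3f}")
```

A model chosen purely for its high R2 can therefore be nothing more than an elaborate fit to noise.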

The "Game" of Maximizing R2

As a matter of fact, Goldberger is very critical of the role of R2. He has said:

From our perspective, R2 has a very modest role in regression analysis, being a measure of the goodness of fit of a sample LS [least-squares] linear regression in a body of data. Nothing in the CR [CLRM] model requires that R2 be high. Hence a high R2 is not evidence in favor of the model and a low R2 is not evidence against it.

In fact the most important thing about R2 is that it is not important in the CR model. The CR model is concerned with parameters in a population, not with goodness of fit in the sample. . . . If one insists on a measure of predictive success (or rather failure), then σ2 might suffice: after all, the parameter σ2 is the expected squared forecast error that would result if the population CEF [PRF] were used as the predictor. Alternatively, the squared standard error of forecast . . . at relevant values of x [regressors] may be informative.15

7.9 EXAMPLE 7.3: THE COBB-DOUGLAS PRODUCTION FUNCTION: MORE ON FUNCTIONAL FORM

In Section 6.4 we showed how with appropriate transformations we can convert nonlinear relationships into linear ones so that we can work within the framework of the classical linear regression model. The various transformations discussed there in the context of the two-variable case can be easily extended to multiple regression models. We demonstrate transformations in this section by taking up the multivariable extension of the two-variable log-linear model; others can be found in the exercises and in the illustrative examples discussed throughout the rest of this book. The specific example we discuss is the celebrated Cobb-Douglas production function of production theory.

The Cobb-Douglas production function, in its stochastic form, may be expressed as

Yi = β1 X2i^β2 X3i^β3 e^ui    (7.9.1)

where Y = output
X2 = labor input
X3 = capital input
u = stochastic disturbance term
e = base of natural logarithm

From Eq. (7.9.1) it is clear that the relationship between output and the two inputs is nonlinear. However, if we log-transform this model, we obtain:


ln Yi = ln β1 + β2 ln X2i + β3 ln X3i + ui
      = β0 + β2 ln X2i + β3 ln X3i + ui    (7.9.2)

where β0 = ln β1.

Thus written, the model is linear in the parameters β0, β2, and β3 and is therefore a linear regression model. Notice, though, it is nonlinear in the variables Y and X but linear in the logs of these variables. In short, (7.9.2) is a log-log, double-log, or log-linear model, the multiple regression counterpart of the two-variable log-linear model (6.5.3).
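Estimating (7.9.2) is then just OLS applied to the logged data. The following is a minimal sketch with simulated series; the "true" values β1 = 2, β2 = 0.7, and β3 = 0.4 are assumptions of the simulation chosen for illustration, not estimates from the Taiwanese data introduced below:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15

# Simulated inputs and Cobb-Douglas output: Y = 2 * X2^0.7 * X3^0.4 * e^u
labor = rng.uniform(50, 100, size=n)        # X2
capital = rng.uniform(500, 1000, size=n)    # X3
output = 2.0 * labor**0.7 * capital**0.4 * np.exp(rng.normal(scale=0.05, size=n))

# OLS on the log-linear form: ln Y = b0 + b2 ln X2 + b3 ln X3 + u
X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
b0, b2, b3 = np.linalg.lstsq(X, np.log(output), rcond=None)[0]

print(f"labor elasticity (b2):      {b2:.3f}")       # close to 0.7
print(f"capital elasticity (b3):    {b3:.3f}")       # close to 0.4
print(f"returns to scale (b2 + b3): {b2 + b3:.3f}")
print(f"implied beta1 = exp(b0):    {np.exp(b0):.3f}")
```

Note that exp(b0) recovers β1, since β0 = ln β1 in (7.9.2).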

The properties of the Cobb-Douglas production function are quite well known:

1. β2 is the (partial) elasticity of output with respect to the labor input, that is, it measures the percentage change in output for, say, a 1 percent change in the labor input, holding the capital input constant (see exercise 7.9).

2. Likewise, β3 is the (partial) elasticity of output with respect to the capital input, holding the labor input constant.

3. The sum (β2 + β3) gives information about the returns to scale, that is, the response of output to a proportionate change in the inputs. If this sum is 1, then there are constant returns to scale, that is, doubling the inputs will double the output, tripling the inputs will triple the output, and so on. If the sum is less than 1, there are decreasing returns to scale—doubling the inputs will less than double the output. Finally, if the sum is greater than 1, there are increasing returns to scale—doubling the inputs will more than double the output.
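For example, with hypothetical elasticities β2 = 0.7 and β3 = 0.4 the sum is 1.1, and scaling both inputs by a factor λ scales output by λ^(β2+β3): doubling the inputs multiplies output by 2^1.1 ≈ 2.14, somewhat more than double. Were the sum instead 0.8, doubling the inputs would raise output by a factor of only 2^0.8 ≈ 1.74.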

Before proceeding further, note that whenever you have a log-linear regression model involving any number of variables the coefficient of each of the X variables measures the (partial) elasticity of the dependent variable Y with respect to that variable. Thus, if you have a k-variable log-linear model:

ln Yi = β0 + β2 ln X2i + β3 ln X3i + ··· + βk ln Xki + ui    (7.9.3)

each of the (partial) regression coefficients, β2 through βk, is the (partial) elasticity of Y with respect to variables X2 through Xk.16

To illustrate the Cobb-Douglas production function, we obtained the data shown in Table 7.3; these data are for the agricultural sector of Taiwan for 1958-1972.

Assuming that the model (7.9.2) satisfies the assumptions of the classical linear regression model,17 we obtained the following regression by the OLS method:

16To see this, differentiate (7.9.3) partially with respect to the log of each X variable. Thus, ∂ ln Y/∂ ln X2 = (∂Y/∂X2)(X2/Y) = β2, which, by definition, is the elasticity of Y with respect to X2, and ∂ ln Y/∂ ln X3 = (∂Y/∂X3)(X3/Y) = β3, which is the elasticity of Y with respect to X3, and so on.

17Notice that in the Cobb-Douglas production function (7.9.1) we have introduced the stochastic error term in a special way so that in the resulting logarithmic transformation it enters in the usual linear form. On this, see Sec. 6.9.



TABLE 7.3 REAL GROSS PRODUCT, LABOR DAYS, AND REAL CAPITAL INPUT IN THE AGRICULTURAL SECTOR OF TAIWAN, 1958-1972

Year | Real gross product (millions of NT $)*, Y | Labor days (millions of days), X2 | Real capital input (millions of NT $), X3

