Note: * indicates that the elasticity is variable, depending on the value taken by X or Y or both. When no X and Y values are specified, in practice these elasticities are very often measured at the mean values of these variables, namely, X̄ and Ȳ.
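To see why such an elasticity is variable, consider the linear model Y = b1 + b2 X, whose elasticity is b2(X/Y) and so changes with the point of evaluation. The sketch below uses made-up values (b1 = 1, b2 = 0.5, and a small set of X values) purely for illustration.

```python
# Illustration of a variable elasticity evaluated at the sample means.
# For the linear model Y = b1 + b2*X the elasticity b2*(X/Y) changes
# with the point of evaluation; b1, b2, and X are assumed values.
import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0])
b1, b2 = 1.0, 0.5
Y = b1 + b2 * X

elasticity_at_means = b2 * X.mean() / Y.mean()  # the conventional single summary
elasticity_pointwise = b2 * X / Y               # varies across observations
```

Reporting the elasticity at the means gives one representative number, but the pointwise values show it rises with X here.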

3. The coefficients of the model chosen should satisfy certain a priori expectations. For example, if we are considering the demand for automobiles as a function of price and some other variables, we should expect a negative coefficient for the price variable.

4. Sometimes more than one model may fit a given set of data reasonably well. In the modified Phillips curve example, we fitted both a linear and a reciprocal model to the same data. In both cases the coefficients were in line with prior expectations and they were all statistically significant. One major difference was that the r2 value of the linear model was larger than that of the reciprocal model. One may therefore give a slight edge to the linear model over the reciprocal model. But make sure that in comparing two r2 values the dependent variable, or the regressand, of the two models is the same; the regressor(s) can take any form. We will explain the reason for this in the next chapter.
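The comparison described above can be sketched numerically. The data below are synthetic (generated from an assumed reciprocal relationship), not the Phillips curve data from the text; the point is only that both models share the same regressand Y, so their r2 values are directly comparable.

```python
# Hypothetical illustration: comparing r^2 across two models with the
# SAME regressand Y but different regressors (X versus 1/X).
# The data are simulated from an assumed reciprocal relationship.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=50)
Y = 2.0 + 3.0 / X + rng.normal(0.0, 0.1, size=50)  # true model is reciprocal

def ols_r2(y, x):
    """Fit y = b0 + b1*x by OLS and return r^2."""
    X_mat = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X_mat, y, rcond=None)
    resid = y - X_mat @ b
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_linear = ols_r2(Y, X)          # linear model: Y on X
r2_reciprocal = ols_r2(Y, 1 / X)  # reciprocal model: Y on 1/X
# Both r^2 values describe the same Y, so comparing them is legitimate.
```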

5. In general one should not overemphasize the r2 measure in the sense that the higher the r2 the better the model. As we will discuss in the next chapter, r2 increases as we add more regressors to the model. What is of greater importance is the theoretical underpinning of the chosen model, the signs of the estimated coefficients and their statistical significance. If a model is good on these criteria, a model with a lower r2 may be quite acceptable. We will revisit this important topic in greater depth in Chapter 13.
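A small numerical check of the claim that r2 never falls when a regressor is added, even an irrelevant one. The data and the noise regressor Z below are simulated assumptions for illustration.

```python
# Adding a pure-noise regressor Z to an OLS regression cannot lower r^2,
# which is why a high r^2 by itself says little about model quality.
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = rng.normal(size=n)
Z = rng.normal(size=n)                      # irrelevant regressor
Y = 1.0 + 2.0 * X + rng.normal(size=n)

def r2(y, *regressors):
    """OLS with intercept; returns the coefficient of determination."""
    X_mat = np.column_stack([np.ones(len(y)), *regressors])
    b, *_ = np.linalg.lstsq(X_mat, y, rcond=None)
    resid = y - X_mat @ b
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_small = r2(Y, X)
r2_big = r2(Y, X, Z)   # mechanically at least as large as r2_small
```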


Consider the following regression model, which is the same as (6.5.1) but without the error term:

Yi = β1 Xi^β2    (6.9.1)

For estimation purposes, we can express this model in three different forms:

Yi = β1 Xi^β2 ui    (6.9.2)
Yi = β1 Xi^β2 e^(ui)    (6.9.3)
Yi = β1 Xi^β2 + ui    (6.9.4)

Taking the logarithms on both sides of the first two equations, we obtain

ln Yi = α + β2 ln Xi + ln ui    (6.9.2a)
ln Yi = α + β2 ln Xi + ui    (6.9.3a)

where α = ln β1.
Models like (6.9.2) and (6.9.3) are intrinsically linear (in-parameter) regression models in the sense that by suitable (log) transformation the models can be made linear in the parameters α and β2. (Note: These models are nonlinear in β1.) But model (6.9.4) is intrinsically nonlinear-in-parameter. There is no simple way to take the log of (6.9.4) because ln (A + B) ≠ ln A + ln B.
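As a sketch of the log transformation at work, model (6.9.3) can be estimated by running OLS on the log form (6.9.3a) and then recovering β1 from the intercept. The data below are simulated under assumed values β1 = 2, β2 = 0.75, and a small error standard deviation.

```python
# Sketch: estimating the intrinsically linear model (6.9.3),
# Y_i = beta1 * X_i^beta2 * e^(u_i), by OLS on logs.
# beta1 = 2 and beta2 = 0.75 are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.uniform(1.0, 5.0, size=n)
u = rng.normal(0.0, 0.05, size=n)      # additive error in the log form
Y = 2.0 * X**0.75 * np.exp(u)

# OLS on ln Y = alpha + beta2 * ln X + u, where alpha = ln beta1
A = np.column_stack([np.ones(n), np.log(X)])
(alpha_hat, beta2_hat), *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)
beta1_hat = np.exp(alpha_hat)          # recover beta1 from the intercept
```

Note that exponentiating the intercept this way recovers β1 only because the model became exactly linear in the logs; no such shortcut exists for (6.9.4).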

Although (6.9.2) and (6.9.3) are linear regression models and can be estimated by OLS or ML, we have to be careful about the properties of the stochastic error term that enters these models. Remember that the BLUE property of OLS requires that ui has zero mean value, constant variance, and zero autocorrelation. For hypothesis testing, we further assume that ui follows the normal distribution with mean and variance values just discussed. In short, we have assumed that ui ~ N(0, σ²).

Now consider model (6.9.2). Its statistical counterpart is given in (6.9.2a). To use the classical normal linear regression model (CNLRM), we have to assume that

ln ui ~ N(0, σ²)    (6.9.5)

Therefore, when we run the regression (6.9.2a), we will have to apply the normality tests discussed in Chapter 5 to the residuals obtained from this regression. Incidentally, note that if ln ui follows the normal distribution with zero mean and constant variance, then statistical theory shows that ui in (6.9.2) must follow the log-normal distribution with mean e^(σ²/2) and variance e^(σ²)(e^(σ²) − 1).
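The log-normal mean and variance quoted above can be checked by simulation. The value of σ below is an arbitrary assumption; any positive value would do.

```python
# Monte Carlo check of the log-normal result: if ln u ~ N(0, sigma^2),
# then u has mean e^(sigma^2/2) and variance e^(sigma^2)*(e^(sigma^2) - 1).
# sigma = 0.5 is chosen arbitrarily for the check.
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5
u = np.exp(rng.normal(0.0, sigma, size=1_000_000))  # u is log-normal

mean_theory = np.exp(sigma**2 / 2)                      # theoretical mean
var_theory = np.exp(sigma**2) * (np.exp(sigma**2) - 1)  # theoretical variance
# u.mean() and u.var() should agree closely with the theoretical values.
```

In particular the mean of u is not 1 even though ln u has mean zero, which is exactly why the error term deserves attention after a log transformation.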

As the preceding analysis shows, one has to pay very careful attention to the error term in transforming a model for regression analysis. As for (6.9.4), this model is a nonlinear-in-parameter regression model and will have to be solved by some iterative computer routine. Model (6.9.3) should not pose any problems for estimation.
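To make the iterative idea concrete, here is a deliberately crude sketch for (6.9.4), Yi = β1 Xi^β2 + ui: for each trial value of β2 the optimal β1 is a one-step linear calculation, so a grid search over β2 finds the least-squares pair. The data, grid, and true values (β1 = 2, β2 = 0.75) are assumptions; real software uses proper iterative routines such as Gauss-Newton.

```python
# Crude profile grid search for the intrinsically nonlinear model (6.9.4),
# Y_i = beta1 * X_i^beta2 + u_i. Data simulated with beta1=2, beta2=0.75.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(1.0, 5.0, size=100)
Y = 2.0 * X**0.75 + rng.normal(0.0, 0.1, size=100)

best = None
for b2 in np.linspace(0.1, 2.0, 2001):   # grid over the nonlinear parameter
    xb = X**b2
    b1 = (xb @ Y) / (xb @ xb)            # optimal b1 given b2 (linear step)
    sse = np.sum((Y - b1 * xb) ** 2)     # sum of squared residuals
    if best is None or sse < best[0]:
        best = (sse, b1, b2)

sse_min, b1_hat, b2_hat = best           # least-squares estimates on the grid
```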

To sum up, pay very careful attention to the disturbance term when you transform a model for regression analysis. Otherwise, a blind application of OLS to the transformed model will not produce a model with desirable statistical properties.
