OLS Estimation in the Presence of Autocorrelation

What happens to the OLS estimators and their variances if we introduce autocorrelation in the disturbances by assuming that E(u_t u_{t+s}) ≠ 0 (s ≠ 0) but retain all the other assumptions of the classical model?8 Note again that

8 If s = 0, we obtain E(u_t²). Since E(u_t) = 0 by assumption, E(u_t²) will represent the variance of the error term, which obviously is nonzero (why?).


we are now using the subscript t on the disturbances to emphasize that we are dealing with time series data.

We revert once again to the two-variable regression model to explain the basic ideas involved, namely, Y_t = β_1 + β_2 X_t + u_t. To make any headway, we must assume the mechanism that generates u_t, for E(u_t u_{t+s}) ≠ 0 (s ≠ 0) is too general an assumption to be of any practical use. As a starting point, or first approximation, one can assume that the disturbance, or error, terms are generated by the following mechanism:

u_t = ρ u_{t−1} + ε_t        −1 < ρ < 1        (12.2.1)

where ρ (= rho) is known as the coefficient of autocovariance and where ε_t is the stochastic disturbance term such that it satisfies the standard OLS assumptions, namely,

E(ε_t) = 0
var(ε_t) = σ_ε²        (12.2.2)
cov(ε_t, ε_{t+s}) = 0        s ≠ 0

In the engineering literature, an error term with the preceding properties is often called a white noise error term. What (12.2.1) postulates is that the value of the disturbance term in period t is equal to rho times its value in the previous period plus a purely random error term.

The scheme (12.2.1) is known as a Markov first-order autoregressive scheme, or simply a first-order autoregressive scheme, usually denoted as AR(1). The name autoregressive is appropriate because (12.2.1) can be interpreted as the regression of u_t on itself lagged one period. It is first order because u_t and its immediate past value are involved; that is, the maximum lag is 1. If the model were u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ε_t, it would be an AR(2), or second-order, autoregressive scheme, and so on. We will examine such higher-order schemes in the chapters on time series econometrics in Part V.
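As an informal illustration (not part of the original text), the following Python sketch simulates an AR(1) disturbance series and checks that the sample autocorrelation at lag 1 is close to ρ; the values ρ = 0.7, σ_ε = 1, and the sample size are arbitrary assumptions made for the example.

```python
import numpy as np

# Sketch (not from the text): simulate u_t = rho * u_{t-1} + eps_t with
# white-noise eps_t, then check the lag-1 sample autocorrelation.
rng = np.random.default_rng(42)   # seed chosen arbitrarily
rho = 0.7                         # assumed; must satisfy |rho| < 1 for stationarity
sigma_eps = 1.0                   # assumed white-noise standard deviation
n = 10_000                        # sample size, arbitrary but large

eps = rng.normal(0.0, sigma_eps, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]          # the AR(1) scheme (12.2.1)

lag1 = np.corrcoef(u[1:], u[:-1])[0, 1]     # sample autocorrelation at lag 1
print(f"sample lag-1 autocorrelation: {lag1:.3f}  (true rho = {rho})")
```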

In passing, note that rho, the coefficient of autocovariance in (12.2.1), can also be interpreted as the first-order coefficient of autocorrelation, or more accurately, the coefficient of autocorrelation at lag 1.9

9 This name can be easily justified. By definition, the (population) coefficient of correlation between u_t and u_{t−1} is

ρ = E{[u_t − E(u_t)][u_{t−1} − E(u_{t−1})]} / √[var(u_t) var(u_{t−1})] = E(u_t u_{t−1}) / var(u_{t−1})

since E(u_t) = 0 for each t and var(u_t) = var(u_{t−1}) because we are retaining the assumption of homoscedasticity. The reader can see that ρ is also the slope coefficient in the regression of u_t on u_{t−1}.


Given the AR(1) scheme, it can be shown that (see Appendix 12A, Section 12A.2)

var(u_t) = E(u_t²) = σ_ε² / (1 − ρ²)        (12.2.3)

cov(u_t, u_{t+s}) = ρ^s σ_ε² / (1 − ρ²)        (12.2.4)

cor(u_t, u_{t+s}) = ρ^s        (12.2.5)

where cov(u_t, u_{t+s}) means covariance between error terms s periods apart and where cor(u_t, u_{t+s}) means correlation between error terms s periods apart. Note that because of the symmetry property of covariances and correlations, cov(u_t, u_{t+s}) = cov(u_t, u_{t−s}) and cor(u_t, u_{t+s}) = cor(u_t, u_{t−s}).

Since ρ is a constant between −1 and +1, (12.2.3) shows that under the AR(1) scheme the variance of u_t is still homoscedastic, but u_t is correlated not only with its immediate past value but also with its values several periods in the past. It is critical to note that |ρ| < 1, that is, the absolute value of rho is less than one. If, for example, ρ were equal to one, the variances and covariances listed above would not be defined. If |ρ| < 1, we say that the AR(1) process given in (12.2.1) is stationary; that is, the mean, variance, and covariance of u_t do not change over time. If |ρ| is less than one, then it is clear from (12.2.4) that the value of the covariance will decline as we go into the distant past. We will see the utility of the preceding results shortly.
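As a rough numerical check (not from the text, with ρ = 0.7 and σ_ε = 1 assumed), the sketch below simulates a long AR(1) series and compares the sample variance and sample autocorrelations with the theoretical values σ_ε²/(1 − ρ²) and ρ^s from (12.2.3) and (12.2.5).

```python
import numpy as np

# Numerical check of (12.2.3) and (12.2.5); rho and sigma_eps are assumed values.
rng = np.random.default_rng(0)
rho, sigma_eps, n = 0.7, 1.0, 200_000

eps = rng.normal(0.0, sigma_eps, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

# var(u_t) should be close to sigma_eps^2 / (1 - rho^2)   -- (12.2.3)
print("variance:", round(u.var(), 3), "vs theory", round(sigma_eps**2 / (1 - rho**2), 3))

# cor(u_t, u_{t+s}) should be close to rho^s               -- (12.2.5)
for s in (1, 2, 5):
    sample_cor = np.corrcoef(u[s:], u[:-s])[0, 1]
    print(f"lag {s}: sample {sample_cor:.3f} vs theory {rho**s:.3f}")
```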

We use the AR(1) process not only because of its simplicity compared with higher-order AR schemes, but also because in many applications it has proved to be quite useful. Additionally, a considerable amount of theoretical and empirical work has been done on the AR(1) scheme.

Now return to our two-variable regression model: Y_t = β_1 + β_2 X_t + u_t. We know from Chapter 3 that the OLS estimator of the slope coefficient is

β̂_2 = Σ x_t y_t / Σ x_t²        (12.2.6)

and its variance is given by

var(β̂_2) = σ² / Σ x_t²        (12.2.7)

where the small letters as usual denote deviations from the mean values.
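To make the deviation-form notation concrete, here is a small Python sketch (with made-up data) that evaluates β̂_2 = Σx_t y_t / Σx_t² and the classical variance σ̂²/Σx_t²; estimating σ² from the residuals with n − 2 degrees of freedom is an assumption of this example rather than something stated at this point in the text.

```python
import numpy as np

# Toy data, made up purely for illustration.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([3.1, 4.0, 5.2, 5.9, 7.1, 8.0])

# Small letters: deviations from the mean values.
x = X - X.mean()
y = Y - Y.mean()

beta2_hat = np.sum(x * y) / np.sum(x**2)          # slope estimator (12.2.6)
beta1_hat = Y.mean() - beta2_hat * X.mean()

resid = Y - (beta1_hat + beta2_hat * X)
sigma2_hat = np.sum(resid**2) / (len(Y) - 2)      # assumed: usual residual-based estimate of sigma^2

var_beta2 = sigma2_hat / np.sum(x**2)             # classical variance (12.2.7)
print(f"beta2_hat = {beta2_hat:.3f},  var(beta2_hat) = {var_beta2:.5f}")
```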

Now under the AR(1) scheme, it can be shown that the variance of this estimator is:

var(β̂_2)_AR1 = (σ² / Σ x_t²) [ 1 + 2ρ (Σ x_t x_{t−1} / Σ x_t²) + 2ρ² (Σ x_t x_{t−2} / Σ x_t²) + ··· + 2ρ^{n−1} (x_1 x_n / Σ x_t²) ]        (12.2.8)
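The sketch below, under assumed values σ² = 1, ρ = 0.7, and an arbitrary trending X series, evaluates both the classical formula (12.2.7) and the AR(1)-adjusted formula (12.2.8); with positive ρ and positively autocorrelated x's, the classical formula typically understates the true variance.

```python
import numpy as np

# Evaluate (12.2.7) vs (12.2.8) for an assumed sigma^2, rho, and X series.
rng = np.random.default_rng(1)
sigma2, rho, n = 1.0, 0.7, 50                       # all assumed for illustration
X = np.linspace(0.0, 10.0, n) + rng.normal(0.0, 0.5, n)   # arbitrary trending regressor

x = X - X.mean()                                    # deviations from the mean
Sxx = np.sum(x**2)

var_classical = sigma2 / Sxx                        # classical formula (12.2.7)

# Bracketed factor in (12.2.8): 1 + 2 * sum_{s=1}^{n-1} rho^s * (sum_t x_t x_{t+s}) / Sxx
factor = 1.0
for s in range(1, n):
    factor += 2.0 * rho**s * np.sum(x[:-s] * x[s:]) / Sxx

var_ar1 = var_classical * factor                    # AR(1)-adjusted variance (12.2.8)
print(f"classical var(beta2_hat): {var_classical:.6f}")
print(f"AR(1)-adjusted variance:  {var_ar1:.6f}")
```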
