(see Spanos (1985a)). The non-systematic component is defined by

$$u_t = y_t - E(y_t/\mathcal{F}_{t-1}), \qquad (23.8)$$

where $\mathcal{F}_{t-1} = \sigma(Z_{t-1}^0, X_t = x_t)$, and satisfies the following properties:

(ET1) $E(u_t) = E\{E(u_t/\mathcal{F}_{t-1})\} = 0$.

(ET2) $E(u_t u_s) = E\{E(u_t u_s/\mathcal{F}_{t-1})\} = \begin{cases} \sigma^2 & \text{for } t = s, \\ 0 & \text{for } t > s. \end{cases}$

These two properties show that $u_t$ is a martingale difference process relative to $\mathcal{F}_t$ with bounded variance, i.e. an innovation process (see Section 8). Moreover, the non-systematic component is also orthogonal to the systematic component $\mu_t = E(y_t/\mathcal{F}_{t-1})$, i.e.

(ET3) $E(u_t \mu_t) = E\{\mu_t E(u_t/\mathcal{F}_{t-1})\} = 0$.

The properties ET1-ET3 can be verified directly using the properties of the conditional expectation discussed in Section 7.2. In view of the equality

$$E(u_t/U_{t-1}^0) = E\{E(u_t/\mathcal{F}_{t-1})/U_{t-1}^0\} = 0, \quad \text{for } U_{t-1}^0 = (u_{t-1}, u_{t-2}, \ldots, u_1),$$

$u_t$ is not predictable from its own past.

This property extends the notion of a white-noise process encountered so far (see Granger (1980)).
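The distinction can be illustrated numerically. The sketch below is illustrative and not from the text: the ARCH-type recursion and all constants are assumptions, chosen to give a process that is a martingale difference (hence uncorrelated, like white noise) without being independent over time.

```python
import numpy as np

# Illustrative: an ARCH-type martingale difference process
#   u_t = e_t * sqrt(c0 + c1 * u_{t-1}^2),   e_t ~ N(0, 1) i.i.d.
# E(u_t | past) = 0, so {u_t} is an innovation process, yet u_t^2 IS
# predictable from its own past -- uncorrelated but not independent.
rng = np.random.default_rng(0)
T, c0, c1 = 100_000, 0.5, 0.4
e = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = e[t] * np.sqrt(c0 + c1 * u[t - 1] ** 2)

# Lag-1 sample autocorrelation of u: near zero (white-noise-like).
rho_u = np.corrcoef(u[1:], u[:-1])[0, 1]
# Lag-1 sample autocorrelation of u^2: clearly positive (dependence).
rho_u2 = np.corrcoef(u[1:] ** 2, u[:-1] ** 2)[0, 1]
```

The martingale difference property constrains only the conditional mean, which is why higher moments such as $u_t^2$ remain free to depend on the past.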

As argued in Chapter 17, the parameters of interest are the parameters in terms of which the statistical GM is defined, unless stated otherwise. These parameters should be defined more precisely as the statistical parameters of interest, with the theoretical parameters of interest being functions of the former. In the present context the statistical parameters of interest are $\theta^* = H(\psi_1)$, where $\theta^* = (\beta_0, \beta_1, \ldots, \beta_m, \alpha_1, \ldots, \alpha_m, \sigma^2)$.

The normality of $D(Z_t/Z_{t-1}^0; \psi)$ implies that $\psi_1$ and $\psi_2$ are variation free and thus $X_t$ is weakly exogenous with respect to $\theta^*$. This suggests that $\theta^*$ can be estimated efficiently without any reference to the marginal distribution $D(X_t/Z_{t-1}^0; \psi_2)$. The presence of $Y_{t-1}^0$ in this marginal distribution, however, raises questions in the context of prediction because of the feedback from the lagged $y_t$s. In order to be able to treat the $x_t$s as given when predicting $y_t$ we need to ensure that no such feedback exists. For this purpose we need to assume that

$$D(X_t/Z_{t-1}^0; \psi_2) = D(X_t/X_{t-1}^0; \psi_2), \quad t = m+1, \ldots, T, \qquad (23.14)$$

i.e. $y_t$ does not Granger cause $X_t$ (see Engle et al. (1983) for a more detailed discussion). Weak exogeneity of $X_t$ with respect to $\theta^*$, when supplemented with Granger non-causality as defined in (14), is called strong exogeneity. Note that in the present context Granger non-causality is equivalent to

$$a_{21}(i) = 0, \quad i = 1, 2, \ldots, m, \qquad (23.15)$$

in (1), which suggests that the assumption is testable.
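Since Granger non-causality is a testable restriction, a minimal sketch of how such a test might be carried out follows; the scalar setup, lag length and data-generating recursion are all illustrative assumptions, with the data simulated so that the null of non-causality holds.

```python
import numpy as np

# A sketch of a Granger non-causality test: regress x_t on its own lags and
# on lags of y_t, then F-test whether the y-lag coefficients are jointly zero.
# Data are simulated so that H0 (y does not Granger cause x) is true.
rng = np.random.default_rng(1)
T, m = 500, 2
y = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()  # x depends only on its own past

idx = np.arange(m, T)
x_lags = np.column_stack([x[idx - i] for i in range(1, m + 1)])
y_lags = np.column_stack([y[idx - i] for i in range(1, m + 1)])
X_r = np.column_stack([np.ones(T - m), x_lags])          # restricted: no y lags
X_u = np.column_stack([np.ones(T - m), x_lags, y_lags])  # unrestricted
z = x[idx]

def rss(X, z):
    """Residual sum of squares from least squares of z on X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return r @ r

q = m                                   # number of restrictions (y-lag terms)
dof = len(z) - X_u.shape[1]
F = ((rss(X_r, z) - rss(X_u, z)) / q) / (rss(X_u, z) / dof)
# Under H0, F is approximately F(q, dof); small values support non-causality.
```

The unrestricted residual sum of squares can never exceed the restricted one, so $F \geq 0$ by construction; large values of $F$ reject non-causality.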

In the case of the linear regression model it was argued that, although the joint distribution $D(Z_t; \psi)$ was used to motivate the statistical model, its specification can be based exclusively on the conditional distribution $D(y_t/X_t; \psi_1)$. The same applies to the specification of the dynamic linear regression model, which can be based exclusively on $D(y_t/Z_{t-1}^0, X_t; \psi_1)$. In such a case, however, certain restrictions need to be imposed on the parameters of the statistical generating mechanism (GM):

$$y_t = \beta_0' x_t + \sum_{i=1}^{m} (\alpha_i y_{t-i} + \beta_i' x_{t-i}) + u_t, \quad t > m. \qquad (23.16)$$

In particular we need to assume that the parameters $(\alpha_1, \alpha_2, \ldots, \alpha_m)$ satisfy the restriction that all the roots of the polynomial

$$\lambda^m - \sum_{i=1}^{m} \alpha_i \lambda^{m-i} = 0 \qquad (23.17)$$

lie inside the unit circle, i.e. $|\lambda_i| < 1$, $i = 1, 2, \ldots, m$ (see Dhrymes (1978)). This restriction is necessary to ensure that $\{y_t, t \in \mathbb{T}\}$ as generated by (16) is indeed an asymptotically independent stationary stochastic process. In the case where $m = 1$ the restriction is $|\alpha| < 1$, which ensures that

$$\mathrm{Cov}(y_t y_{t+\tau}) = \sigma^2 \alpha^{\tau} \left(\frac{1 - \alpha^{2t}}{1 - \alpha^2}\right) \to 0 \quad \text{as } \tau \to \infty$$

(see Chapter 8). It is important to note that in the case where $\{Z_t, t \in \mathbb{T}\}$ is assumed to be both stationary and asymptotically independent the above restriction on the roots of the polynomial in (17) is satisfied automatically.
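The root restriction in (17) is easy to check numerically; the sketch below (coefficient values are illustrative assumptions) evaluates the roots of the polynomial directly.

```python
import numpy as np

# Check the stability restriction: all roots of
#   lambda^m - a_1*lambda^(m-1) - ... - a_m = 0
# must lie strictly inside the unit circle.
def stable(alphas):
    # numpy.roots expects coefficients ordered from highest power downwards.
    coeffs = np.concatenate(([1.0], -np.asarray(alphas, dtype=float)))
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

ok = stable([0.5, 0.3])   # roots of l^2 - 0.5*l - 0.3: both inside unit circle
bad = stable([1.2])       # m = 1 case: the restriction |a| < 1 fails
```

For $m = 1$ this reduces exactly to checking $|\alpha| < 1$, as in the text.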

For notational convenience let us rewrite the statistical GM (16) in the more concise form

$$y_t = \beta^{*\prime} X_t^* + u_t, \quad t > m, \qquad (23.18)$$

where

$$\beta^* = (\alpha_1, \alpha_2, \ldots, \alpha_m, \beta_0', \beta_1', \ldots, \beta_m')' : k(m+1) \times 1,$$

$$X_t^* = (y_{t-1}, y_{t-2}, \ldots, y_{t-m}, x_t', x_{t-1}', \ldots, x_{t-m}')' : k(m+1) \times 1.$$

For the sample period $t = m+1, \ldots, T$, (18) can be written in the following matrix form:

$$\mathbf{y} = \mathbf{X}^* \beta^* + \mathbf{u}, \qquad (23.19)$$

where $\mathbf{y} : (T-m) \times 1$, $\mathbf{X}^* : (T-m) \times k(m+1)$. Note that $x_t$ is $k \times 1$ because it includes the constant as well, but $x_{t-i}$, $i = 1, 2, \ldots, m$, are $(k-1) \times 1$ vectors; this convention is adopted to simplify the notation. Looking at (18) and (19) the discerning reader will have noticed a purposeful attempt to use notation which relates the dynamic linear regression model to the linear and stochastic linear regression models. Indeed, the statistical GM in (18) and (19) is a hybrid of the statistical GMs of these models. The part $\sum_{i=1}^{m} \alpha_i y_{t-i}$ is directly related to the stochastic linear regression model, in view of the conditioning on the $\sigma$-field $\sigma(Y_{t-1}^0)$, and the rest of the systematic component is a direct extension of that of the linear regression model. This relationship will prove very important in the statistical analysis of the parameters of the dynamic linear regression model discussed in what follows.

In direct analogy to the linear and stochastic linear regression models we need to assume that $\mathbf{X}^*$ as defined above is of full rank, i.e. $\mathrm{rank}(\mathbf{X}^*) = k(m+1)$ for all the observable values of $Y_{T-1}^0 = (y_m, y_{m+1}, \ldots, y_{T-1})'$.
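To make the construction of $\mathbf{X}^*$ concrete, here is a minimal sketch with simulated data for $m = 1$ and $k = 2$ (a constant plus one regressor $w_t$); all parameter values are illustrative assumptions. It stacks the regressors, verifies the full-rank condition and estimates $\beta^*$ by least squares.

```python
import numpy as np

# Simulate the GM  y_t = b0'x_t + a1*y_{t-1} + b1'x_{t-1} + u_t  with
# x_t = (1, w_t)', then estimate beta* = (a1, b0', b1')' by least squares.
rng = np.random.default_rng(2)
T, m = 200, 1
w = rng.standard_normal(T)             # the non-constant regressor (k - 1 = 1)
y = np.zeros(T)
for t in range(1, T):
    y[t] = (1.0 + 0.8 * w[t]           # b0' x_t       (b0 = (1.0, 0.8)')
            + 0.5 * y[t - 1]           # a1 * y_{t-1}  (|a1| < 1: stable)
            - 0.3 * w[t - 1]           # b1' x_{t-1}   (b1 = -0.3)
            + 0.2 * rng.standard_normal())

idx = np.arange(m, T)
# Rows of X*: (y_{t-1}, 1, w_t, w_{t-1}) for t = m+1, ..., T.
X_star = np.column_stack([y[idx - 1], np.ones(T - m), w[idx], w[idx - 1]])
full_rank = np.linalg.matrix_rank(X_star) == X_star.shape[1]

beta_hat, *_ = np.linalg.lstsq(X_star, y[idx], rcond=None)
# beta_hat should be close to (a1, b0_const, b0_slope, b1) = (0.5, 1.0, 0.8, -0.3)
```

With $m = 1$ and $k = 2$ the stacked matrix has $k(m+1) = 4$ columns, matching the dimension of $\beta^*$.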

The probability model underlying (16) comes in the form of the product of the sequentially conditional normal distributions $D(y_t/Z_{t-1}^0, X_t; \psi_1)$, $t > m$. For the sample period $t = m+1, \ldots, T$ the distribution of the sample is

$$D^*(\mathbf{y}; \psi_1) = \prod_{t=m+1}^{T} D(y_t/Z_{t-1}^0, X_t; \psi_1).$$

The sampling model is specified to be a non-random sample from $D^*(\mathbf{y}; \psi_1)$. Equivalently, $\mathbf{y} = (y_{m+1}, y_{m+2}, \ldots, y_T)'$ can be viewed as a non-random sample sequentially drawn from $D(y_t/Z_{t-1}^0, X_t; \psi_1)$, $t = m+1, \ldots, T$, respectively. As argued above, asymptotically, the effect of the initial conditions $(y_1, y_2, \ldots, y_m)$ can be ignored: a strategy adopted in Section 23.2 for expositional purposes. The interested reader should consult Priestley (1981) for a readable discussion of how the initial conditions can be treated. Further discussion of these conditions is given in Section 23.3 below.

The dynamic linear regression model - specification

(I) The statistical GM

$$y_t = \beta_0' x_t + \sum_{i=1}^{m} (\alpha_i y_{t-i} + \beta_i' x_{t-i}) + u_t, \quad t > m. \qquad (23.22)$$

[1] $\mu_t = E(y_t/\sigma(Y_{t-1}^0), X_t^0 = x_t^0) = \beta_0' x_t + \sum_{i=1}^{m} (\alpha_i y_{t-i} + \beta_i' x_{t-i})$. (23.23)

[2] The (statistical) parameters of interest are $\theta^* = (\beta_0, \beta_1, \ldots, \beta_m, \alpha_1, \ldots, \alpha_m, \sigma^2)$.
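Given values for these parameters, the systematic component (23.23) yields one-step-ahead predictions directly, with the $x_t$s treated as given under strong exogeneity. A minimal sketch follows; the parameter values, $m = 2$ and the scalar non-constant regressor are all hypothetical.

```python
# One-step-ahead prediction from the systematic component: with X_t strongly
# exogenous, x_t can be treated as given when predicting y_t.
# All parameter values below are hypothetical.
def predict_next(y_hist, x_hist, x_next, alphas, beta0, betas):
    """mu_t = beta0'x_t + sum_i (a_i*y_{t-i} + b_i*x_{t-i}), scalar-x case."""
    mu = beta0[0] + beta0[1] * x_next          # beta0' x_t = const + slope*x_t
    for i in range(1, len(alphas) + 1):
        mu += alphas[i - 1] * y_hist[-i] + betas[i - 1] * x_hist[-i]
    return mu

mu = predict_next(y_hist=[1.0, 1.5],           # y_{t-2}, y_{t-1}
                  x_hist=[0.4, 0.6],           # x_{t-2}, x_{t-1}
                  x_next=0.5,
                  alphas=[0.5, 0.2], beta0=[1.0, 0.8], betas=[-0.3, 0.1])
# mu = 1.0 + 0.8*0.5 + 0.5*1.5 + 0.2*1.0 + (-0.3)*0.6 + 0.1*0.4 = 2.21
```

Without Granger non-causality (23.14), $x_{t}$ itself would have to be predicted first, since the lagged $y_t$s would feed back into its marginal distribution.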
