## Appendix C: The Matrix Approach to the Linear Regression Model

$$
\hat{\boldsymbol{\beta}} =
\begin{bmatrix}
\hat{\beta}_1 \\ \hat{\beta}_2 \\ \vdots \\ \hat{\beta}_k
\end{bmatrix}
\qquad (k \times 1)
$$

where $\hat{\boldsymbol{\beta}}$ is a $k$-element column vector of the OLS estimators of the regression coefficients and where $\hat{\mathbf{u}}$ is an $n \times 1$ column vector of $n$ residuals.

As in the two- and three-variable models, in the $k$-variable case the OLS estimators are obtained by minimizing

$$
\sum \hat{u}_i^2 = \sum \left( Y_i - \hat{\beta}_1 - \hat{\beta}_2 X_{2i} - \cdots - \hat{\beta}_k X_{ki} \right)^2
\tag{C.3.4}
$$

where $\sum \hat{u}_i^2$ is the residual sum of squares (RSS). In matrix notation, this amounts to minimizing $\hat{\mathbf{u}}'\hat{\mathbf{u}}$, since

$$
\hat{\mathbf{u}}'\hat{\mathbf{u}} =
\begin{bmatrix} \hat{u}_1 & \hat{u}_2 & \cdots & \hat{u}_n \end{bmatrix}
\begin{bmatrix} \hat{u}_1 \\ \hat{u}_2 \\ \vdots \\ \hat{u}_n \end{bmatrix}
= \sum \hat{u}_i^2
$$
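The equivalence of the inner product $\hat{\mathbf{u}}'\hat{\mathbf{u}}$ and the scalar sum of squared residuals can be checked numerically. The residual values below are arbitrary illustrative numbers, not data from the text:

```python
import numpy as np

# Hypothetical residual vector u_hat (n = 4); any values would do.
u_hat = np.array([0.5, -1.2, 0.3, 0.4])

# u'u as an inner product versus the scalar sum of squared residuals.
rss_inner = u_hat @ u_hat        # u'u
rss_sum = np.sum(u_hat ** 2)     # sum of u_i^2

print(rss_inner, rss_sum)        # identical by definition
```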


Therefore,

$$
\hat{\mathbf{u}}'\hat{\mathbf{u}}
= (\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})'(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})
= \mathbf{y}'\mathbf{y} - 2\hat{\boldsymbol{\beta}}'\mathbf{X}'\mathbf{y} + \hat{\boldsymbol{\beta}}'\mathbf{X}'\mathbf{X}\hat{\boldsymbol{\beta}}
\tag{C.3.7}
$$

where use is made of the properties of the transpose of a matrix, namely, $(\mathbf{X}\hat{\boldsymbol{\beta}})' = \hat{\boldsymbol{\beta}}'\mathbf{X}'$; and since $\hat{\boldsymbol{\beta}}'\mathbf{X}'\mathbf{y}$ is a scalar (a real number), it is equal to its transpose $\mathbf{y}'\mathbf{X}\hat{\boldsymbol{\beta}}$.
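This expansion is an algebraic identity, so it holds for any candidate coefficient vector, not only the OLS estimator. A minimal sketch with arbitrary simulated data (the dimensions and values are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.normal(size=(n, k))   # arbitrary design matrix
y = rng.normal(size=n)        # arbitrary response vector
b = rng.normal(size=k)        # any candidate coefficient vector

# Left side: (y - Xb)'(y - Xb)
lhs = (y - X @ b) @ (y - X @ b)
# Right side: y'y - 2 b'X'y + b'X'Xb
rhs = y @ y - 2 * b @ (X.T @ y) + b @ (X.T @ X) @ b

print(np.isclose(lhs, rhs))   # True
```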

Equation (C.3.7) is the matrix representation of (C.3.4). In scalar notation, the method of OLS consists in so estimating $\hat{\beta}_1, \hat{\beta}_2, \ldots, \hat{\beta}_k$ that $\sum \hat{u}_i^2$ is as small as possible. This is done by differentiating (C.3.4) partially with respect to $\hat{\beta}_1, \hat{\beta}_2, \ldots, \hat{\beta}_k$ and setting the resulting expressions to zero. This process yields $k$ simultaneous equations in $k$ unknowns, the normal equations of least-squares theory. As shown in Appendix CA, Section CA.1, these equations are as follows:

$$
\begin{aligned}
n\hat{\beta}_1 + \hat{\beta}_2 \sum X_{2i} + \hat{\beta}_3 \sum X_{3i} + \cdots + \hat{\beta}_k \sum X_{ki} &= \sum Y_i \\
\hat{\beta}_1 \sum X_{2i} + \hat{\beta}_2 \sum X_{2i}^2 + \hat{\beta}_3 \sum X_{2i} X_{3i} + \cdots + \hat{\beta}_k \sum X_{2i} X_{ki} &= \sum X_{2i} Y_i \\
&\;\;\vdots \\
\hat{\beta}_1 \sum X_{ki} + \hat{\beta}_2 \sum X_{ki} X_{2i} + \hat{\beta}_3 \sum X_{ki} X_{3i} + \cdots + \hat{\beta}_k \sum X_{ki}^2 &= \sum X_{ki} Y_i
\end{aligned}
\tag{C.3.8}
$$
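The normal equations say that each column of the design matrix is orthogonal to the OLS residuals: for instance, the first equation is equivalent to $\sum \hat{u}_i = 0$. A numerical sketch on simulated data (the sample size and regressors are hypothetical), using NumPy's least-squares routine to obtain the OLS estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
# Design matrix: intercept column of ones plus two regressors X2, X3 (k = 3).
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_hat

# All k normal equations at once: X'(y - X beta_hat) = 0.
# Row 1 is n*b1 + b2*sum(X2i) + b3*sum(X3i) = sum(Yi), i.e. sum(u_hat) = 0.
print(np.allclose(X.T @ u_hat, 0))   # True
```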

In matrix form, Eq. (C.3.8) can be represented as

$$
\begin{bmatrix}
n & \sum X_{2i} & \sum X_{3i} & \cdots & \sum X_{ki} \\
\sum X_{2i} & \sum X_{2i}^2 & \sum X_{2i} X_{3i} & \cdots & \sum X_{2i} X_{ki} \\
\sum X_{3i} & \sum X_{3i} X_{2i} & \sum X_{3i}^2 & \cdots & \sum X_{3i} X_{ki} \\
\vdots & \vdots & \vdots & & \vdots \\
\sum X_{ki} & \sum X_{ki} X_{2i} & \sum X_{ki} X_{3i} & \cdots & \sum X_{ki}^2
\end{bmatrix}
\begin{bmatrix}
\hat{\beta}_1 \\ \hat{\beta}_2 \\ \hat{\beta}_3 \\ \vdots \\ \hat{\beta}_k
\end{bmatrix}
=
\begin{bmatrix}
\sum Y_i \\ \sum X_{2i} Y_i \\ \sum X_{3i} Y_i \\ \vdots \\ \sum X_{ki} Y_i
\end{bmatrix}
\tag{C.3.9}
$$

or, more compactly, as

$$
(\mathbf{X}'\mathbf{X})\hat{\boldsymbol{\beta}} = \mathbf{X}'\mathbf{y}
$$
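The compact system can be solved directly for $\hat{\boldsymbol{\beta}}$. A minimal sketch on hypothetical simulated data, solving $(\mathbf{X}'\mathbf{X})\hat{\boldsymbol{\beta}} = \mathbf{X}'\mathbf{y}$ as a linear system and comparing against NumPy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 4
# Design matrix with an intercept column and k - 1 arbitrary regressors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = rng.normal(size=n)

# Solve the normal equations (X'X) beta = X'y.
# np.linalg.solve is preferred over forming (X'X)^{-1} explicitly.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Reference answer from NumPy's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_normal, beta_lstsq))   # True
```

In practice, solving the normal equations directly can be numerically fragile when $\mathbf{X}'\mathbf{X}$ is near-singular, which is why library routines use decompositions instead; for well-conditioned data the two answers agree.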