Multiplication of Matrices

Whereas a scalar can be used to multiply a matrix of any dimension, the multiplication of two matrices is contingent upon the satisfaction of a different dimensional requirement.

Suppose that, given two matrices A and B, we want to find the product AB. The conformability condition for multiplication is that the column dimension of A (the "lead" matrix in the expression AB) must be equal to the row dimension of B (the "lag" matrix). For instance, if

    A = [a11  a12]        B = [b11  b12  b13]        (4.5)
      (1 x 2)                 [b21  b22  b23]
                                (2 x 3)

the product AB is then defined, since A has two columns and B has two rows, precisely the same number.† This can be checked at a glance by comparing the second number in the dimension indicator for A, which is (1 x 2), with the first number in the dimension indicator for B, (2 x 3). On the other hand, the reverse product BA is not defined in this case, because B (now the lead matrix) has three columns while A (the lag matrix) has only one row; hence the conformability condition is violated.

In general, if A is of dimension m x n and B is of dimension p x q, the matrix product AB will be defined if and only if n = p. If defined, moreover, the product matrix AB will have the dimension m x q: the same number of rows as the lead matrix A and the same number of columns as the lag matrix B. For the matrices given in (4.5), AB will be 1 x 3.

It remains to define the exact procedure of multiplication. For this purpose, let us take the matrices A and B in (4.5) for illustration. Since the product AB is defined and is expected to be of dimension 1 x 3, we may write in general (using the symbol C rather than c for the row vector) that

    C = AB = [c11  c12  c13]

Each element in the product matrix C, denoted by cij, is defined as a sum of products, to be computed from the elements in the ith row of the lead matrix A and those in the jth column of the lag matrix B. To find c11, for instance, we should take the first row in A (since i = 1) and the first column in B (since j = 1), as shown in the top panel of Fig. 4.1, and then pair the elements together sequentially, multiply out each pair, and take the sum of the resulting products, to get

    c11 = a11 b11 + a12 b21

Similarly, for c12, we take the first row in A (since i = 1) and the second column in B (since j = 2), and calculate the indicated sum of products, in accordance with the lower panel of Fig. 4.1, as follows:

    c12 = a11 b12 + a12 b22

By the same token, we should also have

    c13 = a11 b13 + a12 b23

† The matrix A, being a row vector, would normally be denoted by a'. We use the symbol A here to stress the fact that the multiplication rule being explained applies to matrices in general, not only to the product of one vector and one matrix.

FIGURE 4.1 (Each panel pairs the elements of a row of A sequentially with those of a column of B: first pair, second pair, and so on.)

It is the particular pairing requirement in this process that necessitates the matching of the column dimension of the lead matrix and the row dimension of the lag matrix before multiplication can be performed.

The multiplication procedure illustrated in Fig. 4.1 can also be described by using the concept of the inner product of two vectors. Given two vectors u and v with n elements each, say, (u1, u2, ..., un) and (v1, v2, ..., vn), arranged either as two rows or as two columns or as one row and one column, their inner product, written as u · v (with a dot in the middle), is defined as

    u · v = u1 v1 + u2 v2 + ... + un vn

This is a sum of products of corresponding elements, and hence the inner product of two vectors is a scalar.
Example 7

If, after a shopping trip, we arrange the quantities purchased of n goods as a row vector Q' = [Q1  Q2  ...  Qn], and list the prices of those goods in a price vector P' = [P1  P2  ...  Pn], then the inner product of these two vectors is

    Q' · P' = Q1 P1 + Q2 P2 + ... + Qn Pn = total purchase cost
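As a minimal sketch, the shopper's total cost can be computed as an inner product in Python; the quantities and prices below are made-up values for illustration, not data from the text:

```python
def inner_product(u, v):
    """Inner product of two n-element vectors: the sum of products
    of corresponding elements. The result is a scalar."""
    assert len(u) == len(v), "both vectors must have n elements"
    return sum(ui * vi for ui, vi in zip(u, v))

# Hypothetical shopping trip with n = 3 goods
Q = [2, 1, 4]        # quantities purchased
P = [3.0, 5.0, 1.5]  # unit prices
total_cost = inner_product(Q, P)  # 2(3.0) + 1(5.0) + 4(1.5) = 17.0
print(total_cost)
```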

Using this concept, we can describe the element cij in the product matrix C = AB simply as the inner product of the ith row of the lead matrix A and the jth column of the lag matrix B. By examining Fig. 4.1, we can easily verify the validity of this description.

The rule of multiplication just outlined applies with equal validity when the dimensions of A and B are other than those illustrated in Fig. 4.1; the only prerequisite is that the conformability condition be met.
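The general rule, conformability check included, can be sketched in a few lines of Python using plain nested lists (the particular matrices chosen below are illustrative, not from the text):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows.
    Conformability: the column dimension of the lead matrix A
    must equal the row dimension of the lag matrix B."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    if n != p:
        raise ValueError(f"not conformable: A is {m}x{n}, B is {p}x{q}")
    # c_ij = inner product of the ith row of A and the jth column of B
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(q)]
            for i in range(m)]

# A is 1x2 and B is 2x3, so AB is defined and is 1x3; BA is not defined.
A = [[1, 2]]
B = [[3, 4, 5],
     [6, 7, 8]]
print(matmul(A, B))  # [[1(3)+2(6), 1(4)+2(7), 1(5)+2(8)]] = [[15, 18, 21]]
```

Attempting `matmul(B, A)` raises the conformability error, mirroring the text's remark that BA is undefined here.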

Example 8

Given

    A = [1  3]        B = [5]
        [2  8]            [9]
        [4  0]          (2 x 1)
      (3 x 2)

find AB. The product AB is indeed defined because A has two columns and B has two rows. Their product matrix should be 3 x 1, a column vector:

    AB = [1(5) + 3(9)]   [32]
         [2(5) + 8(9)] = [82]
         [4(5) + 0(9)]   [20]
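Example 8's arithmetic can be double-checked with a few lines of Python, again treating each entry of AB as an inner product of a row of A with the single column of B:

```python
A = [[1, 3],
     [2, 8],
     [4, 0]]
B = [[5],
     [9]]
# Each entry of AB is the inner product of a row of A with the column of B
AB = [[sum(A[i][k] * B[k][0] for k in range(2))] for i in range(3)]
print(AB)  # [[32], [82], [20]]
```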

Example 9

Given

find AB. The same rule of multiplication now yields a very special product matrix:

    AB = [1  0  0]
         [0  1  0]
         [0  0  1]

This last matrix, a square matrix with 1s in its principal diagonal (the diagonal running from northwest to southeast) and 0s everywhere else, exemplifies the important type of matrix known as the identity matrix. This will be further discussed in Section 4.5.
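A defining property of the identity matrix, previewed here and taken up in Section 4.5, is that multiplying by it leaves any conformable matrix unchanged. A minimal Python sketch (plain lists, no external library; the 3 x 3 matrix A is the coefficient matrix used with Ax in the next example):

```python
def identity(n):
    """n x n identity matrix: 1s on the principal diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Sum-of-products rule for matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[6, 3, 1],
     [1, 4, -2],
     [4, -1, 5]]
# Multiplying by the identity, on either side, returns A itself
assert matmul(A, identity(3)) == A
assert matmul(identity(3), A) == A
```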

Example 10

Let us now take the matrix A and vector x as defined in (4.4) and find Ax. The product matrix is a 3 x 1 column vector:

    Ax = [6   3   1] [x1]   [6x1 + 3x2 +  x3]
         [1   4  -2] [x2] = [ x1 + 4x2 - 2x3]
         [4  -1   5] [x3]   [4x1 -  x2 + 5x3]

Note: The product on the right is a column vector, its corpulent appearance notwithstanding! When we write Ax = d, therefore, we have

    [6x1 + 3x2 +  x3]   [22]
    [ x1 + 4x2 - 2x3] = [12]
    [4x1 -  x2 + 5x3]   [10]

which, according to the definition of matrix equality, is equivalent to the statement of the entire equation system in (4.3).

Note that, to use the matrix notation Ax = d, it is necessary, because of the conformability condition, to arrange the variables xj into a column vector, even though these variables are listed in a horizontal order in the original equation system.
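One payoff of the compact form Ax = d is that the whole system can be handed to a numerical solver at once. A sketch with NumPy (assuming it is available), using the coefficient matrix and constant vector displayed above:

```python
import numpy as np

# Coefficient matrix A and constant vector d from Ax = d above
A = np.array([[6, 3, 1],
              [1, 4, -2],
              [4, -1, 5]], dtype=float)
d = np.array([22, 12, 10], dtype=float)

x = np.linalg.solve(A, d)     # solves the three-equation system
print(x)                      # x1 = 2, x2 = 3, x3 = 1
assert np.allclose(A @ x, d)  # Ax reproduces d
```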

Example 11

The simple national-income model in two endogenous variables Y and C,

    Y = C + I0 + G0
    C = a + bY

can be rearranged into the standard format of (4.1) as follows:

     Y - C = I0 + G0
    -bY + C = a

Hence the coefficient matrix A, the vector of variables x, and the vector of constants d are

    A = [ 1  -1]        x = [Y]        d = [I0 + G0]
        [-b   1]            [C]            [   a   ]
      (2 x 2)             (2 x 1)        (2 x 1)

Let us verify that this given system can be expressed by the equation Ax = d. By the rule of matrix multiplication, we have

    Ax = [ 1  -1] [Y]   [1(Y) + (-1)(C)]   [ Y - C ]
         [-b   1] [C] = [-b(Y) +  1(C) ] = [-bY + C]

Thus the matrix equation Ax = d would give us

    [ Y - C ]   [I0 + G0]
    [-bY + C] = [   a   ]

Since matrix equality means the equality between corresponding elements, it is clear that the equation Ax = d does precisely represent the original equation system, as expressed in the (4.1) format.
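The same matrix form makes the model easy to solve numerically. A sketch with NumPy, using hypothetical parameter values not taken from the text (b is the marginal propensity to consume, a the autonomous consumption, and I0 + G0 the exogenous expenditure):

```python
import numpy as np

# Hypothetical parameter values for illustration
b, a = 0.75, 100.0   # marginal propensity to consume; autonomous consumption
IG = 200.0           # I0 + G0

A = np.array([[1.0, -1.0],
              [-b,   1.0]])   # coefficient matrix of the model
d = np.array([IG, a])

Y, C = np.linalg.solve(A, d)
# Consistent with the closed form Y* = (I0 + G0 + a) / (1 - b)
assert abs(Y - (IG + a) / (1 - b)) < 1e-9
print(Y, C)  # 1200.0 1000.0
```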
