
then, since [6 8 10] = 2[3 4 5], we have v'1 = 2v'1 = 2v'1 + 0v'2, that is, v'3 = 2v'1 + 0v'2. Thus the third row is expressible as a linear combination of the first two, and the rows are not linearly independent. Alternatively, we may write the above equation as

2v'1 + 0v'2 - v'3 = [6 8 10] + [0 0 0] - [6 8 10] = [0 0 0]    (5.4)

Inasmuch as the set of scalars that led to the zero vector in (5.4) is not k_i = 0 for all i, it follows that the rows are linearly dependent.
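The same conclusion can be checked numerically. The following is a minimal sketch, assuming NumPy is available (the variable names are chosen here for illustration, not taken from the text): it verifies that the scalars 2, 0, -1 combine the three rows into the zero vector, and that the stacked matrix has rank 2 rather than 3.

```python
import numpy as np

# Rows of the coefficient matrix from the example above.
v1 = np.array([3, 4, 5])
v2 = np.array([0, 1, 2])
v3 = np.array([6, 8, 10])

# The nontrivial combination 2*v1 + 0*v2 - 1*v3 yields the zero vector,
# so the rows are linearly dependent.
print(2 * v1 + 0 * v2 - 1 * v3)    # [0 0 0]

# Equivalently, the rank of the 3x3 matrix falls short of 3.
A = np.vstack([v1, v2, v3])
print(np.linalg.matrix_rank(A))    # 2
```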

Unlike the squareness condition, the linear-independence condition cannot normally be ascertained at a glance. Thus a method of testing linear independence among rows (or columns) needs to be developed. Before we concern ourselves with that task, however, it would strengthen our motivation first to have an intuitive understanding of why the linear-independence condition is heaped together with the squareness condition at all. From the discussion of counting equations and unknowns in Sec. 3.4, we recall the general conclusion that, for a system of equations to possess a unique solution, it is not sufficient to have the same number of equations as unknowns. In addition, the equations must be consistent with and functionally independent (meaning, in the present context of linear systems, linearly independent) of one another. There is a fairly obvious tie-in between the "same number of equations as unknowns" criterion and the squareness (same number of rows and columns) of the coefficient matrix. What the "linear independence among the rows" requirement does is to preclude both inconsistency and linear dependence among the equations. Taken together, therefore, the dual requirement of squareness and row independence in the coefficient matrix is tantamount to the conditions for the existence of a unique solution enunciated in Sec. 3.4.
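As a sketch of this equivalence in the same NumPy setting (the right-hand-side vector d and the altered third row are arbitrary choices for illustration), a square coefficient matrix with linearly independent rows yields exactly one solution, while the dependent-row matrix from the example above admits no unique solution:

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0])  # arbitrary right-hand side for illustration

# Dependent rows: the matrix is singular, so no unique solution exists
# and solve() raises an error.
A_dep = np.array([[3, 4, 5], [0, 1, 2], [6, 8, 10]], dtype=float)
try:
    np.linalg.solve(A_dep, d)
except np.linalg.LinAlgError as err:
    print("no unique solution:", err)

# Independent rows (third row altered so it is no longer 2 times the
# first): a unique solution exists.
A_ind = np.array([[3, 4, 5], [0, 1, 2], [6, 8, 11]], dtype=float)
print(np.linalg.solve(A_ind, d))   # the unique x satisfying A_ind @ x == d
```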

Let us illustrate how linear dependence among the rows of the coefficient matrix can cause inconsistency or linear dependence among the equations themselves, by considering an equation system Ax = d in which one row of the coefficient matrix is a multiple of another.

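A minimal concrete sketch (the particular entries below are chosen for illustration; any coefficient matrix with one row a multiple of another exhibits the same behavior):

\[
\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} d_1 \\ d_2 \end{bmatrix}
\]

Because the second row of the coefficient matrix is twice the first, the left-hand side of the second equation is exactly twice that of the first. If d_2 = 2d_1, the second equation merely restates the first, and the system reduces to one equation in two unknowns, with infinitely many solutions; if d_2 is not equal to 2d_1, the two equations contradict each other, and no solution exists at all. In neither case can a unique solution be found.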
