Let's summarize the problem.
We are given matrices W and V such that W = UV for some matrix U.
(1) The solution U is unique if and only if the rows of V are linearly
independent. That is, if V has m rows, uniqueness holds if and only if
rank(V) = m.
To prove this, consider any pair of solutions U and U'. Then:
(U - U')V = UV - U'V = W - W = 0
Thus each row of U - U' times V gives a zero row. If U and U' are
unequal, then some row of U - U' is nonzero, and that row gives a
corresponding linear dependence relation on the rows of V. On the
other hand, if there were a linear dependence relation on the rows of
V, its coefficients would form a nonzero row vector z such that zV
= 0, and hence by adding z to any row of U (such as the first row), we
could construct a solution U' different from U.
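As a concrete illustration, here is a small NumPy sketch of that construction (the particular matrices V, U, z are made up for this example, not taken from the original question):

```python
import numpy as np

# A rank-deficient V: its second row is twice the first, so m = 2 but rank(V) = 1.
V = np.array([[1., 2., 3.],
              [2., 4., 6.]])
U = np.array([[1., 0.],
              [0., 1.]])
W = U @ V

# The dependence relation 2*(row 1) - (row 2) = 0 gives z with zV = 0.
z = np.array([2., -1.])
assert np.allclose(z @ V, 0)

# Adding z to the first row of U yields a different solution U'.
U2 = U.copy()
U2[0] += z
assert np.allclose(U2 @ V, W)      # U' is also a solution
assert not np.allclose(U2, U)      # ... yet U' differs from U
```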
(2) By the same argument, if V has m rows and rank(V) < m, then there
are infinitely many solutions. In fact, given one solution U such that
W = UV, we can construct the set of all solutions as U + {Y | YV = 0},
where 0 here denotes the zero matrix of the same dimensions as W. The
rows of such matrices Y are row vectors z such that zV = 0, i.e. the
"left" null space of V. These rows z form a subspace of dimension
k = m - rank(V), so the dimension of {Y | YV = 0} is nk when W (and
thus U) has n rows.
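A short NumPy sketch of this solution set (the matrix V here is hypothetical, chosen so that k = 1; the left null space is read off from the zero singular values of V):

```python
import numpy as np

# Hypothetical rank-deficient example: m = 3 rows but rank(V) = 2,
# since row 3 = row 1 + row 2, so k = m - rank(V) = 1.
V = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 1., 1.]])
m = V.shape[0]
r = np.linalg.matrix_rank(V)

# Left null space {z | zV = 0}: the left singular vectors of V whose
# singular values are zero (columns r..m-1 of the full U factor).
u_svd, s, vt = np.linalg.svd(V)
k = m - r
z_basis = u_svd[:, r:]              # shape (m, k)
assert np.allclose(z_basis.T @ V, 0)

# Any Y whose n rows are combinations of these z vectors satisfies YV = 0,
# so with n rows the space {Y | YV = 0} has dimension n*k.
n = 2
coeffs = np.array([[1.], [-2.]])    # arbitrary n-by-k coefficients
Y = coeffs @ z_basis.T
assert np.allclose(Y @ V, 0)
```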
(3) So if rank(V) < m, the solution U is never unique.
regards, mathtalk
Clarification of Answer by mathtalk-ga on 25 Feb 2003 11:19 PST
Hi, elgoog_elgoog-ga:
A very concise review of the properties of linear independence and
spanning set as applied to rows and columns of matrices is given here:
[Vector Spaces]
http://www.public.asu.edu/~sergei/classes/mat242f99/LinAlg3.doc
In particular the last theorem given there is that row rank and column
rank (the dimensions of the row space and column space) of a matrix
are equal. A last note mentions that if matrix A has n columns:
rank(A) + dim Null(A) = n
i.e. the rank of a matrix plus its "nullity" (dimension of its null
space, the subspace of solutions to Ax = 0) gives the number of
columns in A.
One way to apply this result directly to your problem is by taking a
transpose. Note that the null space of A is:
Null(A) = {x | Ax = 0}
where in your problem the subspace of interest is {z | zV = 0}. One
normally places the "unknown" as a column to the right of the
"coefficient" matrix in a linear system, but in the problem here the
placement is reversed.
Let's now use ' to denote transpose. The order of terms in matrix
multiplication is reversed by taking transposes, cf.
[Matrix Multiplication]
(scroll to middle of page)
http://ccrma-www.stanford.edu/~jos/mdft/Matrix_Multiplication.html
Thus, where W = UV, after taking transposes on both sides:
V'U' = W'
We can thus solve for each column x of U' by using the corresponding
column b of W':
V'x = b
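The column-by-column solve can be sketched in NumPy as follows (the matrices are made up for illustration; here V has full row rank, so each column x is unique and we recover U exactly):

```python
import numpy as np

# Full-row-rank example: V has m = 2 rows and rank(V) = 2, so U is unique.
V = np.array([[1., 0., 2.],
              [0., 1., 3.]])
U_true = np.array([[1., 2.],
                   [3., 4.]])
W = U_true @ V

# Solve V'x = b for each column b of W' (lstsq handles all columns at once);
# stacking the solutions x as columns of U' and transposing recovers U.
Ut, *_ = np.linalg.lstsq(V.T, W.T, rcond=None)
assert np.allclose(Ut.T, U_true)
```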
The solution is known to exist (see our earlier discussion); the
question is that of uniqueness. But uniqueness is related by
"superposition" to nontrival solutions of the homogeneous problem:
V'x = 0
since distinct solutions x,y such that V'x = V'y = b would give:
V'(x - y) = b - b = 0
The system V'x = 0 has only the trivial solution x = 0 if and only if
its null space Null(V') is the trivial subspace {0}, of dimension 0.
We saw by the reference cited above that:
rank(V') + dim Null(V') = m
where m is the number of rows of V (number of columns of V'). So to
complete the circle of ideas, dim Null(V') = 0 if and only if rank(V)
= rank(V') = m.
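This rank-nullity bookkeeping is easy to check numerically; a small NumPy sketch (with a hypothetical V of m = 3 rows and rank 2, so uniqueness fails):

```python
import numpy as np

# m = 3 rows, but the rows span only a 2-dimensional space, so rank(V) = 2 < m.
V = np.array([[1., 2.],
              [2., 4.],
              [3., 5.]])
m = V.shape[0]
r = np.linalg.matrix_rank(V.T)       # rank(V') = rank(V)

# dim Null(V') = (number of columns of V') - rank(V') = m - r,
# counted here via the zero singular values of V'.
s = np.linalg.svd(V.T, compute_uv=False)
nullity = m - int(np.sum(s > 1e-10))
assert r + nullity == m              # rank(V') + dim Null(V') = m
assert nullity > 0                   # rank(V) < m, so U is not unique
```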
For a good textbook discussion of these results, you might look for a
copy of:
[Howard Anton's Elementary Linear Algebra]
http://www.amazon.com/exec/obidos/ASIN/0471170550/002-0009224-4489603
regards, mathtalk