Hi again madukar!!
In order to give you an understandable answer I looked for web pages
that present the proof that you need; this also solves the problem of
the symbols (notation).
I will assume that you know the definition of a companion matrix and
the following fact:
If p(x) = a0 + a1.x + ... + a(n-1).x^(n-1) + x^n and C is the
corresponding companion matrix, then the characteristic polynomial of
C is p(x).
(This fact is easy to prove by calculating det(x.I - C) and using
induction.)
To see this definition with good notation, please visit:
"Companion Matrix -- from MathWorld":
http://mathworld.wolfram.com/CompanionMatrix.html
Now you simply have an nxn matrix A (the companion matrix of P(x))
whose characteristic polynomial is P(x).
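If you want to check this fact numerically, here is a minimal sketch
(my own illustration, using Python with NumPy, which is not something
from your question); it builds the companion matrix of a small example
polynomial and asks NumPy for its characteristic polynomial:

    import numpy as np

    # Example polynomial (my own choice): p(x) = x^3 - 6x^2 + 11x - 6,
    # i.e. a0 = -6, a1 = 11, a2 = -6 (monic, leading coefficient 1).
    a = [-6.0, 11.0, -6.0]            # a0, a1, ..., a(n-1)
    n = len(a)

    # Companion matrix C: ones on the subdiagonal, -a_i in the last column.
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = [-ai for ai in a]

    # np.poly(C) returns the coefficients of det(x.I - C),
    # from x^n down to the constant term.
    print(np.poly(C))                 # -> [ 1. -6. 11. -6.]

The printed coefficients are exactly those of p(x), as the fact above
states.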
Definition: a linear transformation T: V --> V is diagonalizable if T
can be represented by a diagonal matrix D.
Consider the fact that eigenvectors {V1, ..., Vj} belonging to
distinct eigenvalues X1, ..., Xj are linearly independent.
See Theorem 4.1 at "University of Adelaide - Linear Algebra -
EIGENVECTORS AND EIGENVALUES - DEFINITIONS AND PROPERTIES":
http://www.maths.adelaide.edu.au/pure/pscott/linear_algebra/lapf/41.html
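Just to see this theorem numerically, here is a small sketch (again my
own illustration with NumPy; the 3x3 matrix is the companion matrix
from the previous snippet, not anything from your question):

    import numpy as np

    A = np.array([[0., 0.,   6.],
                  [1., 0., -11.],
                  [0., 1.,   6.]])       # companion matrix used above

    vals, vecs = np.linalg.eig(A)        # columns of vecs are V1,...,Vn
    print(np.round(vals, 6))             # distinct eigenvalues 1, 2, 3 (in some order)
    # Linear independence <=> the matrix with columns V1,...,Vn has full rank.
    print(np.linalg.matrix_rank(vecs))   # -> 3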
Then if the nxn matrix A has n distinct eigenvalues, the respective
eigenvectors {V1,...,Vn} (which are all linearly independent) form a
basis of R^n. Now if A is the matrix of a linear transformation T in
the canonical basis, then T(Vi) = Xi.Vi for each i. So with respect to
the basis {V1,...,Vn}, the matrix of T is
                  | X1   0  ...   0 |
D = [Xi.d(ij)] =  |  0  X2  ...   0 |
                  | ............... |
                  |  0   0  ...  Xn |
where d(ij) is the Kronecker delta:
d(ij) = 0 if i is different from j, and d(ij) = 1 if i = j.
Such a change of basis of V replaces the matrix A of T by
D = (P)^-1.A.P, where P is the invertible change-of-basis matrix whose
columns are the eigenvectors V1,...,Vn.
This proves the second part of the proposition.
See for more reference Theorem 4.7 at "University of Adelaide - Linear
Algebra - EIGENVECTORS AND EIGENVALUES - DIAGONALIZABILITY":
http://www.maths.adelaide.edu.au/pure/pscott/linear_algebra/lapf/43.html
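Here is a quick numerical check of this change-of-basis step (a sketch
with NumPy, my own illustration; the 3x3 matrix is the same small
example as before): the columns of P are the eigenvectors, and
(P)^-1.A.P comes out diagonal.

    import numpy as np

    A = np.array([[0., 0.,   6.],
                  [1., 0., -11.],
                  [0., 1.,   6.]])       # same example matrix as before

    vals, P = np.linalg.eig(A)           # P has the eigenvectors V1,...,Vn as columns
    D = np.linalg.inv(P) @ A @ P         # change of basis: D = (P)^-1.A.P
    print(np.round(D, 6))                # diagonal matrix diag(X1, X2, X3)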
Now suppose that the matrix A is diagonalizable; then by definition
there exist an invertible matrix B and a diagonal matrix D such that:
D = (B)^-1.A.B = [Xi.d(ij)], where d(ij) is the Kronecker delta.
We will use some properties of determinants.
The characteristic polynomial of D is
P(x) = det(x.I - D) = (x - X1).(x - X2). ... .(x - Xn)     (eq.1)
then
P(x) = det(x.I - (B)^-1.A.B)
     = det(x.((B)^-1.B) - (B)^-1.A.B)
     = det((B)^-1.(x.I - A).B)          (distributivity of the matrix product)
     = det((B)^-1).det(x.I - A).det(B)  (the determinant is multiplicative)
     = det(x.I - A)                     (since det((B)^-1).det(B) = 1)   (eq.2)
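This equality of characteristic polynomials under similarity is easy
to check numerically as well (a sketch with NumPy, my own
illustration; B here is just a random invertible matrix picked for the
test):

    import numpy as np

    A = np.array([[0., 0.,   6.],
                  [1., 0., -11.],
                  [0., 1.,   6.]])       # same example matrix as before

    rng = np.random.default_rng(0)
    B = rng.standard_normal((3, 3))      # a random matrix, almost surely invertible
    S = np.linalg.inv(B) @ A @ B         # a matrix similar to A

    # det(x.I - A) and det(x.I - (B)^-1.A.B) have the same coefficients:
    print(np.round(np.poly(A), 6))
    print(np.round(np.poly(S), 6))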
Comparing eq.1 and eq.2 we have:
P(x) = det(x.I - A) = (x - X1).(x - X2). ... .(x - Xn).
X1,...,Xn are the n roots of this polynomial, so they satisfy:
det(Xi.I - A) = 0 for i = 1 to n.
Then the matrix (Xi.I - A) is singular (not invertible) for all
i = 1 to n,
so for each i there exists a nonzero vector Vi that satisfies:
(Xi.I - A).Vi = 0,
then 0 = Xi.I.Vi - A.Vi = Xi.Vi - A.Vi,
and finally we obtain A.Vi = Xi.Vi for i = 1 to n.
Then by definition Xi is an eigenvalue of A for all i = 1 to n.
That completes the proof of the proposition.
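If you want to see this last step in numbers, here is a closing sketch
(NumPy again, my own illustration): for each root Xi of the
characteristic polynomial, (Xi.I - A) has determinant essentially
zero, and a nonzero null vector Vi of it satisfies A.Vi = Xi.Vi.

    import numpy as np

    A = np.array([[0., 0.,   6.],
                  [1., 0., -11.],
                  [0., 1.,   6.]])       # same example matrix as before

    roots = np.roots(np.poly(A))                 # the roots X1,...,Xn of det(x.I - A)
    for xi in roots:
        M = xi * np.eye(3) - A                   # (Xi.I - A) ...
        print(round(abs(np.linalg.det(M)), 6))   # ... is singular: det ~ 0
        # A nonzero null vector Vi of M: last right-singular vector of its SVD.
        Vi = np.linalg.svd(M)[2][-1]
        print(np.round(A @ Vi - xi * Vi, 6))     # A.Vi - Xi.Vi ~ 0, i.e. A.Vi = Xi.Vi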
You can find more references at "The University of Queensland"
"Eigenvalues and Eigenvectors":
http://www.maths.uq.edu.au/courses/MATH2000/LinAlg/index.php?Eigen
"Properties of Eigenvalues and Eigenvectors":
http://www.maths.uq.edu.au/courses/MATH2000/LinAlg/index.php?Eigen1
"Diagonalization of Matrices with Eigenvalues and Eigenvectors":
http://www.maths.uq.edu.au/courses/MATH2000/LinAlg/index.php?Diag
Search strategy:
My own knowledge; for online references I used the following
keywords:
companion matrix
eigenvalues properties
Search engine:
Google
Search results pages:
http://www.google.com/search?q=companion+matrix&hl=en&lr=&ie=UTF-8&start=0&sa=N
http://www.google.com/search?q=eigenvalues%20properties
I hope this helps you. If you need some more help with this topic,
please post a request for clarification and I will be pleased to
respond to it.
Best Regards.
livioflores-ga