 
Q: To :livioflores-ga ( Answered 5 out of 5 stars,   3 Comments )
Question  
Subject: To :livioflores-ga
Category: Science > Math
Asked by: joannehuang-ga
List Price: $65.00
Posted: 30 Apr 2003 06:04 PDT
Expires: 30 May 2003 06:04 PDT
Question ID: 197411
Hi livioflores-ga : 
I have a question about linear algebra:
Let dim(V) = n = 2k+1 and let T belong to L(V) satisfy T^n = 0 but
T^(n-1) not equal to 0. Prove that there is no operator S in L(V) for
which S^2 = T.

I still have 2-3 questions about linear algebra but I couldn't type
them in here. May I ask you the questions via e-mail? If you could, I
can add those to the tips.
I really appreciate your help.

Request for Question Clarification by livioflores-ga on 30 Apr 2003 09:42 PDT
Hi joannehuang!!

I just saw your question and I will work on it. Regarding the other
questions: as mathtalk says, I cannot give you my e-mail. I think
the other questions need graphs or a better format to be posted, so I
suggest you open a free web page at Yahoo! GeoCities Free, for
example, and upload the graphs or the text or the file (Word or Excel
or whatever) to it.
Go to http://geocities.yahoo.com/ps/learn2/HowItWorks4_Free.html

It is easy to do, but if you need help with this, ask me.

Regards.
livioflores-ga

Clarification of Question by joannehuang-ga on 30 Apr 2003 19:48 PDT
Hi livioflores-ga:
Following your suggestion, I created a webpage. You can link to:
http://www.geocities.com/appliedlinear2000/1.GIF
I really appreciate your help!!
If you have trouble linking to this page, please let me know.

Clarification of Question by joannehuang-ga on 02 May 2003 11:54 PDT
To: livioflores-ga:
Thanks for working on those questions.
I think the link to the questions needs to change to:

http://www.geocities.com/appliedlinear2000/index.html

Thanks

Request for Question Clarification by livioflores-ga on 03 May 2003 09:56 PDT
Hi joannehuang-ga!!

Excuse the delay, but I had problems with my computer. Also I must
tell you that I can't answer your question; I think that I am close,
but the topic is new for me. I suggest you cancel this question and
repost it in order to receive an answer; maybe mathtalk or another
researcher can do the task.
I think the solution is related to the Cayley-Hamilton theorem and is
surely related to the minimal polynomial of S, as mathtalk suggests.
I also recommend that you post 3 more questions for the
http://www.geocities.com/appliedlinear2000/1.GIF
problems. The best way to post these questions is in the form:
Question 1:
"Can you solve problem 1 of the page:
http://www.geocities.com/appliedlinear2000/1.GIF "

Question 2:
"Can you solve problem 2 of the page:
http://www.geocities.com/appliedlinear2000/1.GIF "

etc.

Thank you, and please excuse me again for the delay and for being
unable to solve the problems.
Regards.
livioflores-ga

Request for Question Clarification by mathtalk-ga on 03 May 2003 10:39 PDT
Hi, joannehuang-ga:

To avoid in part another $0.50 posting charge, I'd be happy to go
ahead and answer this question (proving T is not of the form S^2). 
Let me know if you'd like for me to do that.  I'm leaving the question
unlocked in case you'd prefer to expire it or reduce the list price.

regards, mathtalk-ga

Clarification of Question by joannehuang-ga on 03 May 2003 13:03 PDT
To: mathtalk-ga:
I'm glad you can help me solve the question. Another thing:

Can you solve the problems on the page
http://www.geocities.com/appliedlinear2000/index.html
If you can, I will add to the tips. If you have trouble linking to
the page, please let me know.
Thanks
P.S. Thanks to livioflores-ga

Request for Question Clarification by mathtalk-ga on 03 May 2003 13:37 PDT
Hi, joannehuang-ga:

There are three problems on that page, each with more than one part. 
It seems very difficult for me to offer an opinion about doing these
problems as part of the one you've already posted.

Please read these guidelines from Google Answers about pricing
questions:
 
http://answers.google.com/answers/pricing.html 

Multi-part questions are normally priced at $50 and above.
 
Of course the best guide is for you to price the question based on its
value to you.  If you like, reduce the price offered for the current
question by $0.50 and "use" that to pay for posting a separately
priced question on the others.

regards, mathtalk-ga

Clarification of Question by joannehuang-ga on 03 May 2003 16:38 PDT
To: mathtalk-ga:
If I increase the price to $65, would you answer those questions, or
would you prefer I post them again for other researchers?
Thanks

Request for Question Clarification by mathtalk-ga on 03 May 2003 17:22 PDT
If you would prefer to have me answer all of them, then go ahead and
raise the price.  One advantage to posting the problems separately is
that a number of researchers can tackle them.  However I'm sure I
could do all of the problems in a reasonable period of time.

regards, mathtalk-ga

Request for Question Clarification by mathtalk-ga on 03 May 2003 20:30 PDT
Hi, joannehuang-ga:

Below I have transcribed your "scanned" questions into an ASCII
facsimile.  Please review my choice of notations and ask me about any
usage which seems unclear or mistaken.

I have a few points that I'd like you to confirm or clarify.  In the
first problem I have interpreted the vector space P_2(R) to mean real
polynomials of degree at most 2.  Please compare this interpretation
with what you know from the surrounding context from which these
problems were taken.

In the second problem there are two related issues that must be pinned
down.  For one thing, the most likely interpretation for T*T, in my
opinion, is the composition of linear transformation T with its
adjoint; the result is then self-adjoint and it makes sense to speak
then of sqrt( T*T ).  The other issue is the "order" of composition,
which arises both in connection with composing T with its adjoint and
also in connection with composing sqrt( T*T ) with isometry S.  On my
side of the Atlantic pond (American), we write:

S ° sqrt( T*T )

to mean first "do" sqrt( T*T ) and then apply S to its result. 
However, British practice (and perhaps that of Europeans generally,
I'm not sure), is the opposite; it would mean first "do" S and then
apply sqrt( T*T ) to its result.

The "order" of composition is then really implicit in defining T*T (as
a composition of T with its adjoint) and in the expression S ° sqrt(
T*T ).  I ask that you give your opinion on what order was intended by
the author of these problems.

Fortunately (after such dyslexic puzzlement on problem 2) I have no
questions about the interpretation of problem 3 (see below).  It is
straightforward.

These are in addition to the original question posted above:

1.  Let V = P_2(R), polynomials of degree at most 2, be a real inner
product space with <f(x),g(x)> = INTEGRAL f(x)g(x) dx OVER [0,1].

Let U be the subspace of V defined by U = { f(x) in V : f(1) = 0 }.

(a)  Construct a basis B_1 for U and prove that it is indeed a basis.

(b)  Use the Gram-Schmidt method on the basis B_1 to construct an
orthonormal basis B_2 for U.

(c)  Determine an f(x) in U such that:

INTEGRAL |1 + x - f(x)|^2 dx

is as small as possible.

2.  Let <,> be the usual (Euclidean) inner product on R^3, and let
T be a linear transformation on R^3 defined by:

T(x,y,z) = (y,2z,0)

(a)  Compute sqrt( T*T ).

(b)  Compute orthonormal bases for each of:

V_1 = range(sqrt( T*T )),

V_2 = range(sqrt( T*T ))^perp = orthogonal complement of V_1,

V_3 = range(T), and

V_4 = range(T)^perp = orthogonal complement of V_3.

(c)  Compute an isometry S for which T = S ° sqrt( T*T ).

(d)  Compute the singular value decomposition of T.

            / 0 0 1 \
3.  Let A = | 1 0 0 | .
            \ 0 1 0 /

(a)  Determine the Jordan form for A over the reals R.

(b)  Determine the Jordan form for A over the complex numbers C.

Final note:  Let me know if you would like me to post proofs as
quickly as I have them worked out (as I already have for the original
problem).  Otherwise I shall wait until all four answers have been
worked out before posting any answer.

regards, mathtalk-ga

Clarification of Question by joannehuang-ga on 03 May 2003 22:50 PDT
mathtalk-ga:
First, about the "order" of composition: you can "do" sqrt( T*T )
and then apply S to its result.
Second, you can wait until all four answers have been worked out.
That's OK with me.
Thanks
joannehuang-ga
Answer  
Subject: Re: To :livioflores-ga
Answered By: mathtalk-ga on 05 May 2003 06:38 PDT
Rated:5 out of 5 stars
 
Hi, joannehuang-ga:

I've numbered the problems 0 to 3 and interspersed the solutions
among the parts of the problem statements.  In problem 2 I've used
both operator expressions and matrix representations to cover the
same material, thinking that the combination of both approaches may
be helpful to your understanding.

regards, mathtalk-ga

0.  Let dim(V)=n=2k+1, and let T:V -> V be a linear transformation
satisfying T^n=0 but not T^(n-1)=0.

Prove that there is no linear operator S for which S^2=T.

Proof:  First note that we must assume k > 0, for if n = 1, then
there exists a trivial counterexample T = 0. (Recall T^0 = I by 
convention.)

Given k > 0, the assumptions of the problem amount to saying that
f(x) = x^n is the minimal polynomial for T, since f(T) = 0 but no
proper divisor of f(x) satisfies that condition.

If there were S such that S^2 = T, then the minimal polynomial for
S would have to divide x^(2n), since S^(2n) = T^n = 0.  Now the 
characteristic polynomial of S has degree n, so the degree of the
minimal polynomial is at most n.  Hence S^n = 0 because the minimal
polynomial for S must be x^m with m <= n.

Since S^n = 0, also S^(n+1) = S^(2(k+1)) = (S^2)^(k+1) = T^(k+1) = 0.  But:

k+1 < 2k+1 = n (because k > 0)

would contradict that x^n is the minimal polynomial of T.  Therefore
no such S can exist.  QED
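The hypotheses and the key contradiction can be illustrated for the smallest nontrivial case n = 3 (k = 1), where T is a single nilpotent Jordan block. This is only a sanity check of the setup, not the proof; it assumes Python with numpy, which is not part of the original discussion:

```python
import numpy as np

# A 3x3 nilpotent Jordan block: T^3 = 0 but T^2 != 0, matching the
# hypotheses with n = 3, k = 1.  If S with S^2 = T existed, the proof
# shows S^3 = 0 (its minimal polynomial divides x^6 but has degree <= 3),
# so T^2 = S^4 = S^3 S = 0 -- contradicting T^2 != 0 below.
T = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

assert np.all(np.linalg.matrix_power(T, 3) == 0)   # T^3 = 0
assert np.any(np.linalg.matrix_power(T, 2) != 0)   # T^2 != 0
```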

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *


1.  Let V = P_2(R), polynomials of degree at most 2, be a real inner
product space with <f(x),g(x)> = INTEGRAL f(x)g(x) dx OVER [0,1].

Let U be the subspace of V defined by U = { f(x) in V : f(1) = 0 }.


(a)  Construct a basis B_1 for U and prove that it is indeed a basis.

Answer:  Consider the standard basis {1,x,x^2} for V.  By inspection
each of these basis functions is 1 at x=1, so:

B_1 = {x - 1,x^2 - 1} 

contains two functions that are 0 at x=1, so B_1 is a subset of U.

We claim B_1 is a basis for subspace U.

Linear independence of B_1:  Suppose there exist scalars a,b s.t.

a(x - 1) + b(x^2 - 1) = 0 in P_2(R)

That is bx^2 + ax - (a+b) = 0 identically as polynomials.  Comparing
coefficients of x and x^2 on both sides shows us that a = b = 0.  So
the linear independence of B_1 is proven.

Spanning of U by B_1:  Let f(x) be any polynomial of degree at most
two, and suppose f(x) belongs to U.  That is:

f(x) = ax^2 + bx + c for some real coefficients a,b,c

f(1) = a + b + c = 0

c = -(a + b)

Thus f(x) = ax^2 + bx - (a + b) = a(x^2 - 1) + b(x - 1) shows us
that B_1 is a spanning set of U.

Since a basis is by definition a linearly independent spanning set,
we have shown B_1 is a basis for U.


(b)  Use the Gram-Schmidt method on the basis B_1 to construct an
orthonormal basis B_2 for U.

Answer:  The first step of Gram-Schmidt is to "normalize" the first
element of our basis.  Here the inner product is the integral, so:

<x - 1,x - 1> = INTEGRAL (x-1)^2 dx OVER [0,1]

              = INTEGRAL u^2 du OVER [0,1]  by substitution u = 1-x
              
              = 1/3

so the unit length multiple of x - 1 is sqrt(3)(x - 1).

The second step of Gram-Schmidt is to find the residue of the next
basis element after subtracting its projection onto the first element
and then normalizing this residue to have unit length.  A well-known
trick for avoiding "square roots" in the computation is to take the
residue like this:

(x^2 - 1) - (<x^2 - 1,x - 1>/<x - 1,x - 1>) (x - 1)

We already found the denominator of that scalar coefficient above, so
it remains only to find its numerator:

<x^2 - 1,x - 1> = INTEGRAL (x^2 - 1)(x - 1) dx OVER [0,1]

                = INTEGRAL (x + 1)(x - 1)^2 dx OVER [0,1]
                
                = INTEGRAL (2 - u) u^2 du OVER [0,1]  as before
                
                = (2/3) - (1/4) = 5/12

Therefore we have as a residue:

(x^2 - 1) - (5/4) (x - 1) = x^2 - (5/4)x + (1/4)

To normalize this polynomial we require its length (squared):

<x^2 - (5/4)x + 1/4,x^2 - (5/4)x + 1/4>

        = INTEGRAL (1/16)(4x^2 - 5x + 1)^2 dx OVER [0,1]
        
        = (1/16) INTEGRAL (4x - 1)^2 (x - 1)^2 dx OVER [0,1]
        
        = (1/16) INTEGRAL (3 - 4u)^2 u^2 du OVER [0,1]
        
        = (1/16) INTEGRAL 9u^2 - 24u^3 + 16u^4 du OVER [0,1]
        
        = (1/16) [ 3 - 6 + (16/5) ] = (1/16)(1/5)
        
After dividing by the appropriate square root, the normalized
residue becomes:

sqrt(5) (4x^2 - 5x + 1)

To summarize, the output of Gram-Schmidt gives us this:

B_2 = { sqrt(3) (x - 1), sqrt(5) (4x^2 - 5x + 1) }
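The orthonormality of B_2 can be double-checked in exact rational arithmetic by computing the inner-product integrals directly. A sketch assuming Python (the helper name `inner` is illustrative, not from the original answer); the square-root factors are avoided by checking the squared lengths 1/3 and 1/80 instead:

```python
from fractions import Fraction as F

def inner(f, g):
    # <f,g> = INTEGRAL f(x)g(x) dx over [0,1]; f, g are coefficient
    # lists [c0, c1, ...] meaning c0 + c1 x + c2 x^2 + ...
    prod = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    return sum(c / (k + 1) for k, c in enumerate(prod))  # INTEGRAL x^k = 1/(k+1)

u = [-1, 1]                   # x - 1
p = [F(1, 4), F(-5, 4), 1]    # the residue x^2 - (5/4)x + 1/4

assert inner(u, u) == F(1, 3)   # so sqrt(3)(x - 1) has unit length
assert inner(p, p) == F(1, 80)  # so sqrt(80) p = sqrt(5)(4x^2 - 5x + 1) has unit length
assert inner(u, p) == 0         # the two elements are orthogonal
```

Note sqrt(80)/4 = sqrt(5), which is why normalizing p produces exactly the second element of B_2.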


(c)  Determine an f(x) in U such that:

INTEGRAL |1 + x - f(x)|^2 dx

is as small as possible.

Answer:  The f(x) in U which minimizes this "norm squared" is just
the projection of "vector" 1 + x onto the subspace U of V.  Since
we have just computed an orthonormal basis B_2 for U, a natural way
to do this is to simply "project" 1 + x onto each of the two basis
elements in B_2 = {g(x),h(x)}:

f(x) = <1 + x, g(x)> g(x) + <1 + x, h(x)> h(x)

where g(x) = sqrt(3) (x - 1), h(x) = sqrt(5) (4x^2 - 5x + 1).

Once the appropriate integrals are evaluated, we find:

f(x) = -2(x - 1) - (5/3)(4x^2 - 5x + 1)

     = -(20/3)x^2 + (19/3)x + (1/3)
     
As a check on this answer I verified that f(x) - (1 + x) is indeed
orthogonal to both basis elements x - 1 and x^2 - 1 of B_1.
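That orthogonality check can be reproduced exactly with the same integral inner product. A sketch assuming Python (the helper name `inner` is illustrative):

```python
from fractions import Fraction as F

def inner(f, g):
    # <f,g> = INTEGRAL f(x)g(x) dx over [0,1]; coefficient lists [c0, c1, ...]
    prod = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    return sum(c / (k + 1) for k, c in enumerate(prod))

f = [F(1, 3), F(19, 3), F(-20, 3)]              # f(x) = -(20/3)x^2 + (19/3)x + 1/3
residual = [a - b for a, b in zip(f, [1, 1, 0])]  # f(x) - (1 + x)

assert sum(f) == 0                       # f(1) = 0, so f is in U
assert inner(residual, [-1, 1]) == 0     # orthogonal to x - 1
assert inner(residual, [-1, 0, 1]) == 0  # orthogonal to x^2 - 1
```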

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *


Note:  In this problem the expression T*T represents T' ° T, or the
composition of T' (the adjoint of T) with T (first do T, then T').

2.  Let <,> be the usual (Euclidean) inner product on R^3, and let
T be a linear transformation on R^3 defined by:

T(x,y,z) = (y,2z,0)


(a)  Compute sqrt( T*T ).

Answer:  Let's begin by determining the adjoint T', which is the
unique operator on R^3 such that:

<T(x,y,z),(a,b,c)> = <(x,y,z),T'(a,b,c)> 

for all vectors (x,y,z), (a,b,c) in R^3.  By comparison of terms on
both sides, it is found that:

<T(x,y,z),(a,b,c)> = ay + 2bz = <(x,y,z),(0,a,2b)>

T'(a,b,c) = (0,a,2b)

Therefore the composition T*T is like this:

T*T(x,y,z) = T'(T(x,y,z)) = T'(y,2z,0) = (0,y,4z)

The same result can be had by working with a matrix representation
of T, say with respect to the standard ordered basis of R^3.  In
that case the adjoint T' corresponds to the matrix transpose:

       / 0 1 0 \              / 0 0 0 \
T |--> | 0 0 2 |  and T' |--> | 1 0 0 | 
       \ 0 0 0 /              \ 0 2 0 /

so that T*T = T' ° T corresponds to the product of their matrix
representations:

         / 0 0 0 \
T*T |--> | 0 1 0 |
         \ 0 0 4 /

In any case here the computation of sqrt( T*T ) can be done by
inspection:

sqrt( T*T )(x,y,z) = (0,y,2z)

so that sqrt( T*T ) "squared" gives T*T as above.  Note that the
matrix representation of T*T is not just symmetric but already 
diagonal if done with respect to the standard basis:

         / 0 0 0 \                       / 0 0 0 \
T*T |--> | 0 1 0 | , so sqrt( T*T ) |--> | 0 1 0 |
         \ 0 0 4 /                       \ 0 0 2 /

and sqrt( T*T ) amounts to taking the principal square root of each
diagonal entry of the matrix representation of T*T in this case.

Note that the matrix representations of both T*T and sqrt( T*T ) are
nonnegative diagonal matrices.  This corresponds to the property of
these operators being positive semi-definite and symmetric.
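Part (a) can be verified with the matrix representations above. A sketch assuming Python with numpy:

```python
import numpy as np

# Matrix of T with respect to the standard basis, and the claimed sqrt( T*T ).
T = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])
rt = np.diag([0., 1., 2.])    # sqrt( T*T )(x,y,z) = (0,y,2z)

TstarT = T.T @ T              # T*T = T' o T; the adjoint is the transpose here

assert np.allclose(TstarT, np.diag([0., 1., 4.]))  # matches the text: (0,y,4z)
assert np.allclose(rt @ rt, TstarT)                # rt squared recovers T*T
```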


(b)  Compute orthonormal bases for each of:

V_1 = range(sqrt( T*T )),

V_2 = range(sqrt( T*T ))^perp = orthogonal complement of V_1,

V_3 = range(T), and

V_4 = range(T)^perp = orthogonal complement of V_3.

Answers:  Note that the range of sqrt( T*T ), like the range of T*T,
consists of vectors whose first component is zero.  So we can just
use a subset of the standard basis of R^3 omitting the first vector:

B_1 = orthonormal basis for V_1 = {(0,1,0),(0,0,1)}

and for the orthogonal complement:

B_2 = orthonormal basis for V_2 = {(1,0,0)}

The range of T on the other hand consists of vectors whose third
component is zero. So:

B_3 = orthonormal basis for V_3 = {(1,0,0),(0,1,0)}

and for the orthogonal complement:

B_4 = orthonormal basis for V_4 = {(0,0,1)}


(c)  Compute an isometry S for which T = S ° sqrt( T*T ).

Answer:  From a comparison of the operator definitions:

T(x,y,z) = (y,2z,0)

sqrt( T*T )(x,y,z) = (0,y,2z)

we see that one possible choice for S would be coordinate rotation:

S(x,y,z) = (y,z,x)

which also happens to be orientation preserving.  Verify:

S ° sqrt( T*T )(x,y,z) = S(0,y,2z) = (y,2z,0) = T(x,y,z)

In this case the inverse of S, rotating the coordinates back in the
opposite direction, is also S', the adjoint of S.  In terms of their
matrix representations with respect to the standard ordered basis of R^3:

       / 0 1 0 \               / 0 0 1 \
S |--> | 0 0 1 |  and  S' |--> | 1 0 0 |
       \ 1 0 0 /               \ 0 1 0 /
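Part (c) is likewise quick to verify in matrix form: S reproduces T when composed with sqrt( T*T ), and S is orthogonal (an isometry). A sketch assuming Python with numpy:

```python
import numpy as np

T   = np.array([[0., 1., 0.],
                [0., 0., 2.],
                [0., 0., 0.]])
rtT = np.diag([0., 1., 2.])        # sqrt( T*T ) from part (a)
S   = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [1., 0., 0.]])     # the coordinate rotation S(x,y,z) = (y,z,x)

assert np.allclose(S @ rtT, T)            # T = S o sqrt( T*T )
assert np.allclose(S @ S.T, np.eye(3))    # S' = S^(-1), so S is an isometry
assert np.isclose(np.linalg.det(S), 1.0)  # and orientation preserving
```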

(d)  Compute the singular value decomposition of T.

Answer:  Since sqrt( T*T ) is already diagonalized, one simple answer
might be to write, using what we have already discussed above:

T = S ° sqrt( T*T ) ° I

However it is customary when writing the singular value decomposition
to put the diagonal entries (singular values) in decreasing order, i.e.:

/ 2 0 0 \               / 0 0 0 \
| 0 1 0 |  rather than  | 0 1 0 |
\ 0 0 0 /               \ 0 0 2 /

The distinction is difficult to make when one is using only the
operator notation to discuss the decomposition, but in terms of the
matrix representations we would multiply on the left and right by:

    / 0 0 1 \
U = | 0 1 0 |
    \ 1 0 0 /

which corresponds to the (orientation reversing) isometry that swaps
first and third coordinates.  U is symmetric and its own inverse.

The conventional ordering of singular values would then be given by:

/ 0 1 0 \      / 0 1 0 \     / 2 0 0 \
| 0 0 2 |  =   | 0 0 1 |  U  | 0 1 0 |  U
\ 0 0 0 /      \ 1 0 0 /     \ 0 0 0 /

               / 0 1 0 \  / 2 0 0 \
           =   | 1 0 0 |  | 0 1 0 |  U
               \ 0 0 1 /  \ 0 0 0 /

In other words the standard basis matrix representation for T can be
written as V D U, where U,V are orthogonal matrices:

    / 0 1 0 \
V = | 1 0 0 |
    \ 0 0 1 /

and diagonal matrix D puts the singular values in descending order.
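The factorization T = V D U can be checked against a library SVD. A sketch assuming Python with numpy; note numpy's `svd` returns the singular values in the same descending order used here:

```python
import numpy as np

T = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [0., 0., 0.]])
V = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
D = np.diag([2., 1., 0.])
U = np.array([[0., 0., 1.],
              [0., 1., 0.],
              [1., 0., 0.]])       # swaps first and third coordinates

assert np.allclose(V @ D @ U, T)                       # T = V D U as claimed
assert np.allclose(V @ V.T, np.eye(3))                 # V is orthogonal
assert np.allclose(U @ U.T, np.eye(3))                 # U is orthogonal
assert np.allclose(np.linalg.svd(T)[1], [2., 1., 0.])  # singular values agree
```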


* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

            / 0 0 1 \
3.  Let A = | 1 0 0 | .
            \ 0 1 0 /

(a)  Determine the Jordan form for A over the reals R.

Answer:  For details about the Jordan form over the reals, see here:

http://www.numbertheory.org/courses/MP274/realjord.pdf

In this case the permutation matrix A obviously satisfies A^3 = I 
because the permutation of rows induced by A is a cycle of order 3.
One can also show that x^3 - 1 is A's characteristic polynomial:

det(xI - A) = x^3 - 1

Hence A has one real eigenvalue (characteristic root) x = 1, and a
pair of complex conjugate roots from the remaining factor, namely:

x^3 - 1 = (x - 1)(x^2 + x + 1)

x^2 + x + 1 = 0

x = -(1/2) + i sqrt(3)/2, -(1/2) - i sqrt(3)/2

A corresponding real Jordan form for A is then:


/ 1   0   0 \
| 0   a   b |  where a = -(1/2) and b = sqrt(3)/2.
\ 0  -b   a /


(b)  Determine the Jordan form for A over the complex numbers C.

As the analysis above shows, A has three distinct eigenvalues over
the complex numbers, so the Jordan form of A in that context is:

/ 1  0  0 \
| 0  z  0 |   where z = -(1/2) + i sqrt(3)/2,
\ 0  0  z'/

and  z' = -(1/2) - i sqrt(3)/2 is the complex conjugate of z.
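Both parts can be checked numerically: A cubes to the identity, its characteristic polynomial is x^3 - 1, and the real Jordan form above has the same characteristic polynomial. A sketch assuming Python with numpy:

```python
import numpy as np

A = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

# A is a 3-cycle permutation matrix, so A^3 = I.
assert np.allclose(np.linalg.matrix_power(A, 3), np.eye(3))

# Characteristic polynomial x^3 - 1, with roots the three cube roots of
# unity: 1 and -(1/2) +/- i sqrt(3)/2.
assert np.allclose(np.poly(A), [1., 0., 0., -1.])

# The real Jordan form packs the conjugate pair into a 2x2 rotation block
# and has the same characteristic polynomial.
a, b = -0.5, np.sqrt(3) / 2
J = np.array([[1., 0., 0.],
              [0., a,  b ],
              [0., -b, a ]])
assert np.allclose(np.poly(J), [1., 0., 0., -1.])
```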


Other Links of Interest:

If you want to do symbolic integration, Wolfram supplies an online
calculator for indefinite integrals here:

http://www.integrals.com/


Search Strategy

Keywords:  "real Jordan form"
http://www.google.com/search?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=%22real+Jordan+form%22&btnG=Google+Search

Request for Answer Clarification by joannehuang-ga on 06 May 2003 20:27 PDT
To Mathtalk-ga:
Thanks for your answer. That really helps.
I have one more question; if you would like to answer it, I will add
$15 to the tips. But if you prefer not to, I will do the rating right
now.

In each of the following parts of this problem a matrix A is given.
For that A let T be the element of L(V) for which [T]_S = A, where S
is the standard ordered basis of R^4. In each case give the following:

(i) the minimal and characteristic polynomials; (ii) all the
eigenvalues, and for each eigenvalue X, (iii) all the eigenvectors
associated with X, and then (iv) all the generalized eigenvectors
associated with the eigenvalue X.

(a)
/ 2 1 0 0 \
| 0 2 0 0 |
| 0 0 2 0 |
\ 0 0 0 2 / 

(b)
/ 2 1 0 0 \
| 0 2 0 0 |
| 0 0 2 1 |
\ 0 0 0 2 /

(c)
/ 2 1 0 0 \
| 0 2 1 0 |
| 0 0 2 1 |
\ 0 0 0 2 /

(d)
/ 2 1 0 0 \
| 0 2 0 0 |
| 0 0 3 0 |
\ 0 0 0 4 / 
(e)
/ 2 0 0 0 \
| 0 2 0 0 |
| 0 0 3 0 |
\ 0 0 0 4 / 



Thanks a lot!

Clarification of Answer by mathtalk-ga on 06 May 2003 21:50 PDT
Hi, joannehuang-ga:

I will use row vectors (x,y,z,w) to denote elements of R^4, even
though the representation of T by matrix A with respect to a 
standard basis is by convention a multiplication by A on the left
of a vector Au = v.  To reconcile this inconsistency (since u,v
are then column vectors rather than row vectors), I recall the use
of ' to denote transposition (thus turning rows into columns).

Because the matrices A are all essentially in Jordan canonical form,
a basis of the eigenvectors and generalized eigenvectors can be
chosen to be "standard" basis vectors.  The question seems to be a
little more than simply exhibiting a basis of (generalized) eigen-
vectors, as it asks for "all the eigenvectors" (resp. all the
generalized eigenvectors) associated with an eigenvalue.  However
for convenience we will define these "standard" basis vectors:

e_1 = (1,0,0,0)'

e_2 = (0,1,0,0)'

e_3 = (0,0,1,0)'

e_4 = (0,0,0,1)'

 
(a) 

/ 2 1 0 0 \  
| 0 2 0 0 | 
| 0 0 2 0 | 
\ 0 0 0 2 /  

The minimal polynomial is (x - 2)^2; the characteristic polynomial
is (x - 2)^4.  The only eigenvalue is 2.  The (true) eigenvectors
associated with eigenvalue 2 are the non-zero linear combinations
of e_1, e_3, and e_4, i.e. (a,0,b,c)' where not all a,b,c are zero.

The generalized eigenvectors are the non-zero vectors u such that:

(A - 2I)^k u = 0 for some k > 1

Here e_2 is a particular generalized eigenvector, but more generally
any vector in R^4 outside the span of { e_1, e_3, e_4 } is in this
case a generalized eigenvector, i.e. any vector (a,b,c,d)' where b
is not zero.
 
(b) 

/ 2 1 0 0 \ 
| 0 2 0 0 | 
| 0 0 2 1 | 
\ 0 0 0 2 / 

The minimal polynomial is again (x - 2)^2; the characteristic
polynomial (x - 2)^4.  Again the only eigenvalue is 2.

The eigenvectors are the non-zero vectors spanned by { e_1, e_3 },
i.e. vectors (a,0,b,0)' where not both a,b are zero.  Generalized
eigenvectors are non-zero vectors outside the span of { e_1, e_3 },
i.e. vectors (a,b,c,d)' where not both b,d are zero.
 
(c) 

/ 2 1 0 0 \ 
| 0 2 1 0 | 
| 0 0 2 1 | 
\ 0 0 0 2 / 

The minimal polynomial and characteristic polynomial are both
(x - 2)^4.  Again the only eigenvalue is 2.

The only eigenvectors are non-zero multiples of e_1, i.e. vectors
(a,0,0,0)' where a is non-zero.  All the other non-zero vectors are
generalized eigenvectors, i.e. (a,b,c,d)' where not all b,c,d are
zero.
 
(d) 

/ 2 1 0 0 \ 
| 0 2 0 0 | 
| 0 0 3 0 | 
\ 0 0 0 4 / 

The minimal polynomial and characteristic polynomial are both
(x - 2)^2 (x - 3) (x - 4).  There are three eigenvalues 2, 3, 4.

The eigenvalue 2 has as eigenvectors all non-zero multiples of
e_1, i.e. vectors (a,0,0,0)' where a is not zero.  It has for
generalized eigenvectors the rest of the non-zero vectors spanned
by { e_1, e_2 }, i.e. vectors (a,b,0,0)' where b is not zero.

The eigenvalue 3 has as eigenvectors all non-zero multiples of
e_3, i.e. vectors (0,0,a,0)' where a is not zero.  There are no
additional generalized eigenvectors associated with eigenvalue 3.

The eigenvalue 4 has as eigenvectors all non-zero multiples of
e_4, i.e. vectors (0,0,0,a)' where a is not zero.  There are no
additional generalized eigenvectors associated with eigenvalue 4.

(e) 

/ 2 0 0 0 \ 
| 0 2 0 0 | 
| 0 0 3 0 | 
\ 0 0 0 4 /  
 
The minimal polynomial is (x - 2)(x - 3)(x - 4).  The characteristic
polynomial is (x - 2)^2 (x - 3)(x - 4).  Again there are three
eigenvalues 2,3,4.

The eigenvalue 2 has a two-dimensional "space" of eigenvectors, the
non-zero linear combinations of e_1 and e_2, i.e. vectors (a,b,0,0)'
where not both a,b are zero.  There are no additional generalized
eigenvectors associated with eigenvalue 2.

The eigenvalue 3 has just a one-dimensional "space" of eigenvectors,
the non-zero multiples of e_3, i.e. vectors (0,0,a,0)' where a is
not zero.  There are no additional generalized eigenvectors for the
eigenvalue 3.

The eigenvalue 4 has just a one-dimensional "space" of eigenvectors,
the non-zero multiples of e_4, i.e. vectors (0,0,0,a)' where a is 
not zero.  There are no additional generalized eigenvectors for the
eigenvalue 4.
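The minimal-polynomial claims are easy to verify directly, since a polynomial is (a multiple of) the minimal polynomial exactly when it annihilates A while no proper divisor does. A sketch for cases (a) and (e), assuming Python with numpy:

```python
import numpy as np

I = np.eye(4)
Aa = np.array([[2., 1., 0., 0.],     # case (a)
               [0., 2., 0., 0.],
               [0., 0., 2., 0.],
               [0., 0., 0., 2.]])
Ae = np.diag([2., 2., 3., 4.])       # case (e)

# (a): minimal polynomial (x - 2)^2 -- (A - 2I) != 0 but (A - 2I)^2 = 0.
assert not np.allclose(Aa - 2 * I, 0)
assert np.allclose((Aa - 2 * I) @ (Aa - 2 * I), 0)

# (e): minimal polynomial (x - 2)(x - 3)(x - 4) annihilates A, and no
# factor can be dropped since 2, 3, 4 all occur as eigenvalues.
assert np.allclose((Ae - 2 * I) @ (Ae - 3 * I) @ (Ae - 4 * I), 0)
assert not np.allclose((Ae - 2 * I) @ (Ae - 3 * I), 0)
```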
joannehuang-ga rated this answer: 5 out of 5 stars and gave an additional tip of $18.00:
That's really helpful and fast. I learned a lot.

Comments  
Subject: Re: To :livioflores-ga
From: mathtalk-ga on 30 Apr 2003 06:36 PDT
 
Hi, joannehuang-ga:

I know you have posted several questions before in Science > Math, and
I appreciate your participation.  You are certainly welcome to post
questions for a specific researcher, as you have here for
livioflores-ga, by including the GA Researchers name in the subject
title.

However GA Researchers are asked not to respond to questions that
contain or ask for personal information (such as email addresses):

http://answers.google.com/answers/faq.html#postemail

Good luck with your linear algebra studies!

regards, mathtalk-ga
Subject: Re: To :livioflores-ga
From: mathtalk-ga on 02 May 2003 11:16 PDT
 
Hi, Joannehuang-ga and Livioflores-ga:

Perhaps I'm sticking my nose in where it's unwanted, but here are a
couple of hints on this question:

1) The proposition isn't really valid for k=0. (Why?)

2) For k > 0 consider the minimal polynomial of S.

regards, mathtalk-ga
Subject: Re: To :livioflores-ga
From: joannehuang-ga on 06 May 2003 09:33 PDT
 
To Mathtalk-ga:
Thank you! I will do the rating ASAP.

Important Disclaimer: Answers and comments provided on Google Answers are general information, and are not intended to substitute for informed professional medical, psychiatric, psychological, tax, legal, investment, accounting, or other professional advice. Google does not endorse, and expressly disclaims liability for any product, manufacturer, distributor, service or service provider mentioned or any opinion expressed in answers or comments. Please read carefully the Google Answers Terms of Service.
