Subject: Matrix algorithms
Category: Science > Math
Asked by: dvd03-ga
List Price: $75.00
Posted: 10 Aug 2004 07:45 PDT
Expires: 09 Sep 2004 07:45 PDT
Question ID: 385860
I have asked a question at http://answers.google.com/answers/threadview?id=379557 , but my investigations now probably warrant a new question, and I'd also like clarification on some of the points made so far.

Consider the algorithm (1):

r_(n+1) = c*r_n*M + (1-c)*v

where: r_n is a row vector; c is a scalar such that 0 < c < 1; M is a row-stochastic matrix; and v is a probability vector. This was shown to converge, since the spectral radius of c*M is strictly less than one. Compare it with the Markov chain (2): r_(n+1) = r_n*A, with transition matrix A = c*M + (1-c)*e^T*v (e being the all-ones row vector); (2) certainly converges to its stationary distribution. Now consider the variant (3):

r_(n+1) = c*r_n*M + (1-c)*s*v

where s is a scalar such that s > 0; and the other variables are as before. Can we claim that (3) converges (perhaps, again, due to spectral radius)? And, if we can, how does the limit compare with the stationary distribution of (2)? Many thanks.

Clarification of Question by dvd03-ga on 20 Aug 2004 06:45 PDT

The question above is asked with a view to understanding the relation between Page & Brin's precise formulation of PageRank (the personalisation vector of which appears to be a scalar factor too large, given the supposed Markov underpinnings - Page and Brin's limit vector does NOT necessarily have 1-norm equal to one) and a formulation which does fit in with the claimed Markov framework (multiplying the personalisation vector by 1/n). Thanking you in anticipation.

NB When I write "certainly converges", I'm assuming that A is irreducible and aperiodic.

Request for Question Clarification by mathtalk-ga on 09 Sep 2004 05:56 PDT

Hi, dvd03-ga: I was unable to find a "good" connection between the solution to the second problem:

r_(n+1) = c*r_n*M + (1-c)*s*v

and that of the eigenvalue problem. Existence and uniqueness of the solution to the above were established, and at one point I was hopeful that solving it would shed light on the equilibrium vector of the Markov chain. However, the information I found from it can be obtained without the "expense" of solving the large system r*(I - c*M) = (1-c)*s*v.

best wishes, mathtalk-ga
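The relationship asked about in (1) and (2) can be sketched numerically. Below is a minimal Python check with made-up values for c, M, and v (none come from the thread), assuming the PageRank-style transition matrix A = c*M + (1-c)*e^T*v: the fixed point of iteration (1) coincides with the stationary distribution of the chain (2).

```python
import numpy as np

# Hypothetical 3-state example; c, M, v are made-up illustrative values.
c = 0.85
M = np.array([[0.0, 0.5, 0.5],    # row-stochastic link matrix
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
v = np.array([0.2, 0.3, 0.5])     # personalisation (probability) row vector

# Iteration (1): r_(n+1) = c*r_n*M + (1-c)*v
r = np.full(3, 1 / 3)
for _ in range(200):
    r = c * r @ M + (1 - c) * v

# Chain (2), assuming transition matrix A = c*M + (1-c)*e^T*v.
A = c * M + (1 - c) * np.outer(np.ones(3), v)
eigvals, eigvecs = np.linalg.eig(A.T)           # left eigenvectors of A
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()                              # stationary distribution

print(np.allclose(r, pi))                       # True: the two limits coincide
```

Since r_n*e^T = 1 is preserved by (1) here, r_n*A = c*r_n*M + (1-c)*v, which is why (1) and (2) agree; the question is what survives of this when v is rescaled as in (3).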
 There is no answer at this time.

 Subject: Re: Matrix algorithms From: kprema-ga on 16 Aug 2004 20:24 PDT
Hi: Let us write out an explicit eqn for r(n).

r(1) = c*M*r(0) + (1-c)*v;
r(2) = c^2*M^2*r(0) + c*(1-c)*M*v + (1-c)*v;
r(3) = c^3*M^3*r(0) + c^2*(1-c)*M^2*v + c*(1-c)*M*v + (1-c)*v;

and so on. In fact, it is not too hard to show that the general eqn is:

r(n) = c^n*M^n*r(0) + (1-c)*\sum_{i=0}^{n-1} c^i*M^i*v.

Here, \sum_{i=0}^{n-1} denotes the sum of terms from i=0 to i=n-1 (I am using LaTeX commands---I am sure you are familiar with them).

Now, let us look at this expression for r(n). Clearly, as n tends to infinity, c^n*M^n*r(0) tends to zero because c*M has a spectral radius that is strictly less than unity. What about the summation term? Well, we know that \sum_{i=0}^{\infty} (c*M)^i = (I-c*M)^{-1}. Note that this inverse is guaranteed to exist because the spectral radius of c*M is strictly less than unity. In essence, I believe the iteration converges to

\lim_{n\to\infty} r(n) = (1-c)*(I-c*M)^{-1}*v.

Of course, there is always the possibility that all of what I have said above is utter nonsense. kprema
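kprema's closed form is easy to check numerically. A minimal sketch with made-up values, following kprema's column-vector convention r(n+1) = c*M*r(n) + (1-c)*v (so M here is column-stochastic):

```python
import numpy as np

# Made-up values; column convention, so each column of M sums to 1.
c = 0.85
M = np.array([[0.0, 0.3, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.7, 0.0]])
v = np.array([0.2, 0.3, 0.5])

r = np.zeros(3)               # any r(0) works; the limit is independent of it
for _ in range(200):
    r = c * M @ r + (1 - c) * v

# Closed form: lim r(n) = (1-c)*(I - c*M)^{-1}*v
closed_form = (1 - c) * np.linalg.solve(np.eye(3) - c * M, v)

print(np.allclose(r, closed_form))   # True
```

Solving the linear system (rather than forming the inverse explicitly) is the usual way to evaluate (I-c*M)^{-1}*v in practice.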
 Subject: Re: Matrix algorithms From: kprema-ga on 17 Aug 2004 07:21 PDT
Simply continuing on my previous comment: I do now realize that this discussion has been going on for a while (I am a newbie at this). So, it is quite possible that the converging solution I provided has already been given (although I am perhaps using a more direct method).

Nevertheless, let me comment on your (3), i.e.,

r_(n+1) = c*r_n*M + (1-c)*s*v

where s is a scalar such that s > 0, and the other variables are as before. If you follow the development I previously indicated, I think one gets the solution to (3) as

\lim_{n\to\infty} r(n) = (1-c)*s*(I-c*M)^{-1}*v.

Am I right? kprema
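Again with made-up values, one can check numerically that the limit of (3) is simply s times the limit of (1); in particular, its entries sum to s rather than to 1, which bears on the normalisation issue raised in the question's clarification.

```python
import numpy as np

# Made-up values; row convention as in (3): r_(n+1) = c*r_n*M + (1-c)*s*v.
c, s = 0.85, 2.5
M = np.array([[0.0, 0.5, 0.5],    # row-stochastic
              [0.3, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
v = np.array([0.2, 0.3, 0.5])     # probability row vector

r = np.zeros(3)
for _ in range(200):
    r = c * r @ M + (1 - c) * s * v

limit = (1 - c) * s * v @ np.linalg.inv(np.eye(3) - c * M)
print(np.allclose(r, limit))      # True
print(r.sum())                    # approximately s (2.5), not 1
```

The sum equals s because M*e^T = e^T gives (I-c*M)^{-1}*e^T = e^T/(1-c), so the factor (1-c) cancels and only s remains.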
 Subject: Re: Matrix algorithms From: mathtalk-ga on 17 Aug 2004 09:16 PDT
Your analysis is correct (but notice the direction of the row-matrix multiplication). There are two issues. One is that _if_ the iteration converges, the limit r would have to satisfy:

r = c*r*M + (1-c)*s*v

whence:

r*(I - c*M) = (1-c)*s*v
r = (1-c)*s*v*(I - c*M)^{-1}

The other issue is whether the iteration converges. Apart from the direction of the matrix multiplication, your inductive formula for r_n is correct and converges as a consequence of the spectral radius being less than 1.

Note that (I - c*M) could be invertible (and hence used to define r above) without the spectral radius being less than 1 (we only need that 1/c doesn't belong to the eigenvalues or "spectrum" of M). But, as your analysis shows, the spectral radius plays a key role in convergence, because without it either the term c^n*r_0*M^n or the matrix series might grow unboundedly.

regards, mathtalk-ga
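mathtalk-ga's aside about invertibility versus convergence can be illustrated by dropping the stochastic assumption on M (a contrived 2x2 example, not from the thread): I - c*M is invertible, so the fixed-point equation has a unique solution, yet the iteration diverges because the spectral radius of c*M exceeds 1.

```python
import numpy as np

# Contrived non-stochastic M, so rho(c*M) > 1 while I - c*M stays invertible.
c, s = 0.85, 1.0
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])        # eigenvalues 2 and 0.5; 1/c is not among them
v = np.array([1.0, 1.0])

# The fixed-point equation r = c*r*M + (1-c)*s*v still has a unique solution:
r_fixed = (1 - c) * s * v @ np.linalg.inv(np.eye(2) - c * M)
assert np.allclose(r_fixed, c * r_fixed @ M + (1 - c) * s * v)

# ...but the iteration diverges, since rho(c*M) = 1.7 > 1:
r = np.zeros(2)
for _ in range(100):
    r = c * r @ M + (1 - c) * s * v
print(np.linalg.norm(r))           # astronomically large
```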
 Subject: kprema From: dvd03-ga on 20 Aug 2004 08:09 PDT
Thank you for your thoughts. Just a quick question: how do we know that

\sum_{i=0}^{\infty} (c*M)^i = (I-c*M)^{-1} ?

Thanking you in anticipation.
 Subject: Re: Matrix algorithms From: kprema-ga on 20 Aug 2004 10:10 PDT
Hi dvd03: This is the matrix equivalent of the "geometric series." I will use the same technique to prove it. Let

(1) S_N = \sum_{i=0}^{N} A^i = I + A + A^2 + ... + A^N,

where A is a square matrix and I is the corresponding identity matrix. Then

(2) A*S_N = A + A^2 + A^3 + ... + A^N + A^{N+1}.

Subtract (2) from (1):

(3) (I-A)*S_N = I - A^{N+1}.

So, if A has no eigenvalue at 1, we get

(4) S_N = inv(I-A)*(I-A^{N+1}).

What happens when N tends to infinity? Well, if the absolute values of all eigenvalues of A are strictly less than 1, we know that A^{N+1} vanishes in the limit. Then we get

(5) S_{\infty} = \sum_{i=0}^{\infty} A^i = inv(I-A).

That's basically it. Hope it helps. kprema
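A quick numerical sanity check of the matrix geometric series, with a made-up A whose eigenvalues (0.5 and 0.1) both have modulus below 1:

```python
import numpy as np

# Made-up A; its eigenvalues are 0.5 and 0.1, both of modulus < 1.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
N = 200

S = np.zeros_like(A)
P = np.eye(2)              # current power A^i, starting at A^0 = I
for _ in range(N + 1):     # accumulate S_N = I + A + ... + A^N
    S += P
    P = P @ A

# S_N matches inv(I - A) once A^{N+1} is negligible.
print(np.allclose(S, np.linalg.inv(np.eye(2) - A)))   # True
```

This mirrors step (4) above: the partial sum differs from inv(I-A) only by the factor (I - A^{N+1}), which tends to I.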
 Subject: Re: Matrix algorithms From: dvd03-ga on 21 Aug 2004 02:54 PDT
 Many thanks. You've been very helpful.