Q: Matrices ( Answered,   0 Comments )
 Question
Subject: Matrices | Category: Science > Math | Asked by: dooogle-ga | List Price: $25.00 | Posted: 13 Jul 2005 04:08 PDT | Expires: 12 Aug 2005 04:08 PDT | Question ID: 542966
Dear mathtalk. My question reads as follows: Let G be a matrix. Let x and y be column vectors. Define

    T(G) := (1/2) max_{i,j} SUM_k |G_{ik} - G_{jk}|

Then, try to prove:

    (max_i y_i - min_i y_i) <= T(G) (max_i x_i - min_i y_i)

(In the context, G happens to be stochastic and x and y are non-negative, but I don't think these properties are required for the above proof. Apparently, there is a proof of this question in E. Seneta's 1981 "Non-Negative Matrices and Markov Chains", but I don't have access to the book.) Many thanks. Dooogle

Request for Question Clarification by mathtalk-ga on 13 Jul 2005 06:45 PDT

Hi, dooogle-ga: Sorry, but I must be missing something here. G is a matrix and x, y are column vectors. It looks like T(G) is a scalar value depending only on G. But how then do we relate T(G) to a conclusion about x, y? Is Gx = y perhaps? regards, mathtalk-ga

Clarification of Question by dooogle-ga on 13 Jul 2005 07:02 PDT

Apologies! You're spot on: Gx = y.

Request for Question Clarification by mathtalk-ga on 14 Jul 2005 06:28 PDT

Okay, thanks! A couple more clarifications, if you don't mind. The inhomogeneity of terms on the right-hand side of the inequality is striking. One might have expected (max_i x_i - min_i x_i), so that the result says something about the "maximum variation" in column y being bounded by a scalar (dependent on G only) times the "maximum variation" in column x. My other question is whether by stochastic matrix you mean the row sums are 1 or the column sums are 1. I ask because many authors define it with row sums = 1, so that multiplication "on the left" xG preserves the interpretation of nonnegative rows with row sum 1 as probability distributions. regards, mathtalk-ga

Clarification of Question by dooogle-ga on 14 Jul 2005 07:59 PDT

Apologies again: the right-hand side ought indeed to read "max_i x_i - min_i x_i". As to the meaning of stochastic, the author is not clear, but I'd have thought row stochastic.
 Answer
Subject: Re: Matrices | Answered By: mathtalk-ga on 14 Jul 2005 21:49 PDT
Hi, dooogle-ga:

Let G be "stochastic" (a nonnegative matrix with row sums equal to 1), and let x, y be column vectors of equal length such that:

    Gx = y

We will show that:

    (max_i y_i - min_i y_i) <= T(G) (max_i x_i - min_i x_i)

where the components of x, y are x_i and y_i, respectively, and where:

    T(G) = (1/2) SUM_k max_{i,j} |g_{i,k} - g_{j,k}|

the indicated sum being taken over the columns of G of the maximum "deviation" between any two entries of G in each column.

Proof: First note that:

    max_i y_i - min_i y_i = max_{i,j} (y_i - y_j)

It suffices therefore to produce an estimate of the form:

    y_i - y_j <= T(G) (max_k x_k - min_k x_k)

because we can then take a maximum of the left-hand side over all i, j to obtain the desired result.

Now recall by the definition of matrix multiplication that:

    y_i - y_j = SUM_k g_{i,k} x_k - SUM_k g_{j,k} x_k
              = SUM_k (g_{i,k} - g_{j,k}) x_k

We have one small trick up our sleeves. Using the fact that the row sums of G are each equal to 1, we know that for any constant a:

    0 = SUM_k (g_{i,k} - g_{j,k}) * a

Thus we can subtract this from the previous equality, getting:

    y_i - y_j = SUM_k (g_{i,k} - g_{j,k}) * (x_k - a)

In particular we can estimate:

    (g_{i,k} - g_{j,k}) * (x_k - a) <= max_{i,j} |g_{i,k} - g_{j,k}| * |x_k - a|

Making the skillful choice of a = (1/2)(max_k x_k + min_k x_k) results in:

    |x_k - a| <= (1/2)(max_i x_i - min_i x_i)

for if x_k were above a, then:

    |x_k - a| = x_k - a <= (max_i x_i) - a
              = (1/2)(max_i x_i) - (1/2)(min_i x_i)
              = (1/2)(max_i x_i - min_i x_i)

and similarly if x_k were below a, then:

    |x_k - a| = a - x_k <= a - (min_i x_i)
              = (1/2)(max_i x_i) - (1/2)(min_i x_i)
              = (1/2)(max_i x_i - min_i x_i)

Putting these estimates together gives the desired inequality:

    y_i - y_j = SUM_k (g_{i,k} - g_{j,k}) * (x_k - a)
              = SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * |x_k - a|
              = SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * (1/2)(max_i x_i - min_i x_i)
              = (1/2) SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * (max_i x_i - min_i x_i)
              = T(G) (max_i x_i - min_i x_i)

QED
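The inequality just proved can be sanity-checked numerically. The following is a sketch (not part of the original answer, and assuming NumPy): it builds a random row-stochastic G, computes T(G) in the answer's sum-of-column-maxima form, and verifies the bound on the spreads of x and y = Gx.

```python
# Numerical sanity check of the bound (a sketch, not part of the
# original answer).  T(G) here uses the answer's form:
#   T(G) = (1/2) * SUM_k max_{i,j} |g_{i,k} - g_{j,k}|
import numpy as np

def T(G):
    # All pairwise row differences: shape (n, n, m); then per-column
    # max over the pair (i, j), summed over columns k.
    diffs = np.abs(G[:, None, :] - G[None, :, :])
    return 0.5 * diffs.max(axis=(0, 1)).sum()

rng = np.random.default_rng(0)
n = 5
G = rng.random((n, n))
G /= G.sum(axis=1, keepdims=True)   # normalize so row sums equal 1

x = rng.random(n)
y = G @ x

spread = lambda v: v.max() - v.min()
assert spread(y) <= T(G) * spread(x) + 1e-12
```

Rerunning with other seeds or sizes gives the same result, since the proof uses only the equality of the row sums of G.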
Careful analysis of the steps in this proof will show that we did not use any facts about G except that its row sums are equal. To see that the result cannot be proven for arbitrary square matrices, even if we assume the nonnegativity of x, y and G, consider:

Example: Let x = (1,0)', prime denoting transpose, and:

    G = / 4  0 \
        \ 0  1 /

so that y = Gx = (4,0)'. Now T(G) = (1/2)*(4+1) = 5/2, but:

    max_i x_i - min_i x_i = 1
    max_i y_i - min_i y_i = 4

The inequality 4 <= T(G) * 1 is therefore not valid in this case.

regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 14 Jul 2005 21:58 PDT

Oops, I dropped the inequality marks in moving the last few steps of the proof into final form. I should have written:

Putting these estimates together gives the desired inequality:

    y_i - y_j  = SUM_k (g_{i,k} - g_{j,k}) * (x_k - a)
              <= SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * |x_k - a|
              <= SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * (1/2)(max_i x_i - min_i x_i)
               = (1/2) SUM_k max_{i,j} |g_{i,k} - g_{j,k}| * (max_i x_i - min_i x_i)
               = T(G) (max_i x_i - min_i x_i)

Sorry for the confusion! regards, mathtalk-ga

Clarification of Answer by mathtalk-ga on 14 Jul 2005 22:53 PDT

As a more substantive correction, I noticed that I interchanged the order of summation and maximization in the way I defined T(G). You wrote:

    T(G) = (1/2) max_{i,j} SUM_k |g_{i,k} - g_{j,k}|

and I wrote:

    T(G) = (1/2) SUM_k max_{i,j} |g_{i,k} - g_{j,k}|

The proof is probably a little simpler with your definition.
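The counterexample above can also be checked directly in code (a sketch, not part of the original answer; assumes NumPy):

```python
# Verifying the counterexample: G is nonnegative but its row sums
# differ (4 and 1), and the spread bound fails.
import numpy as np

G = np.array([[4.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 0.0])
y = G @ x                       # y = (4, 0)'

# T(G) in the answer's sum-of-column-maxima form:
# column 1 gives |4 - 0| = 4, column 2 gives |0 - 1| = 1.
diffs = np.abs(G[:, None, :] - G[None, :, :])
TG = 0.5 * diffs.max(axis=(0, 1)).sum()     # (1/2)*(4 + 1) = 5/2

spread = lambda v: v.max() - v.min()
print(spread(y), TG * spread(x))            # prints 4.0 2.5
```

Since 4 > 5/2, the claimed inequality indeed fails once the equal-row-sums hypothesis is dropped.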
We would again write the difference of two components of y as the row & column products, and introduce the parameter a as before (because two rows of G have equal sums):

    y_i - y_j = SUM_k (g_{i,k} - g_{j,k}) * (x_k - a)

Estimate the right-hand side by the triangle inequality, and choose a to be halfway between the maximum and minimum entries of x, so that again:

    |x_k - a| <= (1/2)(max_i x_i - min_i x_i)

and we have the estimate, after moving the 1/2 outside the summation:

    y_i - y_j <= (1/2) (SUM_k |g_{i,k} - g_{j,k}|) * (max_k x_k - min_k x_k)

By leaving i, j free on both sides of the inequality, we can now pass to the maximum over i, j for both simultaneously:

    max_i y_i - min_i y_i <= (1/2) max_{i,j} (SUM_k |g_{i,k} - g_{j,k}|) * (max_k x_k - min_k x_k)
                           = T(G) * (max_i x_i - min_i x_i)

Note that I've freely changed the "bound" variable for minimum and maximum components of x between i and k for (hopefully) clarity of notation.

regards, mathtalk-ga
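The two forms of T(G) discussed above can be compared numerically (a sketch, not part of the original answer; assumes NumPy). The questioner's definition maximizes the row-difference 1-norm over row pairs; the answer's sums per-column maxima. Exchanging max and sum this way can only increase the value, so the questioner's coefficient is the tighter one, and the bound holds for it on a row-stochastic G:

```python
# Comparing the two T(G) definitions from the clarification.
import numpy as np

def T_question(G):
    # (1/2) max_{i,j} SUM_k |g_{i,k} - g_{j,k}|   (max of row-pair sums)
    diffs = np.abs(G[:, None, :] - G[None, :, :]).sum(axis=2)
    return 0.5 * diffs.max()

def T_answer(G):
    # (1/2) SUM_k max_{i,j} |g_{i,k} - g_{j,k}|   (sum of column maxima)
    diffs = np.abs(G[:, None, :] - G[None, :, :])
    return 0.5 * diffs.max(axis=(0, 1)).sum()

rng = np.random.default_rng(1)
G = rng.random((4, 4))
G /= G.sum(axis=1, keepdims=True)   # row-stochastic
x = rng.random(4)
y = G @ x

spread = lambda v: v.max() - v.min()
assert T_question(G) <= T_answer(G) + 1e-12        # max-of-sums <= sum-of-maxes
assert spread(y) <= T_question(G) * spread(x) + 1e-12
```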
 Comments
 There are no comments at this time.