Q: Probability in N dimensions ( Answered 5 out of 5 stars,   1 Comment )
Subject: Probability in N dimensions
Category: Science > Math
Asked by: grigri9-ga
List Price: $25.00
Posted: 19 Jan 2004 15:26 PST
Expires: 18 Feb 2004 15:26 PST
Question ID: 298154
First off, I'm a high school student in 10th grade. I have no problem
if the answer to this question is mathematical, so long as it's math
that I have learned. If unsure, feel free to look at a description of
my current level of math at [link]. I have also completed MQ1-4.

Anyways, I was recently reading Clifford Pickover's book "Surfing
Through Hyperspace: Understanding Higher Universes in Six Easy
Lessons". One of the questions near the end of the book was:

If an ant were placed in an infinitely long tube (a representation of
1-dimensional space) and were to execute a random walk either forward
or backward for eternity, what would be the probability that it would
return to its starting position?

The book gives an answer of probability 1, meaning it would definitely
return to its original position at some point. The book goes on to say
that the same thing would happen on an infinitely large piece of paper
where the ant could move in 4 directions. According to the book, the
3rd dimension is the first dimension where the ant has a chance of
getting lost and not returning to its original position. I believe it
said a .34 probability of getting lost, but I have to find my copy of
the book.

What I want to know is WHY the ant will definitely return to its
starting point in 1-D and 2-D space. I need a mathematical explanation.

Also, links to online and offline resources where I can continue to
research this topic would be greatly appreciated.

Request for Question Clarification by mathtalk-ga on 20 Jan 2004 13:03 PST
Hi, grigri9-ga:

Thanks for posting the interesting question.  How comfortable/familiar
are you with matrix multiplication?

thanks in advance,

Clarification of Question by grigri9-ga on 20 Jan 2004 16:41 PST
I am pretty comfortable with matrix multiplication. I've used it in
solving some linear algebra problems in class.

I found my copy of Surfing Through Hyperspace, here's the exact
wording of the problem

"this ant is executing an infinite random walk; that is, it walks
forever by moving randomly one step forward or one step back in the
tube. Assume the tube is infinitely long. What is the probability that
the random walk will eventually take the ant back to its starting
position?"

Also, according to the book, the probability of the ant returning to
its starting position in the 3-D world is 0.34.

The formula given for the probability of the ant returning to its
original starting position in higher dimensions is 1/(2n), where n is
the number of dimensions. 1/(2n) is also the probability of the ant
making it back on its second step, since according to him, if it
doesn't make it back on its second step then it is lost and won't
return.

Request for Question Clarification by mathtalk-ga on 20 Jan 2004 17:03 PST
Let me clarify one thing, and that is the 1/(2n) value.  This is not
the long term probability of returning to the starting point.  This is
the probability of moving in any one given direction (there are 2n
possibilities in n dimensions when the sign is taken into account) in
a classical random walk.

If you understand about matrix multiplication, I think I can give you
a pretty good idea of what is happening in dimensions one and two. 
There will be a little hand waving to explain why things begin to have
a positive probability of not returning in three dimensions and
higher.   Of course there's always at least a probability of returning
on the second step, i.e. 1/(2n) chance of reversing the direction then
chosen on the first step.

regards, mathtalk-ga

Request for Question Clarification by mathtalk-ga on 24 Jan 2004 12:15 PST
Hi, grigri9-ga:

Let's start with an oversimplified problem that introduces a bit of
matrix notation.  This will give you a chance to judge whether my
approach to solving the one- and two-dimensional classic random walk
problems is likely to be at an appropriate level.

Suppose that instead of being in an infinite tube, our Ant is trapped
in a tube one step long and closed at both ends.  As soon as a step
forward is taken, the end is reached and only a step backward is
possible at that point.  Similarly after taking a step backward, the
"beginning" is reached but since the tube is also closed here, only a
step forward is then possible.

The circumstances are trivial in that our Ant will inevitably switch
positions from one end of the tube to another, but it gives us the
opportunity to develop our notation with a concrete picture in mind. 
We will need to talk about two sorts of probabilities: one the
probability of the Ant being in a certain place at a certain time, and
the other the probability that the Ant will at some future time return
at least once to the starting point.

We will assume some initial conditions.  The point where the Ant
starts is called the origin and is to be labelled 0.  In order to
avoid confusion about whether the Ant returns to the origin 0 in the
future or merely was there at "time 0", we will go ahead and stipulate
that at time 1 the Ant moves to location 1 (the far end of the tube in
our oversimplified model).  Now we can "condition" all of our
probability calculations on these assumptions.

Let's use lowercase p's for the first sorts of probabilities:

p_k(t) = Pr( Ant is at location k at time t )

and uppercase P's for the second sorts of probabilities:

P_k(t) = Pr( in future Ant returns to origin | Ant at location k at time t )

Note that the second kind of probability is properly called a
conditional probability.  That is we are asking about the probability
that the Ant returns to the origin _given_ an assumption about where
the Ant is at time t.

We have to be mildly obsessive about whether or not the Ant has
already returned to the origin, since this is a "game over" state;
once the Ant returns to the origin, any subsequent travels by the Ant
can never undo the fact that it did return at least once.  In
probability theory (more precisely, the theory of Markov processes),
such a state is considered "absorbing".  Ants go in, but they don't
come out (of the state of having returned to the origin), even if the
Ant's location continues to change.
So by our assumptions the distribution of possible locations for the
Ant at various times is completely determined:

                      /  [0 1] if t is odd
[p_0(t) p_1(t)]  =   {
                      \  [1 0] if t is even

The answer to whether the Ant will eventually return to the origin is
also trivial with these assumptions, since here the Ant always returns
to the origin at t = 2.

But let's use the opportunity to push our notation a bit further. 
Suppose our Ant is like Erwin Schrödinger's famous Cat and has only a
probabilistic location.  That is, suppose that at time 0 the Ant has a
probability p of being at location 0 and probability q = 1 - p of
being at location 1.  So the probability distribution vector for the
Ant's future location looks like this:

                      /  [q p] if t is odd
[p_0(t) p_1(t)]  =   {
                      \  [p q] if t is even

The point we'd like to make here is that we can model the transitions
from one location to another by a matrix of probabilities M, called
the probability transition matrix:

[p q] M = [q p]

where M = [0 1]
          [1 0]

M has columns that encode the probability of going into a particular
location from each of the possible locations that the Ant may
currently occupy.  The general formula for the Ant's future likelihood
of locations is then:

[p_0(t) p_1(t)] = [p q] M^t

where the time t, i.e. the number of steps taken, appears as a
positive integer power of the probability transition matrix M.
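For concreteness, the iteration above can be checked with a few lines
of code (a sketch of mine; the helper name matvec and the sample value
p = 0.3 are arbitrary choices):

```python
# Two-state tube model: M is the 2x2 probability transition matrix.

def matvec(v, M):
    """Row vector v times matrix M (plain lists, no libraries)."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

M = [[0.0, 1.0],
     [1.0, 0.0]]

p = 0.3                      # arbitrary illustrative probability
dist = [p, 1.0 - p]          # [p_0(0) p_1(0)]
history = [dist]
for _ in range(4):
    dist = matvec(dist, M)
    history.append(dist)

# The distribution alternates [p q] -> [q p] -> [p q] -> ... as claimed.
```

The successive entries of history reproduce the piecewise formula: [q p]
at odd times, [p q] at even times.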

This formula is general enough to apply to our "real" problems, albeit
with a longer vector to represent the (infinite number of) additional
locations and a correspondingly bigger matrix.

Where in the simple case we had M^2 = I the identity matrix, the
matrix M for the "classic" random walk in one dimension (equal step
sizes) is now going to look like this:

 0   1/2   0    0    0    . . .
1/2   0   1/2   0    0    . . .
 0   1/2   0   1/2   0    . . .
 0    0   1/2   0
 0    0    0          .
 .                        .
 .                            .
and we will not have a simple identity for M^2, or any power of M for
that matter.

We will also switch our focus from the location probabilities p_k(t)
to the (conditional) probabilities of return P_k(t).  Note that in this
respect P_k(t), because it asks about the probability of a future
return, doesn't actually depend on the particular value of t.  If the
Ant is located at k at any time, the probability of a future return
given that current location is always the same regardless of the time
(though we must still allow a possible dependence on location k).

Please review these notes carefully and ask for clarifications of them
if you feel that would be helpful before we proceed to the Answer.

regards, mathtalk-ga

Clarification of Question by grigri9-ga on 27 Jan 2004 08:14 PST
Hi Mathtalk-ga,

I'm sorry that my response took so long but my ISP wasn't working and
I've been disconnected from the Internet for a while. What you wrote
was definitely at the right mathematical level for me but I do have a
few questions.

In the following formula you wrote:

P_k(t) = Pr( in future Ant returns to origin | Ant at location k at time t ) 

But isn't k a constant, since we're simply looking for the time at
which the ant will return to point 0?

Also, in this formula, shouldn't the location 1 be positive or
negative one, since the ant can go either forward or backward?

/ [0 1] if t is odd 
[p_0(t) p_1(t)] = {
\ [1 0] if t is even 

Lastly, in the matrix you gave, I would like to know why it is like this

 0     1/2    0     0    0  ...
1/2    0     1/2   0    0  ...
 0     1/2    0    1/2   0  ...
 0      0     1/2   0
 0      0      0           .
 .                            .
 .                              .

and not like this

 0     1/2    0    1/2   0  ...
1/2    0     1/2   0    0  ...
 0     1/2    0    1/2   0  ...
1/2    0     1/2   0
 0     1/2    0            .
 .                            .
 .                              .

Clarification of Question by grigri9-ga on 27 Jan 2004 08:17 PST
Whoops, the second formula I posted copied wrong; here's what I meant.

                   / [0 1] if t is odd 
[p_0(t) p_1(t)] = {
                   \ [1 0] if t is even 

Shouldn't the "1" in p_1(t) be a positive or negative one, since we
don't know if the ant will go forward or backward?

Clarification of Question by grigri9-ga on 27 Jan 2004 09:17 PST
Never mind about the matrix I put up; I understand that it obviously
doesn't work, since I'm saying, for example, that there is a 1/2
probability of the ant going from point 0 to point 3. But shouldn't
the transition matrix for our closed tube be like this, since we have
3 points in our closed tube and not two?
       -1  0  1
   -1 [ 0 1/2 0 ]
TO  0 [ 1  0  1 ]
    1 [ 0 1/2 0 ]

Also, in the first column of the probability transition matrix for the
infinite tube I only see the probability of moving from point 0 to
point 1 and not the probability of moving from point 0 to point -1.

The matrix you made is only showing probability in positive points. Is
that because we are assuming that the ant first moves to point 1 so
the only way to reach the negative points would mean you have to cross
the origin?

Request for Question Clarification by mathtalk-ga on 27 Jan 2004 14:55 PST
Actually I made the closed tube super-simple, with just the two
positions at each end.  Therefore the transition matrix for that was
2x2, with no "absorbing" state to stop the Ant going back and forth.

You are correct that the "infinite" matrix I proposed amounts only to
the transitions for the positive locations, since I'm now going to
treat the zero location as an "absorbing" state (once the Ant gets
there, we don't care what else is going to happen; The Return of the
Ant, or something of the sort).

The reason I only put up entries for the positive locations is for
simplicity.  The Ant's first step is in the positive direction or in
the negative direction.  Either way the Ant must stay on that side of
the origin so long as there is no return to the origin.  The case when
the initial step is onto the negative side is symmetric with the case
of stepping onto the positive side, so the probability of return is
equal for both cases.  We only need to solve one of them.
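This one-sided, absorbing-origin setup can also be explored
numerically (a sketch; the truncation length N, step count T, and
variable names are my own choices, not part of the original analysis):

```python
T = 1000                 # number of steps to iterate
N = T + 2                # truncate the "infinite" vector; safe since N exceeds any reachable location
p = [0.0] * N            # p[k] = probability the Ant is at positive location k
p[1] = 1.0               # condition on the first step landing at location 1
absorbed = 0.0           # probability mass that has returned to the origin so far

for _ in range(T):
    absorbed += 0.5 * p[1]   # half the mass at location 1 steps back into the origin
    # one application of the tridiagonal transition matrix (index 0 stays empty)
    p = [0.0] + [0.5 * p[k - 1] + 0.5 * p[k + 1] for k in range(1, N - 1)] + [0.0]

total = absorbed + sum(p)    # mass conservation: absorbed + still wandering = 1
```

After 1000 steps the absorbed share is already above 0.95, creeping
toward 1 just as the recurrence argument predicts.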

It sounds like your questions reflect an ability to understand the
math pretty well.  With your blessing I will attempt to provide you
with a satisfactory sketch of the proof that the Ant will always
return (we should really say, return with probability 1, which isn't
quite the same thing) in dimensions 1 and 2, and has a positive
probability of never returning in dimensions 3 and higher.

regards, mathtalk-ga

Clarification of Question by grigri9-ga on 27 Jan 2004 15:05 PST
Please go right ahead with the sketch, you most certainly have my
blessing and I'm sure I'll enjoy reading it.

Subject: Re: Probability in N dimensions
Answered By: mathtalk-ga on 29 Jan 2004 23:39 PST
Rated:5 out of 5 stars
Hi, grigri9-ga:

The original proof that a classic random walk (here typified as the
wanderings of an Ant) returns to its origination point with
probability 1 in one- and two-dimensions, but with probability less
than 1 in all higher dimensions, is due to George Pólya (1921).
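Before diving into the proof, a finite-horizon Monte Carlo experiment
can make the contrast concrete.  This is a sketch of my own (the trial
count, step cutoff, and seed are arbitrary); with a finite cutoff the
estimates are biased low, badly so in two dimensions where returns can
take a very long time, so only d = 1 and d = 3 are shown:

```python
import random

def estimate_return(d, trials=300, max_steps=3000, seed=1):
    """Fraction of capped random walks in d dimensions that revisit the origin."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(trials):
        pos = [0] * d
        for _ in range(max_steps):
            pos[rng.randrange(d)] += rng.choice((-1, 1))  # one of the 2d moves
            if all(x == 0 for x in pos):
                returned += 1
                break
    return returned / trials

est_1d = estimate_return(1)    # close to 1, as Polya's theorem predicts
est_3d = estimate_return(3)    # well below 1 (the true return probability is about 0.34)
```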

A relatively elementary treatment of Pólya's theorem is available in a
Carus Monograph by Peter G. Doyle and J. Laurie Snell:

[Random Walks and Electrical Networks (Carus Mathematical Monographs, No 22)]

This is a longer version of Peter Doyle's thesis:

[Application of Rayleigh's short-cut method to Pólya's recurrence problem]

and it also is freely available on the Web (under the terms of the GNU
General Public License).  See:

[arXiv:math - Random Walks and Electric Networks]

where you can download the monograph in Postscript, PDF, and various other formats.

There are exact formulas for the "probability of return" in dimensions
d > 2.  See for example the formulas here:

[Pólya's Random Walk Constants]

and in a bit greater depth here for d = 3:

[Mathsoft Constants: Pólya's Random Walk Constant]

References: G. Pólya, "Über eine Aufgabe der
Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz",
Math. Annalen 84 (1921), 149-160.


I recall an avant-garde sci-fi story in which the author told first
the ending of the story, then the middle, and finally its beginning.

The Reader may see this math proof as presented in a similar fashion,
but I'm simply trying to tie up all the loose threads properly.  I
leave the judgement of success on that up to you.

Our Ant's steps mark equally spaced points along a line.  Picking one
of these to be the origin (zero), we number the remaining points
forward as 1, 2, 3, etc., and those behind as -1, -2, -3, and so
forth.  Wherever the Ant is at one time, the next step goes forward or
backward, each with one half probability and with complete
independence from the Ant's prior travels.

Define the "little p" probability p(k;t) as the chance of the Ant
being in location k at time t.  Of course the actual value of this
probability will depend on our assumptions about where the Ant was
previously.  We might, for example, specify as one "initial condition"
that the Ant is in location k=0 at time t=0.  Let us regard this for
the moment, though, as merely one of many possible initial conditions
that can be specified, and look for the general principles which relate
"little p" at earlier times to later times.

"Big P" probability P(k;t) we define as a conditional probability:
given that the Ant is in location k at time t, what chance does it
subsequently have to reach the origin 0 (either _at_ time t or after)?

It will turn out that P(k;t) = 1 for all k and t.  Once we show that,
our proof is done.  Here's why:

If the Ant starts at the origin at time t = 0, then at time t = 1 the
Ant is either at k = 1 or at k = -1.  Those two cases are equally
likely, disjoint, and symmetric.  Combining them gives us the desired
result:
Pr( an Ant who starts at origin eventually returns there ) 

   = Pr( Ant goes to k=1 at t=1 ) * P(1;1)

   + Pr( Ant goes to k=-1 at t=1 ) * P(-1;1)

   = (0.5 * 1)  +  (0.5 * 1)
   = 1



In the middle of the proof we show P(k;t) = 1.  Here's what we will
know, once we get there, about P(k;t):

a) P(k;t) is a (conditional) probability, so it's between 0 and 1.

b) P(0;t) = 1 (as the Ant thus reaches the origin).

c) For nonzero k, we have these equations:

P(k;t) = 0.5 * ( P(k+1;t+1) + P(k-1;t+1) )

which will be explained shortly.

Think for a moment about how P(k;t) will depend on time t.  Is there
any difference in the Ant's ultimate prospect of reaching the origin
from location k depending on whether the Ant is there at time t or
some other time?

The answer is no, because whenever the Ant is at that location, an
eternity of future steps await.  Thus we drop the apparent dependence
of P(k;t) on time and write more simply that:

P(k) = 0.5 * ( P(k+1) + P(k-1) ) for all nonzero k,

and also that P(0) = 1.

Now the case when k < 0 is just the mirror image of when k > 0, and as
was discussed at one point, the Ant, having committed to one side of
the origin, cannot reach the other side without passing back to the
origin.  So we can stipulate by symmetry that for all k:

P(k) = P(-k)

and then focus solely on the cases k > 0.

Let's do a clever bit of algebra and rewrite for k > 0 the equation above as:

2 * P(k) = P(k-1) + P(k+1)

2 * P(k) - P(k-1) = P(k+1)

P(k) - P(k-1) = P(k+1) - P(k)

This tells us that the difference between P(k) at two consecutive
locations is always the same (by induction, if we want to be
thorough).  In other words the values P(0), P(1), P(2), etc. form an
arithmetic progression, because the consecutive terms have a common
difference.  If we call that common difference D, then by the formula
for an arithmetic progression and by the fact that the initial value
P(0) = 1,

P(k) = 1 + kD for all k > 0

What are the possible values for D?  If D > 0, then we'd have:

P(1) = 1 + D > 1

and that's not possible (for a probability).  Furthermore if D < 0,
then eventually for large enough k, we'd have:

P(k) = 1 + kD < 0

(namely when k > -1/D), and again it's impossible to have a negative probability.

So the only possible value for D is 0, and thus P(k) = P(k;t) = 1 for
all k and all t.


Great, we've worked our way back to the beginning.  Just a few points
to clear up and we'll be done with the recurrence proof for one
dimension.

Recall that the Ant moves forward (resp. backward) with probability
1/2.  So if the Ant is in location k at time t, then it has 1/2 chance
of being in location k+1 at time t+1 and the other 1/2 chance of being
in location k-1 then.

Look at it from the other perspective.  The chance that the Ant will
be in location k at time t+1 is equal to half the chance that it was
in location k+1 at time t, plus half the chance that it was in
location k-1 at that time.  In terms of an equation for the "little p"
probabilities:
p(k;t+1) = 0.5 * p(k+1;t) + 0.5 * p(k-1;t)

This relationship allows us to determine the probability distribution
for the Ant's location at time t+1 from the similar distribution of
location at time t, or by repetition to go forward in time say from
t=0 to any future step.
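The forward iteration is easy to carry out mechanically (a sketch; the
dict-based step function and names are mine).  As a check, at even
times t = 2n the value at the origin matches the binomial expression
C(2n,n)/2^(2n) that appears later in the one-dimensional shortcut:

```python
import math

def step(p):
    """One application of p(k;t+1) = 0.5*p(k+1;t) + 0.5*p(k-1;t)."""
    new = {}
    for k, mass in p.items():
        new[k + 1] = new.get(k + 1, 0.0) + 0.5 * mass
        new[k - 1] = new.get(k - 1, 0.0) + 0.5 * mass
    return new

p = {0: 1.0}                 # initial condition: Ant at the origin at time 0
probs_at_origin = []
for t in range(1, 21):
    p = step(p)
    probs_at_origin.append(p.get(0, 0.0))

# At odd times the Ant cannot be at the origin; at even times t = 2n
# the value equals C(2n,n)/2^(2n).
```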

Here a mathematician might introduce matrix multiplication as a way of
representing generally the "transition probabilities" from one
location to another.  The probability distributions for the Ant's
location would be expressed as a kind of vector v_t (infinitely long,
but with only finitely many nonzero entries and thus well behaved as
far as arithmetic goes).  The "Markov process" matrix M which would
multiply such a vector, giving:

v_{t+1} = M v_t

would itself then need infinitely many rows as well as columns (both
extending indefinitely in both directions!).  We sketched such a
possibility in the earlier Clarifications and mention it now only for
insight.  We can proceed to establish the necessary properties of
P(k;t) directly from the equations above.

Keeping in mind the definition of P(k;t), the probability of reaching
the origin (now or in the future) if the Ant is in location k at time
t, the specialness of case k = 0 (the Ant is already at the origin)
should be apparent.  There is no "circularity" of the proof in saying:

P(0;t) = 1

though it is natural I think to be suspicious of this point.  The
above merely states that if the Ant is in location 0, it will reach
the origin (then or later).  It is not about "returning" to the
origin, because as a conditional probability, it assumes only
knowledge of the Ant's location at time t.  Any prior position is
moot.  The issue of return to the origin will only be addressed in our
final argument, where we have the Ant go first either to location 1 or
-1. It is from there that the Ant (with probability 1) "returns" to
the origin at a future step.

But what about P(k;t) for nonzero k?  A little thought should convince
us that here P(k;t) must obey _almost_ the same relation as cited
above for the "little p" terms.  The change may seem subtle, but
consider the Ant's prospects for reaching the origin if in location k
at time t.  Since the Ant has "yet" to reach the origin for nonzero k,
we consider that the next step will take the Ant to location k+1 or
k-1, with probability 1/2 each.  Thus:

P(k;t) = 0.5 * P(k+1;t+1) + 0.5 * P(k-1;t+1)

To restate this observation in words, the chances that Ant will reach
the origin are one half times the chances of reaching from location
k+1 at the next time step plus one half times the chances of reaching
from location k-1 at the next time step.  Note that here the time t
appears to run in reverse to its appearances in the "little p"
equations, with t+1 on the right hand side of the equations rather
than on the left.

With this observation the equations c) needed in the middle of our
proof are established for nonzero k.  We have come "full circle" on
the one dimensional proof.


The two dimensional random walk is more delicate, but not beyond
giving a fairly deep insight based on secondary school mathematics. 
For example, where in the one dimensional case our analysis hinged on
an arithmetic progression, in the two dimensional case the facts can
be boiled down to the divergence of a harmonic series.

The evening has grown a bit late, and I must beg the Reader's
indulgence as I take some rest now.  I will return...


Clarification of Answer by mathtalk-ga on 31 Jan 2004 11:05 PST
Often it both simplifies a proof and improves our understanding of it
if we spot how what we first set out to prove is really only part of a
larger truth.

Here, for example, we set out to prove that if the Ant starts at the
origin, then the probability of returning to the origin is 1.

We wound up proving more than that, namely that no matter where the
Ant starts, the probability of reaching the origin is 1.  That's the
implication of P(k) = 1 for all k, since P(k) is simply the
conditional probability of reaching the origin given that the Ant is
(initially) in location k.

This makes sense because the Ant, even if blessed with memory and
"happy" at the return to its old stomping grounds, doesn't rely on
that memory at all in its random movements.  So in fact we are really
showing that the Ant's chances of going from point a to point b are
always 1 (given an indefinite amount of time).

But there's actually another useful way to "extend" the theorem
without proving anything more, and that's by way of counting how many
times the Ant will visit the origin.  Let's count the initial starting
point as visiting once.  Then the number of times the Ant visits the
origin is (with probability 1) at least 2.

But if the Ant makes 2 visits, why not 3?  or 4?  or infinitely many?

Bingo.  What we have actually proved (or almost so, perhaps a few more
words are needed) is that the Ant is expected to visit the origin an
infinite number of times.

And surprisingly, it may be a bit easier to prove that the Ant will
visit the origin infinitely often than just to prove it returns once. 
In any event it provides a path to settling the cases in higher
dimensions.


Let V be the random variable that counts the total number of times the
Ant visits the origin, given that the Ant starts there.  Define the
expected value of V:

v = E(V) = SUM n * Pr( V = n ) FOR n = 1 to oo

To connect this with our previous work, we need to define:

u = Pr( an Ant who starts at origin eventually returns there )

Although this was our central focus before, we somehow never defined
it as an unknown!  But once we have it, we can make this connection:

Pr( V = n ) = (1 - u) * u^(n-1)

That is, counting the initial starting point as one visit, the chance
of visiting exactly n times is the chance of returning n-1 times,
multiplied by the chance of then never returning, or in other words
"escaping".

Plugging this expression for Pr(V = n) back into the equation for v:

v = SUM n (1 - u) u^(n-1) FOR n = 1 to oo

Although this looks a bit ungainly, it can be simplified by way of
geometric series to this:

v = 1/(1 - u)

This relationship is actually valid in ALL dimensions.  If we look
back over this brief section, we made no mention of the number of
dimensions, and the computation generally establishes a reciprocal
relation between the expected number of times v that the Ant will
visit the origin and (1 - u), the probability of "escaping" (to
infinity and beyond, to borrow Buzz Lightyear's phrase).

Simply put, u = 1 if and only if v is infinite.

In one and two dimensions the value of v is infinite (and the value of
u is correspondingly 1).  In three or more dimensions the value of v
is finite, and the value of u is strictly less than 1.  Said another
way, in more than two dimensions there is a positive probability of
escape, (1 - u) > 0.
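The geometric-series simplification is easy to verify numerically (a
sketch; u = 0.66 is just an illustrative value, not the exact
three-dimensional return probability):

```python
# Truncated series for v = SUM n (1 - u) u^(n-1), compared to 1/(1 - u).
u = 0.66
v_series = sum(n * (1 - u) * u ** (n - 1) for n in range(1, 3000))
v_closed = 1 / (1 - u)
```

The closed form can be obtained by differentiating the geometric
series SUM u^n = 1/(1-u) term by term.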


Let's set about calculating v in two dimensions.  As promised earlier,
we will make our way to a harmonic series, which is known to diverge
to infinity, and this will prove v is infinite (therefore u = 1).

In case the divergence of the harmonic series is not one of the facts
that comes readily from the Reader's store of mathematical knowledge,
let's review.

"The" harmonic series is this:

1 + 1/2 + 1/3 + 1/4 + ... = SUM 1/n FOR n = 1 to oo

but we can properly apply the term harmonic series to the summation of
reciprocals of any (nonzero) arithmetic progression.

These always diverge, although there is an interesting and useful
variant which converges (conditionally), the alternating harmonic
series:

1 - 1/2 + 1/3 - 1/4 + ... = ln(2)

But let's keep our noses to the grindstone, shall we?  Why does "the"
harmonic series diverge?  (Other harmonic series can then be shown to
diverge by virtue of a "comparison" to this one.)

The partial sums of the harmonic series are monotonically increasing,
because all its terms are positive.  Therefore showing that a
subsequence of these partial sums tends to infinity shows also that
the entire series diverges to infinity (unconditionally).

The idea is simply to group terms like this:

1 + (1/2) + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ...

Each group of terms, when inwardly combined, contributes at least 1/2.
 Since there are infinitely many of these, the summation is infinite.
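The grouping argument can be checked numerically: the first 2^m terms
already sum to at least 1 + m/2 (a quick sketch, names mine):

```python
def harmonic(N):
    """Partial sum 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / n for n in range(1, N + 1))

m = 10
partial = harmonic(2 ** m)   # the first 1024 terms exceed 1 + 10/2 = 6
```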


We need an alternative formula for v to show that it is infinite,
because the one we have is already "spent" in providing the
relationship to u.

Let's revise the "little p" notation for the two-dimensional case and
employ two space coordinates j,k:

p(j,k;t) = Pr( Ant is at location (j,k) at time t )
Furthermore, to set the stage for our calculations, we assume:

p(0,0;0) = 1

p(j,k;0) = 0  if otherwise (j,k not both zero)

Now the alternative formula we want is this:

v = SUM p(0,0;n) FOR n = 1 to oo

Intuitively this says that, to the average count v of all the times
that the Ant visits the origin, each p(0,0;n) contributes a
probability at time t=n that the Ant would be visiting then.  This
intuition can be made rigorous by expressing the random variable V as
the sum of (infinitely many) random variables which count only whether
the Ant visits the origin at time t=n.  As v is the expected value of
V, so too are the probabilities p(0,0;n) the expected values of the
"boolean" function which is 1 if the Ant visits the origin at time t=n
and is 0 otherwise.


Because the Ant starts at the origin, it happens that after an odd
number of steps the Ant will always be someplace else.  If a return to
the origin is going to take place, it must occur in an even number of
steps.  We therefore narrow our focus to the cases t = 2n.

At each step there are 4 equally likely outcomes in two dimensions, so
to figure out p(0,0;2n) we need to sort out which of the 4^(2n)
outcomes would bring the Ant back to the origin.

The requirement is that there have to be an equal number of "up" steps
as "down" steps, and also an equal number of "left" steps as "right"
steps.

How many of the 4^(2n) outcomes satisfy these conditions?  A familiar
formula from combinations and permutations comes to our rescue.  If
all the steps were (somehow) distinguishable, then there would be
(2n)! arrangements of them.  But if we have k "up" steps, then there
must be k "down" steps as well as (n-k) "left" and (n-k) "right"
steps, and each different kind of step forms an interchangeable group.
 So the answer is:

SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n

Thus, since each distinct outcome has probability 1/4^(2n):

p(0,0;2n) = 4^(-2n) SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n

Amazingly this can be simplified to:

p(0,0;2n) = 4^(-2n) * C(2n,n)^2

where C(2n,n) means "combinations of 2n things taken n at a time":

C(2n,n) = (2n)!/ (n! n!)

I'll omit such simplification for now, as there's actually a cute
shortcut to obtain this in two dimensions from one, which I'll explain
later.
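The claimed simplification can at least be verified directly for small
n (a sketch using Python's math module):

```python
import math

def sum_form(n):
    """SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n, in exact integers."""
    return sum(math.factorial(2 * n)
               // (math.factorial(k) ** 2 * math.factorial(n - k) ** 2)
               for k in range(n + 1))

checks = [(sum_form(n), math.comb(2 * n, n) ** 2) for n in range(1, 11)]
# every pair agrees, e.g. n = 2 gives 36 on both sides
```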


All that remains is to connect the terms p(0,0;2n) with the terms of a
harmonic series.

To do this requires estimating C(2n,n), and for this we appeal to
Stirling's approximation of the factorial:

n! ~ SQRT(2*pi*n) e^(-n) n^n

[Stirling's Approximation]

[Stirling's Approximation (with error terms)]

A proof of Stirling's approximation lies in the realm of calculus, as
does a justification that it is "good enough" for our purposes.  But I
think it fits in reasonably well with a secondary school math
background, because bright students are naturally curious about how
rapidly the factorial function grows, and this estimate makes a
precise connection between the factorial and exponential growth.

Now let's tackle our estimate:

C(2n,n)/(4^n) = (2n)! / ( n! n! 4^n )

                SQRT(4*pi*n) e^(-2n) (2n)^(2n)
              ~ ------------------------------
                 (2*pi*n) e^(-2n) n^(2n) 4^n
              = 2 * SQRT(pi*n) / (2*pi*n)
              = 1/SQRT(pi*n)

Almost done now:

p(0,0;2n) = (C(2n,n)/(4^n))^2

          ~ ( 1/SQRT(pi*n) )^2
          = 1 / (pi*n)

Thus our summation for v would be "like" the harmonic series:

v = SUM p(0,0;2n) FOR n = 1 to oo

  ~ SUM 1 / (pi*n) FOR n = 1 to oo
  = (1/pi) SUM 1/n FOR n = 1 to oo

Recall from the earlier discussion that since v is infinite, it means
the probability of "escape" (1 - u) is zero, and thus the probability
of return u is 1.


I want to close our discussion of the two-dimensional random walk by
pointing out that, by a clever change of perspective, it is identical
to a matched pair of one-dimensional random walks.

The Reader has been visualizing (I hope) the lattice on which the Ant
wanders in two dimensions in a fashion like this:

    |   |   |
 ---+---+---+---
    |   |   |
 ---+---O---+---
    |   |   |
 ---+---+---+---
    |   |   |

where the origin is suggestively labelled O.

Rotate your perspective by 45 degrees, and the picture becomes:

        \ /
     \ /   \ /
      *     *
  \ /   \ /   \ /
   *     O     *
 /   \ /   \ /   \
      *     *
    /   \ /   \
       /   \

From this point of view each of the Ant's steps is both an up/down
choice and a left/right choice, and the vertical and horizontal
motions are independently random.

In one dimension the number of ways to return to the origin in 2n
steps is clearly C(2n,n), i.e. an equal number of steps in each direction.
The probability of doing so is thus:

p(0;2n) = C(2n,n)/(2^(2n))

This shortcut makes it clear that the two-dimensional probability of
being at the origin at time t = 2n is then the product of the
respective probabilities in both dimensions, i.e.

p(0,0;2n) = (C(2n,n)/(4^n))^2

just as we'd previously claimed.
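The change of coordinates is easy to verify mechanically. The Python
sketch below (my own illustration, not part of the original answer)
checks that the rotation u = dx + dy, v = dx - dy turns a uniform NESW
step into two independent fair +-1 steps, and then confirms the product
formula by simulation:

```python
from math import comb
import random

# The four lattice steps (dx, dy), rewritten in rotated coordinates:
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
rotated = [(dx + dy, dx - dy) for dx, dy in steps]
# Every combination of u = +-1 with v = +-1 appears exactly once, so a
# uniform NESW step is a pair of independent fair +-1 steps:
assert sorted(rotated) == [(-1, -1), (-1, 1), (1, -1), (1, 1)]

# Monte Carlo check that p(0,0;2n) = (C(2n,n)/4^n)^2, here at 2n = 8:
random.seed(1)
n, trials, at_origin = 4, 200_000, 0
for _ in range(trials):
    x = y = 0
    for _ in range(2 * n):
        dx, dy = random.choice(steps)
        x, y = x + dx, y + dy
    at_origin += (x == 0 and y == 0)
exact = (comb(2 * n, n) / 4 ** n) ** 2
print(at_origin / trials, exact)  # the two values agree closely
```

The walk is back at the origin at time 2n exactly when both rotated
one-dimensional walks are, which is the whole content of the shortcut.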


Again I must apologize for running short of time, but I will continue
next with a discussion of the three-dimensional case and above.  The
forecast is for a great deal of hand-waving, so bring your raincoats
and rubbers (galoshes) for protection.

regards, mathtalk-ga

Request for Answer Clarification by grigri9-ga on 31 Jan 2004 12:35 PST
Since I'm at a ski resort I have the necessary rain gear for our foray
into the 3rd dimension.

Clarification of Answer by mathtalk-ga on 01 Feb 2004 09:10 PST
Appendix for two-dimensional random walk

We refer above to simplifying the formula for v:

v = SUM n (1 - u) u^(n-1) FOR n = 1 to oo

"by way of geometric series" to this:

v = 1/(1 - u)

The purpose of this short appendix is to make good on this claim by
showing how it can be done (though not necessarily without my morning
coffee).

One often encounters a summation that varies the well known geometric series:

1/(1 - r) = SUM r^(n-1) FOR n = 1 to oo

where |r| < 1, by throwing in a factor of n:

SUM n r^(n-1) FOR n = 1 to oo

What to do?  I can never remember, so I wind up writing the terms in a
triangle.  We have one copy of the first term (n=1), two of the second
(n=2), etc., and these may be arranged like this:

  1  r  r² r³ . . .
     r  r² r³
        r² r³

Recalling that unconditional convergence (for |r| < 1) means never
having to say you're sorry for rearranging the terms of a series, we
can add up first going across the rows (instead of first going down
the columns), and get:

 1 / (1 - r)
 r / (1 - r)
 r²/ (1 - r)
 r³/ (1 - r)

So we have another geometric sum, and after adding these up:

 (1/(1 - r)) / (1 - r) = 1/(1 - r)^2

Thus 1/(1 - r)^2 = SUM n r^(n-1) FOR n = 1 to oo.  
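A quick numerical spot-check of this identity (my own Python sketch,
using the arbitrary sample value r = 0.5):

```python
# Partial sums of SUM n r^(n-1) approach 1/(1 - r)^2 for |r| < 1.
r = 0.5
partial = sum(n * r ** (n - 1) for n in range(1, 100))
print(partial, 1 / (1 - r) ** 2)  # both are (essentially) 4.0
```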

Another approach, purely algebraic, is to replace n in the original
summation by its own sum, n = SUM 1 FOR k = 1 to n, and work out the
double sum through a change in the order of summation.  But the idea
is basically contained in the picture shown.

Finally, let's use that to do the simplification here:

v = SUM n (1 - u) u^(n-1) FOR n = 1 to oo

  = (1 - u) SUM n u^(n-1) FOR n = 1 to oo

  = (1 - u) * 1/(1 - u)^2

  = 1/(1 - u)

as desired.
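And once more a numerical spot-check (my own sketch, with an arbitrary
sample value u = 0.75):

```python
# Check v = SUM n (1-u) u^(n-1) = 1/(1-u) numerically.
u = 0.75
v_partial = sum(n * (1 - u) * u ** (n - 1) for n in range(1, 500))
print(v_partial, 1 / (1 - u))  # both are (essentially) 4.0
```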

Clarification of Answer by mathtalk-ga on 07 Feb 2004 20:25 PST

I noticed a mistake in my limits of summation for v, the alternative
expression, which should begin with n = 0 (the starting time), like
this:
"Now the alternative formula we want is this:

v = SUM p(0,0;n) FOR n = 0 to oo"

[Previously I'd started the summation at n = 1, but noted correctly
above that the summation should include the "initial condition"
p(0,0;0) = 1.]

-- mathtalk-ga
grigri9-ga rated this answer:5 out of 5 stars
Great answer. I haven't had a chance to go through all of it but it
looks to be really well written and you did a really great job.

Subject: Re: Probability in N dimensions
From: hedgie-ga on 28 Jan 2004 05:40 PST
Hi grigri9

  I did look at your (well formulated) question.

 The result is well known, and this is one way to state it:

>for n >= 3, all states in the Markov chain are transient, whence the
>probability that the "random walker" will ever return to the origin is
>not 1. For n < 3, the states of the chain are recurrent, and the
>"random walker" will eventually return to the origin with probability
>one.

 But I am afraid there is no elementary proof. Even for N=1 there is a
summation of an infinite series, which is calculus. So I do not have an
answer, but I will post a couple of links and terms which I encountered
while looking for the answer, which point to off-line sources
(called books) that may be helpful.

                 Probability of eventual return

probability_book/Chapter12.pdf ]

which has proof for n=1 on page 6.
 It is above the level you specified, but
it may give you some feel for what is involved.

Here is a suggested book, which may have a proof for n>1:
           The Random Walk For Dummies
 ... Rota's book on probability [3]. The one-dimensional discrete case

and here are a couple of resources used for an undergrad course
which includes your question:

Random walks (aka Markov chains) are easy to program. It may be a good
idea to try some numerical experiments.

This may be a way to start (I do not understand the writeup either, but
the program is in java)

and here are some programs for matlab
(there is a free analog of matlab, called octave):
some programs for matlab

Good luck


When a URL is in brackets, like this

[  http:// ....
 ...htm  ]

it means the whole multi-line web address has to be pasted into the
browser (eliminating any white space, and the brackets); it does not
work to just click on it.

