Hi, grigri9-ga:
The original proof that a classic random walk (here typified as the
wanderings of an Ant) returns to its origination point with
probability 1 in one and two dimensions, but with probability less
than 1 in all higher dimensions, is due to George Pólya (1921).
A relatively elementary treatment of Pólya's theorem is available in a
Carus Monograph by Peter G. Doyle and J. Laurie Snell:
[Random Walks and Electrical Networks (Carus Mathematical Monographs, No 22)]
http://www.book.nu/0883850249
This is a longer version of Peter Doyle's thesis:
[Application of Rayleigh's shortcut method to Pólya's recurrence problem]
http://math.dartmouth.edu/~doyle/docs/thesis/thesis/thesis.html
and it also is freely available on the Web (under the terms of the GNU
General Public License). See:
[arXiv:math - Random Walks and Electric Networks]
http://front.math.ucdavis.edu/math.PR/0001057
where you can download the monograph in Postscript, PDF, and various other formats.
There are exact formulas for the "probability of return" in dimensions
d > 2. See for example the formulas here:
[Pólya's Random Walk Constants]
http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
and in a bit greater depth here for d = 3:
[Mathsoft Constants: Pólya's Random Walk Constant]
http://www.mathcad.com/library/Constants/polya.htm
References: G. Pólya, Über eine Aufgabe der
Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz,
Math. Annalen 84 (1921) 149-160.
****
I recall an avant-garde sci-fi story in which the author told first
the ending of the story, then the middle, and finally its beginning.
The Reader may see this math proof as presented in a similar fashion,
but I'm simply trying to tie up all the loose threads properly. I
leave the judgement of success on that up to you.
Our Ant's steps mark equally spaced points along a line. Picking one
of these to be the origin (zero), we number the remaining points
forward as 1, 2, 3, etc., and those behind as -1, -2, -3, and so
forth. Wherever the Ant is at one time, the next step goes forward or
backward, each with one half probability and with complete
independence from the Ant's prior travels.
Define the "little p" probability p(k;t) as the chance of the Ant
being in location k at time t. Of course the actual value of this
probability will depend on our assumptions about where the Ant was
previously. We might, for example, specify as one "initial condition"
the Ant to be in location k=0 at time t=0. Let us regard this for
the moment, though, as merely one of many possible initial conditions
that can be specified and look for the general principles which relate
"little p" at earlier times to later times.
"Big P" probability P(k;t) we define as a conditional probability:
given that the Ant is in location k at time t, what chance does it
subsequently have to reach the origin 0 (either _at_ time t or after)?
It will turn out that P(k;t) = 1 for all k and t. Once we show that,
our proof is done. Here's why:
If the Ant starts at the origin at time t = 0, then at time t = 1 the
Ant either is at k = 1 or else at k = -1. Those two cases are equally
likely, disjoint, and symmetric. Combining them gives us the desired
result:
Pr( an Ant who starts at origin eventually returns there )
= Pr( Ant goes to k=1 at t=1 ) * P(1;1)
+ Pr( Ant goes to k=-1 at t=1 ) * P(-1;1)
= (0.5 * 1) + (0.5 * 1)
= 1
QED
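The one-dimensional result lends itself to a quick empirical check. Here is a minimal Python simulation sketch (the function name, seed, and trial counts are my own choices, not part of the original answer) estimating the probability of a return to the origin within a fixed horizon:

```python
import random

def returns_within(max_steps):
    """Simulate one 1-D walk from the origin; report whether it
    revisits the origin within max_steps steps."""
    position = 0
    for _ in range(max_steps):
        position += random.choice((-1, 1))
        if position == 0:
            return True
    return False

random.seed(0)
trials = 2000
hits = sum(returns_within(10_000) for _ in range(trials))
print(hits / trials)  # close to 1, as the theorem predicts
```

With any finite horizon the estimate falls a little short of 1, since convergence in one dimension is slow; raising max_steps pushes the fraction closer.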
****
In the middle of the proof we show P(k;t) = 1. Here's what we will
know, once we get there, about P(k;t):
a) P(k;t) is a (conditional) probability, so it's between 0 and 1.
b) P(0;t) = 1 (as the Ant thus reaches the origin).
c) For nonzero k, we have these equations:
P(k;t) = 0.5 * ( P(k+1;t+1) + P(k-1;t+1) )
which will be explained shortly.
Think for a moment about how P(k;t) will depend on time t. Is there
any difference in the Ant's ultimate prospect of reaching the origin
from location k depending on whether the Ant is there at time t or
some other time?
The answer is no, because whenever the Ant is at that location, an
eternity of future steps await. Thus we drop the apparent dependence
of P(k;t) on time and write more simply that:
P(k) = 0.5 * ( P(k+1) + P(k-1) ) for all nonzero k,
and also that P(0) = 1.
Now the case when k < 0 is just the mirror image of when k > 0, and as
was discussed at one point, the Ant, having committed to one side of
the origin, cannot reach the other side without passing back to the
origin. So we can stipulate by symmetry that for all k:
P(-k) = P(k)
and then focus solely on the cases k > 0.
Let's do a clever bit of algebra and rewrite for k > 0 the equation above as:
2 * P(k) = P(k-1) + P(k+1)
2 * P(k) - P(k-1) = P(k+1)
P(k) - P(k-1) = P(k+1) - P(k)
This tells us that the difference between P(k) in two consecutive
locations is always the same (by induction if we want to be thorough).
In other words the values P(0), P(1), P(2), etc. form an arithmetic
progression because the consecutive terms have a common difference.
If we call that common difference D, then by the formula for an
arithmetic progression and by the fact that the initial value P(0) = 1,
then:
P(k) = 1 + kD for all k > 0
What are the possible values for D? If D > 0, then we'd have:
P(1) = 1 + D > 1
and that's not possible (for a probability). Furthermore if D < 0,
then eventually for large enough k, we'd have:
P(k) = 1 + kD < 0
(namely when k > -1/D), and again it's impossible to have a negative probability.
So the only possible value for D is 0, and thus P(k) = P(k;t) = 1 for
all k and all t.
****
Great, we've worked our way back to the beginning. Just a few points
to clear up and we'll be done with the recurrence proof for one
dimension.
Recall that the Ant moves forward (resp. backward) with probability
1/2. So if the Ant is in location k at time t, then it has 1/2 chance
of being in location k+1 at time t+1 and the other 1/2 chance of being
in location k-1 then.
Look at it from the other perspective. The chance that an Ant will be
in location k at time t+1 is equal to half of the chance that it was
in location k+1 at time t plus half of the chance that it was in
location k-1 at that time. In terms of an equation for the "little p"
terms:
p(k;t+1) = 0.5 * p(k+1;t) + 0.5 * p(k-1;t)
This relationship allows us to determine the probability distribution
for the Ant's location at time t+1 from the similar distribution of
location at time t, or by repetition to go forward in time say from
t=0 to any future step.
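The "little p" recurrence can be iterated directly. The following short Python sketch (my own illustration; the helper name `step` is not from the original answer) starts from the initial condition p(0;0) = 1 and applies the equation a few times:

```python
from collections import defaultdict

def step(p):
    """Apply p(k;t+1) = 0.5*p(k+1;t) + 0.5*p(k-1;t) to a distribution,
    given as a dict mapping location k to probability."""
    nxt = defaultdict(float)
    for k, prob in p.items():
        nxt[k - 1] += 0.5 * prob
        nxt[k + 1] += 0.5 * prob
    return dict(nxt)

p = {0: 1.0}               # initial condition: Ant at the origin at t = 0
for _ in range(4):
    p = step(p)
print(p)  # {-4: 0.0625, -2: 0.25, 0: 0.375, 2: 0.25, 4: 0.0625}
```

Note the binomial pattern 1, 4, 6, 4, 1 over 16, and that the probabilities always sum to 1, as a distribution must.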
Here a mathematician might introduce matrix multiplication as a way of
representing generally the "transition probabilities" from one
location to another. The probability distributions for the Ant's
location would be expressed as a kind of vector v_t (infinitely long,
but with only finitely many nonzero entries and thus well behaved as
far as arithmetic goes). The "Markov process" matrix M which would
multiply such a vector, giving:
v_{t+1} = M v_t
would itself then need infinitely many rows as well as columns (both
extending indefinitely in both directions!). We sketched such a
possibility in the earlier Clarifications and mention it now only for
insight. We can proceed to establish the necessary properties of
P(k;t) directly from the equations above.
Keeping in mind the definition of P(k;t), the probability of reaching
the origin (now or in the future) if the Ant is in location k at time
t, the specialness of case k = 0 (the Ant is already at the origin)
should be apparent. There is no "circularity" of the proof in saying:
P(0;t) = 1
though it is natural I think to be suspicious of this point. The
above merely states that if the Ant is in location 0, it will reach
the origin (then or later). It is not about "returning" to the
origin, because as a conditional probability, it assumes only
knowledge of the Ant's location at time t. Any prior position is
moot. The issue of return to the origin will only be addressed in our
final argument, where we have the Ant go first either to location 1 or
-1. It is from there that the Ant (with probability 1) "returns" to
the origin at a future step.
But what about P(k;t) for nonzero k? A little thought should convince
us that here P(k;t) must obey _almost_ the same relation as cited
above for the "little p" terms. The change may seem subtle, but
consider the Ant's prospects for reaching the origin if in location k
at time t. Since the Ant has "yet" to reach the origin for nonzero k,
we consider that the next step will take the Ant to location k+1 or
k-1, with probability 1/2 each. Thus:
P(k;t) = 0.5 * P(k+1;t+1) + 0.5 * P(k-1;t+1)
To restate this observation in words, the chances that Ant will reach
the origin are one half times the chances of reaching from location
k+1 at the next time step plus one half times the chances of reaching
from location k-1 at the next time step. Note that here the time t
appears to run in reverse to its appearances in the "little p"
equations, with t+1 on the right hand side of the equations rather
than on the left.
With this observation the equations c) needed in the middle of our
proof are established for nonzero k. We have come "full circle" on
the one dimensional proof.
****
The two dimensional random walk is more delicate, but not beyond
giving a fairly deep insight based on secondary school mathematics.
For example, where in the one dimensional case our analysis hinged on
an arithmetic progression, in the two dimensional case the facts can
be boiled down to the divergence of a harmonic series.
The evening has grown a bit late, and I must beg the Reader's
indulgence as I take some rest now. I will return ???
mathtalk-ga

Clarification of Answer by mathtalk-ga on 31 Jan 2004 11:05 PST
Often it both simplifies a proof and improves our understanding of it
if we spot how what we first set out to prove is really only part of a
larger truth.
Here, for example, we set out to prove that if the Ant starts at the
origin, then the probability of returning to the origin is 1.
We wound up proving more than that, namely that no matter where the
Ant starts, the probability of reaching the origin is 1. That's the
implication of P(k) = 1 for all k, since P(k) is simply the
conditional probability of reaching the origin given that the Ant is
(initially) in location k.
This makes sense because the Ant, even if blessed with memory and
"happy" at the return to its old stomping grounds, doesn't rely on
that memory at all in its random movements. So in fact we are really
showing that the Ant's chances of going from point a to point b are
always 1 (given an indefinite amount of time).
But there's actually another useful way to "extend" the theorem
without proving anything more, and that's by way of counting how many
times the Ant will visit the origin. Let's count the initial starting
point as visiting once. Then the number of times the Ant visits the
origin is (with probability 1) at least 2.
But if the Ant makes 2 visits, why not 3? or 4? or infinitely many?
Bingo. What we have actually proved (or almost so, perhaps a few more
words are needed) is that the Ant is expected to visit the origin an
infinite number of times.
And surprisingly it may be a bit easier to prove that the Ant will
visit the origin infinitely often than just to prove it returns once.
In any event it provides a path to settling the cases in higher
dimensions.
****
Let V be the random variable that counts the total number of times the
Ant visits the origin, given that the Ant starts there. Define the
expected value of V:
v = E(V) = SUM n * Pr( V = n ) FOR n = 1 to oo
To connect this with our previous work, we need to define:
u = Pr( an Ant who starts at origin eventually returns there )
Although this was our central focus before, we somehow never defined
it as an unknown! But once we have it, we can make this connection:
Pr( V = n ) = (1 - u) * u^(n-1)
That is, counting the initial starting point as one visit, the chance
of visiting exactly n times is the product of the chance of returning
n-1 times and the chance of then never returning, or in other words
"escaping".
Plugging this expression for Pr(V = n) back into the equation for v:
v = SUM n (1 - u) u^(n-1) FOR n = 1 to oo
Although this looks a bit ungainly, it can be simplified by way of
geometric series to this:
v = 1/(1 - u)
This relationship is actually valid in ALL dimensions. If we look
back over this brief section, we made no mention of the number of
dimensions, and the computation generally establishes a reciprocal
relation between the expected number of times v that the Ant will
visit the origin and (1 - u), the probability of "escaping" (to
infinity and beyond, to borrow Buzz Lightyear's phrase).
Simply put, u = 1 if and only if v is infinite.
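The geometric-series simplification is easy to confirm numerically. Here is a small Python check (my own sketch; the function name and sample values of u are illustrative) comparing the partial sums of the series with the closed form 1/(1 - u):

```python
def expected_visits(u, terms=10_000):
    """Partial sum of v = SUM n*(1-u)*u^(n-1) FOR n = 1 to oo."""
    return sum(n * (1 - u) * u ** (n - 1) for n in range(1, terms + 1))

for u in (0.3, 0.5, 0.9):
    print(u, expected_visits(u), 1 / (1 - u))  # the two columns agree
```

For any u strictly less than 1 the tail vanishes quickly, and as u approaches 1 the closed form 1/(1 - u) blows up, just as the text describes.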
In one and two dimensions the value of v is infinite (and the value of
u is correspondingly 1). In three or more dimensions the value of v
is finite, and the value of u strictly less than 1. Said another way,
in more than two dimensions, there is a positive probability of escape,
(1 - u) > 0.
****
Let's set about calculating v in two dimensions. As promised earlier,
we will make our way to a harmonic series, which is known to diverge
to infinity, and this will prove v is infinite (therefore u = 1).
In case the divergence of the harmonic series is not one of the facts
that comes readily from the Reader's store of mathematical knowledge,
let's review.
"The" harmonic series is this:
1 + 1/2 + 1/3 + 1/4 + ... = SUM 1/n FOR n = 1 to oo
but we can properly apply the term harmonic series to the summation of
reciprocals of any (nonzero) arithmetic progression.
These always diverge, although there is an interesting and useful
variant which converges (conditionally), the alternating harmonic
series:
1 - 1/2 + 1/3 - 1/4 + ... = ln(2)
But let's keep our noses to the grindstone, shall we? Why does "the"
harmonic series diverge? (Other harmonic series can then be shown to
diverge by virtue of a "comparison" to this one.)
The partial sums of the harmonic series are monotonically increasing,
because all its terms are positive. Therefore showing that a
subsequence of these partial sums tends to infinity shows also that
the entire series diverges to infinity (unconditionally).
The idea is simply to group terms like this:
1 + (1/2) + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ...
Each group of terms, when inwardly combined, contributes at least 1/2.
Since there are infinitely many of these, the summation is infinite.
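The grouping argument predicts that after 2^m terms the partial sum is at least 1 + m/2. A quick Python check (my own sketch, not part of the original answer):

```python
def harmonic_partial(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# After 2^m terms, the grouping argument guarantees at least 1 + m/2.
for m in (4, 8, 16):
    print(m, harmonic_partial(2 ** m), 1 + m / 2)
```

The actual partial sums grow like ln(n), comfortably ahead of the 1 + m/2 lower bound, but either way they grow without limit.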
****
We need an alternative formula for v to show that it is infinite,
because the one we have is already "spent" in providing the
relationship to u.
Let's revise the "little p" notation for the two dimensional case and
employ two space coordinates j,k:
p(j,k;t)
Furthermore, to set the stage for our calculations, we assume:
p(0,0;0) = 1
p(j,k;0) = 0 otherwise (j,k not both zero)
Now the alternative formula we want is this:
v = SUM p(0,0;n) FOR n = 1 to oo
Intuitively this says that, to the average count v of all the times
that the Ant visits the origin, each p(0,0;n) contributes a
probability at time t=n that the Ant would be visiting then. This
intuition can be made rigorous by expressing the random variable V as
the sum of (infinitely many) random variables which count only whether
the Ant visits the origin at time t=n. As v is the expected value of
V, so too are the probabilities p(0,0;n) the expected values of the
"boolean" function which is 1 if the Ant visits the origin at time t=n
and is 0 otherwise.
****
Because the Ant starts at the origin, it happens that after an odd
number of steps the Ant will always be someplace else. If a return to
the origin is going to take place, it must occur in an even number of
steps. We therefore narrow our focus to the cases t = 2n.
At each step there are 4 equally likely outcomes in two dimensions, so
to figure out p(0,0;2n) we need to sort out which of the 4^(2n)
outcomes would bring the Ant back to the origin.
The requirement is that there have to be an equal number of "up" steps
as "down" steps, and also an equal number of "left" steps as "right"
steps.
How many of the 4^(2n) outcomes satisfy these conditions? A familiar
formula from combinations and permutations comes to our rescue. If
all the steps were (somehow) distinguishable, then there would be
(2n)! arrangements of them. But if we have k "up" steps, then there
must be k "down" steps as well as (n-k) "left" and (n-k) "right"
steps, and each different kind of step forms an interchangeable group.
So the answer is:
SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n
Thus, since each distinct outcome has probability 1/4^(2n):
p(0,0;2n) = 4^(-2n) SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n
Amazingly this can be simplified to:
p(0,0;2n) = 4^(-2n) * C(2n,n)^2
where C(2n,n) means "combinations of 2n things taken n at a time":
C(2n,n) = (2n)!/ (n! n!)
I'll omit such simplification for now as there's actually a cute
shortcut to obtain this in two dimensions from one, which I'll explain
shortly.
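In the meantime, the claimed identity is easy to verify for small n with a few lines of Python (my own sketch; `return_count` is an illustrative name, not from the original answer):

```python
from math import comb, factorial

def return_count(n):
    """Number of 2n-step outcomes in two dimensions ending at the origin:
    SUM (2n)! / ( k! k! (n-k)! (n-k)! ) FOR k = 0 to n."""
    return sum(factorial(2 * n) // (factorial(k) ** 2 * factorial(n - k) ** 2)
               for k in range(n + 1))

for n in range(1, 8):
    assert return_count(n) == comb(2 * n, n) ** 2
print("sum equals C(2n,n)^2 for n = 1..7")
```

For instance, at n = 2 both sides give 36 of the 256 four-step outcomes.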
****
All that remains is to connect the terms p(0,0;2n) with the terms of a
harmonic series.
To do this requires estimating C(2n,n), and for this we appeal to
Stirling's approximation of the factorial:
n! ~ SQRT(2*pi*n) e^(-n) n^n
[Stirling's Approximation]
http://hyperphysics.phy-astr.gsu.edu/hbase/math/stirling.html
[Stirling's Approximation (with error terms)]
http://ece.colorado.edu/~bart/book/stirling.htm
A proof of Stirling's approximation lies in the realm of calculus, as
does a justification that this is "good enough" for our purposes. But
I think it fits in reasonably well with a secondary school math
background, because bright students are naturally curious about how
rapidly the factorial function grows, and this estimate makes a
precise connection between the factorial and exponential growth.
Now let's tackle our estimate:
C(2n,n)/(4^n) = (2n)! / ( n! n! 4^n )

                  SQRT(4*pi*n) e^(-2n) (2n)^(2n)
                ~ ------------------------------
                   (2*pi*n) e^(-2n) n^(2n) 4^n

                = 2 * SQRT(pi*n) / (2*pi*n)

                = 1/SQRT(pi*n)
Almost done now:
p(0,0;2n) = (C(2n,n)/(4^n))^2
~ ( 1/SQRT(pi*n) )^2
= 1 / (pi*n)
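The quality of this approximation can be seen numerically. The Python sketch below (my own illustration) computes the exact p(0,0;2n) and divides by the asymptotic value 1/(pi*n); the ratio should approach 1 as n grows:

```python
from math import comb, pi

def p_origin_2d(n):
    """Exact p(0,0;2n) = (C(2n,n)/4^n)^2 for the 2-D walk."""
    return (comb(2 * n, n) / 4 ** n) ** 2

for n in (10, 100, 1000):
    print(n, p_origin_2d(n) * pi * n)  # ratio to 1/(pi*n); tends to 1
```

This also illustrates the "good enough" point: the relative error shrinks like a constant over n, which does not disturb the divergence of the series.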
Thus our summation for v would be "like" the harmonic series:
v = SUM p(0,0;2n) FOR n = 1 to oo
~ SUM 1 / (pi*n) FOR n = 1 to oo
= (1/pi) SUM 1/n FOR n = 1 to oo
Recall from the earlier discussion that since v is infinite, it means
the probability of "escape" (1 - u) is zero, and thus the probability
of return u is 1.
****
I want to close our discussion of the two-dimensional random walk by
pointing out that, by a clever change of perspective, it is identical
to a matched pair of one-dimensional random walks.
The Reader has been visualizing (I hope) the lattice on which the Ant
wanders in two dimensions in a fashion like this:

 |  |  |
-*--*--*-
 |  |  |
-*--0--*-
 |  |  |
-*--*--*-
 |  |  |

where the origin is suggestively labelled 0.
Rotate your perspective by 45 degrees, and the picture becomes:
      \ /
       *
    \ / \ /
     *   *
  \ / \ / \ /
   *   O   *
  / \ / \ / \
     *   *
    / \ / \
       *
      / \
From this point of view each of the Ant's steps is both an up/down
choice and a left/right choice, and the vertical and horizontal
motions are independently random.
In one dimension the number of ways to return to the origin in 2n
steps is clearly C(2n,n), i.e. an equal number of steps in both directions.
The probability of doing so is thus:
p(0;2n) = C(2n,n)/(2^(2n))
This shortcut makes it clear that the two-dimensional probability of
being at the origin at time t = 2n is then the product of the
respective probabilities in both dimensions, i.e.
p(0,0;2n) = (C(2n,n)/(4^n))^2
just as we'd previously claimed.
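Both halves of this change of perspective can be checked mechanically. The Python sketch below (my own illustration, not part of the original answer) first confirms that rotating (x, y) to (x + y, x - y) turns the four compass steps into the four (±1, ±1) pairs, so the two diagonal coordinates move as independent one-dimensional walks; it then brute-forces the product formula for n = 2:

```python
import itertools
from math import comb

steps = [(0, 1), (0, -1), (1, 0), (-1, 0)]       # N, S, E, W

# Rotating 45 degrees sends (x, y) to (x + y, x - y): each compass step
# becomes one of the four (+-1, +-1) pairs, each with probability 1/4.
rotated = sorted((x + y, x - y) for (x, y) in steps)
assert rotated == [(-1, -1), (-1, 1), (1, -1), (1, 1)]

# Brute-force check of p(0,0;2n) = (C(2n,n)/4^n)^2 for n = 2:
n = 2
count = sum(1 for walk in itertools.product(steps, repeat=2 * n)
            if sum(dx for dx, _ in walk) == 0
            and sum(dy for _, dy in walk) == 0)
print(count / 4 ** (2 * n), (comb(2 * n, n) / 4 ** n) ** 2)  # both 0.140625
```

Enumerating all 256 four-step walks finds 36 that return to the origin, matching C(4,2)^2 = 36 exactly.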
****
Again I must apologize for running short of time, but I will continue
next with a discussion of the three-dimensional case and above. The
forecast is for a great deal of handwaving, so bring your raincoats
and rubbers (galoshes) for protection.
regards, mathtalk-ga