Hi, cosmosofbits-ga:
Here's the approach I propose for your Question. The "standard"
mathematical functions mentioned:
sin, cos, ln, e^x, ...
are all members of a class called the elementary transcendental
functions. These are "elementary" in the sense of being the simplest
instances of "special functions" (which are themselves characterized
as solutions of a restricted class of ordinary differential
equations).
To draw kind of a big "lasso" around everything that one might throw
into the recipe, I'm going to consider all of these as analytical
functions of one variable:
[Analytic Functions and Singularities]
http://www.ee.cooper.edu/courses/course_pages/97_Fall/EE114/complex/node2.html
An analytic function is a complex-valued function of a complex
variable that has a complex derivative, which turns out to be a much
stronger notion than the existence of a real-valued derivative. In
fact if a complex function has one derivative, it has derivatives of
all orders (which is certainly not the case for real
differentiability).
So functions like sine and exponential (which turn out to be very
closely related through their complex values) are quite "smooth".
When they do have places where they fail to be smooth
(singularities), even these turn out to be remarkably well-behaved.
For example, the singularities of the analytic functions we will meet
here are "isolated" (they don't pile up together).
A good example of a singularity in an analytic function is the
logarithm "blowing up" at zero.
The singularities of rational functions like 1/x at zero turn out to
be so nicely behaved, when viewed in the complex plane, that they get
a special name ("poles"). Other than the fact that the rational
function tends to "infinity" at such a point, the right perspective
(allowing "infinity" as a value, as on the Riemann sphere) shows a
pole is actually as "smooth" as any other point of the function.
The singularity of the logarithm at zero is of a different kind,
called a "branch point", and there is a third kind, the "essential
singularity" (e^(1/z) at zero is the standard example); these names
emphasize the distinction from the tame behavior at poles.
Now strictly speaking you are considering functions of two variables
(bitwise And, bitwise Or) defined on a "dense" subset of the real
interval [0,1], specifically those values which have "terminating"
binary expansions. Analytic functions of two variables can be much
more "complex" (forgive the pun) than those of one variable. To keep
things simple, I propose to look at the restriction of one argument
to 1/2 = 0.1000000... (binary). That will be sufficient.
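To make this concrete, here is a small Python sketch of the bitwise
And on such numbers (my own illustration, not anything standard; the
64-bit cutoff is an arbitrary choice, large enough for our samples):

  from fractions import Fraction

  def bitwise_and(x, y, bits=64):
      # Scale values in [0,1) by 2^bits; terminating binary expansions
      # become exact integers, and Python's integer "&" does the rest.
      scale = 1 << bits
      return Fraction(int(Fraction(x) * scale) & int(Fraction(y) * scale),
                      scale)

  print(bitwise_and(Fraction(3, 4), Fraction(1, 2)))  # 0.11 & 0.10 -> 1/2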
We will see that the resulting function And(x,1/2) is much too
"rough" to be matched by any analytic function of one variable.
Properties of And(x,1/2)
========================
As a general proposition, And(x,y) is the same as And(y,x).
Also, since no bits are set in And(x,y) unless they are set in x:
And(x,y) <= x
By symmetry of course it's true that:
And(x,y) <= y
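These properties are easy to spot-check with the sketch above (the
sample points are arbitrary):

  # Spot-check symmetry and the upper bounds on a few samples:
  samples = [Fraction(3, 8), Fraction(1, 2), Fraction(11, 16)]
  for x in samples:
      for y in samples:
          v = bitwise_and(x, y)
          assert v == bitwise_and(y, x)   # And(x,y) = And(y,x)
          assert v <= x and v <= y        # And(x,y) <= x, And(x,y) <= y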
Now for the specific case y = 1/2 and 1/2 < x < 1, we have:
And(x,1/2) = 1/2 for all x in (1/2,1)
In other words the function And(x,1/2) is constant on (1/2,1).
For 0 < x < 1/2, the function And(x,1/2) is a different constant:
And(x,1/2) = 0 for all x in (0,1/2)
So we have a "jump discontinuity" in And(x,1/2) at x = 1/2.
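Sampling on either side of x = 1/2 with the same sketch makes the
jump plain:

  # And(x,1/2) just below and just above x = 1/2:
  for x in [Fraction(3, 8), Fraction(7, 16),    # below 1/2: value 0
            Fraction(9, 16), Fraction(3, 4)]:   # above 1/2: value 1/2
      print(x, bitwise_and(x, Fraction(1, 2)))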
Of course we could give two _separate_ formulas for these two cases,
but what we'll show is that we cannot give a single analytic function
that agrees on both intervals _except_ by splitting their domain of
definition into two cases.
Analytic Continuation
=====================
Since an analytic function f(z) has infinitely many derivatives at any
non-singular point z_0, it can be expanded in a power series, i.e. the
Taylor series expansion around that point:
f(z) = SUM (z - z_0)^k * f^(k)(z_0)/k!  FOR k = 0,1,2,...
The radius of convergence of such a power series turns out to be the
distance from z_0 to the nearest singularity of f(z), which is a
positive distance away (the isolation of singularities).
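A classic illustration (my own sketch again) of why the singularities
must be sought in the complex plane: 1/(1+x^2) is perfectly smooth on
the whole real line, yet its Taylor series around 0 stops converging
at distance 1, exactly the distance from 0 to its poles at +i and -i:

  def partial_sum(x, terms=60):
      # Taylor series of 1/(1+x^2) around 0: SUM (-1)^k * x^(2k)
      return sum((-1) ** k * x ** (2 * k) for k in range(terms))

  for x in (0.5, 1.5):
      print(x, partial_sum(x), 1 / (1 + x * x))
  # At x = 0.5 the partial sums settle near 0.8; at x = 1.5 they blow
  # up, because 1.5 exceeds the distance to the poles at +i and -i.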
The power series uniquely determines the analytic function with those
derivatives in the disk where it converges.
Consequently two analytic functions that agree on an interval must
agree on any disk containing that interval where the corresponding
power series converges. This is a simple yet powerful uniqueness
property referred to as analytic continuation:
[Analytic Continuation - Mathworld]
http://mathworld.wolfram.com/AnalyticContinuation.html
The Contradiction
=================
Suppose f(x) = And(x,1/2) were an analytic function. It agrees with
the constant 1/2 on the interval (1/2,1), and the constant power
series f(x) = 1/2 converges everywhere in the complex plane, so f(x)
must be equal to 1/2 everywhere. But on the other hand, it also
agrees with the constant 0 on the interval (0,1/2), and the power
series f(x) = 0 also converges everywhere! Thus 1/2 = 0, and this
contradiction shows And(x,1/2) cannot be given by an analytic
function. Therefore And(x,y) cannot be an analytic function of two
variables either, since fixing one argument of such a function at 1/2
would leave an analytic function of the remaining variable.
What Else Can We Try?
=====================
The jump discontinuity we found at x = 1/2 is just the tip of the
iceberg. There are jump discontinuities at every "terminating" binary
fraction; with y = 3/4, for example, And(x,3/4) jumps at x = 1/4, at
x = 1/2, and again at x = 3/4. So naturally any class of functions
"powerful" enough to represent the bitwise And will need to extend
the "standard" mathematical functions with some that allow jump
discontinuities.
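Sampling And(x,3/4) midway between the quarters with the earlier
sketch exhibits this staircase of jumps:

  # And(x,3/4) takes the values 0, 1/4, 1/2, 3/4 on successive quarters:
  for x in [Fraction(3, 16), Fraction(5, 16),
            Fraction(9, 16), Fraction(13, 16)]:
      print(x, bitwise_and(x, Fraction(3, 4)))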
If we were to introduce a "step function" that has a single jump, say:
/ 0 if x < 0
g(x) = <
\ 1 otherwise
then we could write And(x,1/2) = (1/2) * g(x - 1/2).
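A quick check that this single formula reproduces the bitwise result
(g here is just the step function defined above):

  def g(x):
      # the step function above: 0 for x < 0, 1 otherwise
      return 0 if x < 0 else 1

  for x in [Fraction(3, 8), Fraction(5, 8)]:
      print(x, bitwise_and(x, Fraction(1, 2)),
               Fraction(1, 2) * g(x - Fraction(1, 2)))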
However this would only introduce one jump discontinuity, and for all
of And(x,y) we need a countably infinite number of jump
discontinuities. No finite formula will get us there if we introduce
these one jump at a time!
So the bottom line is that in order to define the bitwise And function
with a "single" formula, we would need a building block that, like the
bitwise And, provides a countable number of discontinuities "at a
single blow". The standard mathematical functions don't bring us any
closer in that respect.
Please let me know if some part of my discussion needs further
clarification!
regards, mathtalk-ga