Q: Bitwise Operators (Answered, 5 out of 5 stars, 1 Comment)
Subject: Bitwise Operators
Category: Science > Math
Asked by: cosmosofbits-ga
List Price: $100.00
Posted: 18 Aug 2004 21:35 PDT
Expires: 17 Sep 2004 21:35 PDT
Question ID: 389777
Is it possible to represent bitwise OR or bitwise AND on rational
binary numbers between 0 and 1?

For example:
	A = .001001001001001001…     = 1/7
	B = .00010001000100010001… = 1/15
Is there a way to represent (A Or B) or (A And B) mathematically
utilizing standard mathematical operators such as {+, -, *, /, sin,
cos, ln, e^x, …}
Where A and B can be any number between 0 and 1? Or at least any
number with a repetitive pattern?

I understand that (A + B) = (A Or B) + (A And B) but I want to express
(A Or B) mathematically without reference to bitwise operators, or be
convinced that it is not possible.
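The identity quoted above can be sanity-checked on truncated bit patterns; a minimal Python sketch using exact rationals (the 48-bit truncation is an arbitrary choice):

```python
from fractions import Fraction

PLACES = 48  # truncate expansions to 48 bits (illustrative choice)

def truncated_bits(x):
    """Integer whose binary digits are the first PLACES bits of x in [0,1)."""
    return int(x * 2**PLACES)

a = truncated_bits(Fraction(1, 7))
b = truncated_bits(Fraction(1, 15))

# (A + B) = (A Or B) + (A And B), checked bit pattern by bit pattern
assert a + b == (a | b) + (a & b)
```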

Thank You

Request for Question Clarification by mathtalk-ga on 19 Aug 2004 05:04 PDT
Hi, cosmosofbits-ga:

First off, there's a "bit" of a problem in defining the binary
representation of real numbers between 0 and 1.  Of course every
number can be represented, but some numbers can be represented in two
ways, which means that a "bitwise" operation on such a number isn't
well-defined.

Specifically, any nonzero terminating binary fraction can be
represented either with trailing zeros (the terminating form) or with
trailing ones.  That is, if a number > 0 can be represented with just
a finite number of places (after which all the bits are zero), then it
can also be represented with a sequence that is identically 1 after
some point.
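For example, 1/2 can be written as 0.1000… or as 0.0111…, whose partial sums approach 1/2. A quick check with exact rationals (the 60-term cutoff is arbitrary):

```python
from fractions import Fraction

# 0.0111... = sum of 2^-k for k = 2, 3, 4, ...
partial = sum(Fraction(1, 2**k) for k in range(2, 62))
gap = Fraction(1, 2) - partial
assert gap == Fraction(1, 2**61)  # the gap shrinks to 0 as terms are added
```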

I can elaborate this into an explanation (based on discontinuity) of
why you can't represent the bitwise OR (resp. AND) using standard
mathematical operators and functions, if you wish, even if we restrict
ourselves to numbers with a repetitive pattern.

regards, mathtalk-ga

Clarification of Question by cosmosofbits-ga on 19 Aug 2004 06:38 PDT
Good point.
Specifically then: numbers between 0 and 1 of the form 1/(2^n).

I apologize for the lack of clarity. Please concentrate on numbers
that take the form above.

Clarification of Question by cosmosofbits-ga on 19 Aug 2004 06:42 PDT
Oops, I mean 1/(2^n - 1), i.e.: 1/3, 1/7, 1/15, 1/31, ...

Clarification of Question by cosmosofbits-ga on 19 Aug 2004 06:45 PDT
Bad morning....


If you can tell me why it is impossible to represent the bitwise OR of
numbers of that form using mathematical functions only, I will be
deeply satisfied.

Request for Question Clarification by mathtalk-ga on 19 Aug 2004 16:07 PDT
For the restricted case of arguments of the form 1/(2^n - 1) we can
construct a mathematical formula for bitwise A And B (and hence as you
already know, an equivalent formula for A Or B).

To use your original example:

A  =   1/7

B  =   1/15

A And B =  1/4095

A Or B = A + B - (A And B)
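The claimed value can be verified on truncated expansions; a sketch with Python's Fraction (the helper bits() and the 48-bit cutoff are illustrative choices):

```python
from fractions import Fraction

def bits(x, n):
    """First n binary digits of x in [0,1), most significant first."""
    out = []
    for _ in range(n):
        x *= 2
        b = int(x >= 1)
        out.append(b)
        x -= b
    return out

A, B = Fraction(1, 7), Fraction(1, 15)
conj = [p & q for p, q in zip(bits(A, 48), bits(B, 48))]
assert conj == bits(Fraction(1, 4095), 48)               # A And B = 1/4095
assert A + B - Fraction(1, 4095) == Fraction(857, 4095)  # A Or B
```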

Can you see what the general rule is?

regards, mathtalk-ga

Clarification of Question by cosmosofbits-ga on 19 Aug 2004 18:43 PDT
I don't think I understand the general formula.

A Or B = A + B - (A And B)

Is there a way to represent the right side of the equation above using
mathematical functions, in the restricted case of 1/(2^n - 1) for A and
B? Or is it impossible?

More Background:

The numbers I am actually concerned with are of the form
f(n) = (1/(2^n - 1)) - (1/2^n) for n > 1.

For Example:
n=2	=	.00010101010101010101010101...
n=3	=	.000001001001001001001001001...
n=4	=	.0000000100010001000100010001...
n=5	=	.000000000100001000010000100001...
This is the Sieve of Eratosthenes:
NOT(f(2) OR f(3) OR f(4) OR f(5) ...) = .01101010001010001010001..., a
number representing the prime numbers.
So if there were a way to represent (f(x) OR f(y)) mathematically, then
I could sieve the integers with what I think would be called a
closed-form equation. I think this is impossible, but I would like to
know specifically whether f(a) OR f(b) can be represented using only
mathematical terms.
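The sieving idea can be checked numerically on truncated expansions; in this sketch bits() and the 30-bit cutoff are illustrative choices:

```python
from fractions import Fraction

def bits(x, n):
    """First n binary digits of x in [0,1), most significant first."""
    out = []
    for _ in range(n):
        x *= 2
        b = int(x >= 1)
        out.append(b)
        x -= b
    return out

def f(n):
    # 1/(2^n - 1) has 1s at positions n, 2n, 3n, ...; subtracting 2^-n
    # clears position n, leaving the proper multiples 2n, 3n, ...
    return Fraction(1, 2**n - 1) - Fraction(1, 2**n)

N = 30
ored = [0] * N
for n in range(2, N):
    for i, b in enumerate(bits(f(n), N)):
        ored[i] |= b

# NOT of the OR leaves 1 and the primes unmarked
unmarked = [i + 1 for i, b in enumerate(ored) if b == 0]
assert unmarked == [1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```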

I appreciate your responses. I apologize for the lack of clarity. I
truly look forward to an answer, as I've wondered about this for a few
years now.

Request for Question Clarification by mathtalk-ga on 20 Aug 2004 04:44 PDT
Please see Comment below.

Your note suggests you are interested in some sort of closed form
expression for a prime counting function (or for a formula for the
n'th prime).  Such expressions do exist, although they are not
"practical" ways of computing their results, and there is a section
devoted to this in Ribenboim: "The new book of prime number records"
(Springer-Verlag) and also in Hardy and Wright: "An introduction to
the theory of numbers" (Oxford):

[Is there a formula for the n'th prime?]

Illustrative of these methods is a formula due to Willans which
encodes the prime counting function using "sine" and "factorial" to
effect Wilson's Theorem:

 pi(n) = SUM sin^2(pi*((j-1)!)^2/j) / sin^2(pi/j) FOR j = 2 to n

Note that we also require a summation whose length varies with the
input argument n.
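The formula can be evaluated directly; reducing the factorial modulo j before taking the sine keeps the floating-point arguments small (a sketch, not an efficient prime counter; prime_count is an illustrative name):

```python
import math

def prime_count(n):
    """pi(n) via the Wilson-theorem formula quoted above."""
    total = 0.0
    for j in range(2, n + 1):
        r = math.factorial(j - 1) ** 2 % j  # 1 iff j prime, else 0 (j > 1)
        total += math.sin(math.pi * r / j) ** 2 / math.sin(math.pi / j) ** 2
    return round(total)

assert prime_count(10) == 4   # primes 2, 3, 5, 7
assert prime_count(30) == 10
```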

regards, mathtalk-ga

Clarification of Question by cosmosofbits-ga on 20 Aug 2004 07:46 PDT

Basically you have answered my question. I believe your next post is an answer.
If there are further clarifications, it is possible for me to post another question.
I am not familiar with how the answer process works.

I realize that in the special case of numbers 1/(2^n - 1), finding AND
is equivalent to finding a GCD.
Indeed, that is why I am interested in the problem.

However, I wanted to frame it in terms of a general formula for bitwise AND or OR.
You reminded me that because a real number between 0 and 1 can be
represented with trailing 0's or trailing 1's, the bitwise operators
are not defined precisely.
From there I got a little sidetracked into more specific domains of numbers.

I am still curious about the possibility of representing the bitwise
AND/OR of any number between 0 and 1 using mathematical functions
only, especially the "basic" set of mathematical functions.

For instance, one could still ask if there is a way to represent
(A AND B) or (A OR B) for {A, B between 0 and 1}, assuming A and B are
represented in binary using only trailing 0's.

If one decides beforehand to represent numbers between 0 and 1 in
binary with trailing 0's only, then it seems to me that (A AND B) has
a definite value. How could one calculate that value mathematically
from the two numbers A and B? Or is it not possible?

Again, feel free to post an answer.
If I am still curious about your response, I can open another question
and we can continue to talk about it.

Clarification of Question by cosmosofbits-ga on 20 Aug 2004 08:34 PDT
After reading "How to price your questions" I have decided to reprice
this question.

I would like to pick up from my previous clarification of Aug 20, 7:46 PDT, and
go back to the general case of any number between 0 and 1.

Thank You for your patience.
Subject: Re: Bitwise Operators
Answered By: mathtalk-ga on 22 Aug 2004 20:13 PDT
Rated:5 out of 5 stars
Hi, cosmosofbits-ga:

Here's the approach I propose to your Question.  The "standard"
mathematical functions mentioned:

sin, cos, ln, e^x, ...

are all in a class called elementary transcendental functions.  These
are "elementary" in the sense of being a sort of simplest instance of
"special functions" (which are themselves characterized as solutions
of a restricted class of ordinary differential equations).

To draw kind of a big "lasso" around everything that one might throw
into the recipe, I'm going to consider all of these as analytical
functions of one variable:

[Analytic Functions and Singularities]

An analytic function is a complex-valued function that has a
derivative, which turns out to be a much stronger notion than the
existence of a real-valued derivative.  In fact if a complex-valued
function has one derivative, it has derivatives of all orders (which
clearly is not the case for a real-valued derivative).

So functions like sine and exponential (which turn out to be very
closely related through their complex values) are quite "smooth". 
When they do have places where they fail to be smooth (singularities),
even these turn out to be remarkably well-behaved.  For example, the
singularities of an analytic function are "isolated" (they don't
pile up together).

A good example of a singularity in an analytic function is the
logarithm "blowing up" at zero.

The singularities of rational functions like 1/x at zero turn out to
be so nicely behaved, when viewed in the complex-plane, that they are
called something special ("poles").  Other than the fact that the
rational functions tend to "infinity" at these points, the right
perspective shows this is actually as "smooth" as any other point of
the function.

Singularities like that of the logarithm at zero, which are not
poles, go by other names (branch points, essential singularities), to
emphasize the distinction.

Now strictly speaking you are considering functions of two variables
(bitwise And, bitwise Or) defined on a "dense" subset of the
real interval [0,1], specifically those values which have
"terminating" binary expansions.  Analytic functions of two variables
can be much more "complex" (forgive the pun) than those of one
variable.  To keep things simple, I propose to look at the restriction
of one argument to 1/2 = 0.1000000... (binary).  That will be
enough to exhibit the problem.

We will see that the resulting function And(x,1/2) is much too "rough"
to have an analytical function (of one variable) that matches it.

Properties of And(x,1/2)

As a general proposition, And(x,y) is the same as And(y,x).

Also, since no bits are set in And(x,y) unless they are set in x:

  And(x,y) <= x

By symmetry of course it's true that:

  And(x,y) <= y

Now for the specific case y = 1/2 and 1/2 < x < 1, we have:

  And(x,1/2) = 1/2  for all x in (1/2,1)

In other words the function And(x,1/2) is constant on (1/2,1).

For 0 < x < 1/2, the function And(x,1/2) is a different constant:

  And(x,1/2) = 0    for all x in (0,1/2)

So we have a "jump discontinuity" in And(x,1/2) at x = 1/2.
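The jump can be seen concretely if we fix the trailing-zeros convention and compute And via integer bit patterns; a sketch (bit_and and the 32-place truncation are illustrative choices):

```python
from fractions import Fraction

PLACES = 32  # enough places for the terminating fractions below

def bit_and(x, y):
    """Bitwise And of two terminating binary fractions in [0,1)."""
    xi, yi = int(x * 2**PLACES), int(y * 2**PLACES)
    return Fraction(xi & yi, 2**PLACES)

half = Fraction(1, 2)
assert bit_and(Fraction(3, 4), half) == half   # x in (1/2, 1): constant 1/2
assert bit_and(Fraction(9, 16), half) == half
assert bit_and(Fraction(3, 8), half) == 0      # x in (0, 1/2): constant 0
assert bit_and(Fraction(7, 16), half) == 0
```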

Of course we could give two _separate_ formulas for these two cases,
but what we'll show is that we cannot give a single analytic function
that agrees on both intervals _except_ by splitting their domain of
definition into two cases.

Analytic Continuation

Since an analytic function f(z) has infinitely many derivatives at any
non-singular point z_0, it can be expanded in a power series, i.e. the
Taylor series expansion around that point:

  f(z) = SUM f^{(k)}(z_0)/k! * (z-z_0)^k FOR k = 0,1,2,...

The radius of convergence of such a power series turns out to be the
distance from z_0 to the nearest singularity of f(z), which is a
positive distance away (the isolation of singularities).

The power series uniquely determines the analytic function with those
derivatives in the disk where it converges.

Consequently two analytic functions that agree on an interval must
agree on any disk containing that interval where the corresponding
power series converges.  This is a simple yet powerful uniqueness
property referred to as analytic continuation:

[Analytic Continuation - Mathworld]

The Contradiction

Suppose f(x) = And(x,1/2) were an analytic function.  Now it agrees
with the constant 1/2 on the interval (1/2,1), and the constant
power-series f(x) = 1/2 converges everywhere in the complex plane, so
f(x) must be equal to 1/2 everywhere.  But on the other hand, it also
agrees with the constant 0 on the interval (0,1/2), and the
power-series f(x) = 0 also converges everywhere!  Thus 1/2 = 0, and
this contradiction shows And(x,1/2) cannot be formulated as an
analytic function.  Therefore And(x,y) cannot be an analytic function
of two variables either.

What Else Can We Try?

The fact that we found a jump discontinuity at y = 1/2 is just the tip
of the iceberg.  There are jump discontinuities at every "terminating"
binary fraction y.  So naturally any class of functions that would be
"powerful" enough to represent the bitwise And will need to extend
the "standard" mathematical functions with some that allow jump
discontinuities.

If we were to introduce a "step function" that has a single jump, say:

        /  0 if x < 0
g(x) = <
        \  1 otherwise

then we could write And(x,1/2) = (1/2) * g(x - 1/2).
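A minimal sketch of that single-jump construction:

```python
def g(x):
    """Unit step: 0 for x < 0, 1 otherwise."""
    return 0 if x < 0 else 1

def and_half(x):
    # And(x, 1/2) written as a single formula via the step function
    return 0.5 * g(x - 0.5)

assert and_half(0.75) == 0.5
assert and_half(0.25) == 0.0
```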

However this would only introduce one jump discontinuity, and for all
of And(x,y) we need a countable number of jump discontinuities.  No
finite formula will get us there if we introduce these one jump at a
time.

So the bottom line is that in order to define the bitwise And function
with a "single" formula, we would need a building block that, like the
bitwise And, provides a countable number of discontinuities "at a
single blow".  The standard mathematical functions don't bring us any
closer in that respect.

Please let me know if some part of my discussion needs further Clarification!

regards, mathtalk-ga
cosmosofbits-ga rated this answer:5 out of 5 stars

Subject: Re: Bitwise Operators
From: mathtalk-ga on 19 Aug 2004 22:16 PDT
Hi, cosmosofbits-ga:

Because of uncertainty over what exactly the ... in your description
of allowable mathematical functions covers, I'll post this analysis as
a Comment.

Let A = 1/(2^m - 1) and B = 1/(2^n - 1) where m,n are integers greater than 1.

Then A And B (the bitwise conjunction of their binary expansions) is
again a number of this form:

A And B = 1/(2^k - 1)

where k = LCM(m,n) is the least common multiple of m and n.

To see that this is so, note that the binary expansion of A consists
of 1's every m'th binary place (and zeros elsewhere), and similarly
the binary expansion of B consists of 1's every n'th binary place (and
zeros elsewhere).  The conjunction A And B will then have 1's only in
those positions common to the expansion of both A and B, namely in
every k'th position where k is the least common multiple of m and n.

Additional formulas that are involved are these:

m = log_2(1 + 1/A) = ln(1 + 1/A)/ln(2)

n = log_2(1 + 1/B) = ln(1 + 1/B)/ln(2)

LCM(m,n) = (m*n)/GCD(m,n)

where GCD(m,n) is the greatest common divisor of m and n.

Thus one can write:

A And B = 1/(2^LCM(log_2(1 + 1/A),log_2(1 + 1/B)) - 1)

Once this formula is adopted for And, the related expression follows:

A Or B = A + B - (A And B)
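The formula above can be evaluated exactly with rational arithmetic; a sketch (special_and is an illustrative name; math.lcm requires Python 3.9+):

```python
import math
from fractions import Fraction

def special_and(A, B):
    """A And B for A = 1/(2^m - 1), B = 1/(2^n - 1), via the log/LCM formula."""
    m = round(math.log2(1 + 1 / A))  # recover m from A
    n = round(math.log2(1 + 1 / B))  # recover n from B
    k = math.lcm(m, n)
    return Fraction(1, 2**k - 1)

A, B = Fraction(1, 7), Fraction(1, 15)
assert special_and(A, B) == Fraction(1, 4095)            # LCM(3,4) = 12
assert A + B - special_and(A, B) == Fraction(857, 4095)  # A Or B
```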

It will be seen from this discussion that the computation of the
bitwise And for the restricted pairs of numbers of the form 1/(2^m -
1) and 1/(2^n - 1) is equivalent (modulo the use of logarithmic and
exponential functions) to the computation of the least common multiple
of m and n (or equivalently, per a formula shown above, the greatest
common divisor of m and n).

So, at a minimum we can say that expressing the bitwise And and Or of
these numbers is equivalent to the ability to evaluate the greatest
common divisor of two positive integers.  [A fairly efficient
algorithm for doing this is known as the Euclidean algorithm because
of a clear description of it in Euclid's Elements.]
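The Euclidean algorithm mentioned is short enough to state; a standard sketch:

```python
def gcd(a, b):
    """Greatest common divisor by the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # LCM(m,n) = (m*n)/GCD(m,n), as in the formula above
    return a * b // gcd(a, b)

assert gcd(12, 18) == 6
assert lcm(3, 4) == 12
```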

Perhaps that is a sufficient answer for some.  Given that you have
struggled with this problem for some time, I'm inclined to think that
you will still be dissatisfied with this.

A precise negative result (that it cannot be done) would require at
least a rigorous definition of what functions are permitted (and by
extension, which are disallowed).  Also, there is considerably more
difficulty in the theory of functions of two variables than in the
theory of functions of one variable.  To give a simple illustration, a
polynomial of one variable can have at most finitely many isolated
roots.  But polynomials of two variables define "algebraic curves",
the theory of which has occupied mathematicians since the dark ages.

regards, mathtalk-ga
