You may know that binary codes are a base-2 encoding of numbers. This
particular way of carrying and manipulating numbers can be done with
simple on and off (1 and 0, voltage or no voltage) representations.
That makes it very powerful and easy to implement with little or no
error.
When a device generates binary input, the digits that make up the
number are interpreted all at once. Suppose some device, say... a
binary position encoder, were to move slowly (picture a volume knob on
a purely digital media device), but one of the digits were interpreted
mere nanoseconds before another, either because of the digital input
mechanism or because the digits on the knob (position encoder) change
at very slightly different times -- well, then the possibility exists
for the resulting number to be temporarily a wildly different value.
Here's an example:
Say the encoder was currently outputting this value:
0 0 1 1 = 3 decimal
and it changed from 3 to 4, but the third digit (counting from the
right) changed to one, and the first changed to zero, before the
second changed to zero...
0 1 1 0 = 6
.. then it settled to the correct value
0 1 0 0 = 4
Because the binary encoding can change more than one digit at the same
time, the intermediate result can be incorrect by a fairly large
amount. The example shown above is only wrong by 2.
But, try this one:
0 1 1 1 = 7
changing to 1 0 0 0 = 8 flips all four digits. Intermediate value?
1 1 1 1 = 15 OUCH!
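To see just how bad a sampled value can be, here is a small sketch
(the helper name intermediate_readings is my own) that lists every
value the input could momentarily show during a transition, assuming
the changing bits settle one at a time in an arbitrary order:

```python
from itertools import combinations

def intermediate_readings(start, end, width=4):
    """Every value a width-bit input could momentarily show while
    changing from start to end, if the differing bits settle one at
    a time in an arbitrary order."""
    changing = [1 << i for i in range(width) if (start ^ end) & (1 << i)]
    readings = set()
    # At the moment of sampling, any nonempty proper subset of the
    # changing bits may already have flipped.
    for r in range(1, len(changing)):
        for subset in combinations(changing, r):
            value = start
            for bit in subset:
                value ^= bit
            readings.add(value)
    return readings

print(sorted(intermediate_readings(3, 4)))  # every value 0-7 except 3 and 4
print(sorted(intermediate_readings(7, 8)))  # every value 0-15 except 7 and 8
```

For the 7-to-8 transition, where all four bits change, the momentary
reading can be literally anything except the two correct endpoints.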
Notice that some sequential numbers change only one digit at a time
but others change more. Here's a table:
Starting value   Bits changing    Value mistake
(binary)         to next value    from intermediate
0 0 0 0                1                 0
0 0 0 1                2                 1
0 0 1 0                1                 0
0 0 1 1                3                 3
0 1 0 0                1                 0
0 1 0 1                2                 1
0 1 1 0                1                 0
0 1 1 1                4                 7
1 0 0 0                1                 0
1 0 0 1                2                 1
1 0 1 0                1                 0
1 0 1 1                3                 3
1 1 0 0                1                 0
1 1 0 1                2                 1
1 1 1 0                1                 0
1 1 1 1                4                 7
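The middle column of the table can be reproduced with a one-liner:
the bits that flip between n and n+1 are exactly the set bits of
n XOR (n+1). A quick sketch (the helper name bits_changing is my
own, and I assume the counter wraps at the top, as a rotating
encoder does):

```python
def bits_changing(n, width=4):
    """How many bits flip going from n to n+1 on a width-bit
    counter that wraps at the top."""
    nxt = (n + 1) % (1 << width)
    return bin(n ^ nxt).count("1")

for n in range(16):
    print(f"{n:04b} -> {bits_changing(n)} bit(s) change")
```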
Wow. Actually, I showed the worst case. When more than one bit
changes, especially more than two, you can't tell which bit will
change at which time or how likely a mistaken reading is to be caught,
but the field that studies this sort of problem (called asynchronous
circuit theory) says that in most systems you will ALWAYS see this
sort of mistake, and at some point the worst case.
OK, well what if EVERY transition could be encoded somehow so that
only ONE digit changed between any two sequential numbers?? That
would be GREAT for our problem! What about this?? (I used only a
three-bit code because I'm lazy)
Binary Gray Code
0 0 0 becomes 0 0 0
0 0 1 becomes 0 0 1
0 1 0 becomes 0 1 1
0 1 1 becomes 0 1 0
1 0 0 becomes 1 1 0
1 0 1 becomes 1 1 1
1 1 0 becomes 1 0 1
1 1 1 becomes 1 0 0
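The table above is the standard reflected (binary-to-Gray) code, and
the conversion has a well-known one-line form: XOR the value with
itself shifted right one bit. A short sketch (function names are my
own) that reproduces the table and undoes the conversion:

```python
def binary_to_gray(n):
    # Standard reflected Gray code: XOR with a one-bit right shift.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Undo the conversion by folding the shifted bits back in.
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

for b in range(8):
    print(f"{b:03b} becomes {binary_to_gray(b):03b}")
```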
Notice how each change is only one bit, and thus can be interpreted
correctly at the moment of transition. Even if the value is sampled
mid-change, you read either the old code or the new one, so the
reading is off by at most one position.
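That single-bit property is easy to check mechanically; this sketch
verifies it for the 3-bit code above, including the wrap from the
last code (100) back to the first (000):

```python
def binary_to_gray(n):
    return n ^ (n >> 1)

# Consecutive Gray codes differ in exactly one bit, even across
# the wrap-around, so a rotating encoder never has a bad moment.
for b in range(8):
    step = binary_to_gray(b) ^ binary_to_gray((b + 1) % 8)
    assert bin(step).count("1") == 1
print("every transition changes exactly one bit")
```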
This code has many uses beyond real-world asynchronous digital
inputs, extending well into mathematics and coding theory. But
hopefully it's easier now to tell what the codes are for.
David Smith
Austin, Texas