Digital Binary Computation with Fingers and Circuits

CS 301 Lecture, Dr. Lawlor

So here's normal, base-1 counting on your fingers.  In base 1, you just raise the number of fingers equal to the value you're trying to represent:
  • To represent two, raise two fingers.
  • To represent six, raise six fingers.
  • To represent 67, grow more fingers.

Base-1 computation on a human hand: all fingers count as 1 unit
This is funky base-2 counting on your fingers. Each finger represents a different value now, so you have to start counting with '1' at your pinky, then '2' with just your ring finger, and '3=2+1' is pinky and ring finger together. '4' is a single raised middle finger. Then '5=4+1' is middle finger and pinky, and so on. Just 10 digits actually allows you to count all the way to 1023, but we'll ignore the thumbs and just use 8 fingers, to count up to 255=128+64+32+16 (left hand palm-up, pinky is 16) +8+4+2+1 (right hand palm-down, pinky is 1).
  • To represent one, raise the 1 finger.
  • To represent three, raise the 2 and 1 fingers together.
  • To represent ten, raise the 8 and 2 fingers together.
  • To represent twenty, raise the 16 (left pinky) and 4 fingers.
  • To represent 67, raise the 64 (left middle finger), 2, and 1 fingers.

This is actually somewhat useful for counting--try it!

(Note: the numbers four, sixty-four, and especially sixty-eight should not be prominently displayed.  Digital binary counting is not recommended in a gang-infested area.)
Base-2 computation on a human hand: finger values are 8, 4, 2, and 1
Counting on your fingers is "digital" computation--it uses your digits!

Digital vs Analog

Counting on your fingers also uses "digits" in the computational sense; digital storage uses discrete values, like fingers, which are either up or down.  A 25%-raised pinky finger does not represent one-quarter, it represents zero!  This means your fingers can bounce around a bit, and you can still tell which number you're on.  Lots of systems are digital.
The other big way to represent values is in analog.  Analog allows continuous variation, which initially sounds a lot better than the jumpy lumpy digital approach.  For example, you could represent the total weight of sand in a pile by raising your pinky by 10% per pound.  So 8.7 pounds of sand would just be an 87% raised pinky.  8.6548 pounds of sand would be an 86.548% raised pinky.  Lots of systems are also analog.
Note that in theory, one pinky can represent weight with any desired degree of precision, but in practice, there's no way to hold your pinky that steady, or to read off the pinky-height that accurately.  Sadly, it's not much easier to build a precise circuit than it is to build a precise pinky.

In other words, the problem with analog systems is that they are precision-limited.  To store a more precise weight, your storage device must be made more precise.  Precision stinks.  The real world is messy, and that messiness screws up electrical circuits like it screws up everything else (ever hear of clogged pipes, fuel injectors, or arteries?).  Messiness includes noise and the gross term "nonlinearity", which  just means input-vs-output is not a straight line--more on that later!

Indeed, it's always possible for us to make our systems more precise.  The only problem is cost.  For example, here's a review of some excellent, shielded, quality AC power cables for audio equipment.  These cables supposedly pick up less noise than an ordinary 50-cent AC plug.  But the price tag starts at $2500 for a 3-foot length!

Luckily, digital systems have far fewer noise problems.  To gain precision in a digital system, you don't have to make your digits better, you just add more digits.  This quantity-instead-of-quality approach seems to be the dominant way we build hardware today.

How many levels?

OK.  So digital computation divides the analog world into discrete levels, which gives you noise immunity, which lets you build more capable hardware for less money.  The question still remains: how many of these discrete levels should we choose to use?
Two levels is the cheapest, crappiest system you can choose that will still get something done.  Hence, clearly, it will invariably be the most popular!

For a visual example of this, here's a little TTL inverter-via-NAND circuit:
trivial TTL 7400 circuit

Here's the chip's input-vs-output voltage curve, measured using the "Input" and "Output" wires shown above.
oscilloscope trace, TTL 7400 chip

The Y axis is voltage, with zero through five volts shown.  The X axis is time, as the input voltage sweeps along.  Two curves are shown: the straight line is the input voltage, smoothly increasing from slightly negative to close to 5 volts.  The "output" curve is high, over 4v, for input voltages below 1v; then drops to near 0v output for input voltages above 1.3v.  High voltage is a "1"; low voltage is a "0".  So this circuit "inverts" a signal, flipping zero to one and vice versa.  Here's the digital input vs output for this chip:
  • Input 0 → Output 1
  • Input 1 → Output 0

Here's the trace of another chip of the same type.  Note the curve isn't exactly the same!
oscilloscope trace, 7400 chip

These two chips don't behave in exactly the same way, at least seen from the analog perspective of "how many volts is it sending out?".  But seen from the digital perspective, 0 or 1, they're identical!

Note in both cases the input-vs-output curve is highly nonlinear--the output isn't simply proportional to the input.  If you're designing an analog circuit, nonlinearity is "bad", because it means the output isn't a precise duplicate of the input, and the circuit has lost signal quality (think of the rattling bass thumping down the street!).

Digital Logic Operations

OK, so you're convinced that using 0's and 1's is a good idea.  We've seen how to represent numbers with 0's and 1's (digital binary counting).  What else can you do?  Well, AND, OR, NOT are all "logic operations", which in C/C++/C#/Java can be accessed via "bitwise operators" like &, |, and ~.  Here's what they do:

  • NOT, written ~ in C/C++.  Output is inverse of input.  Useful for building bit masks (~0 is all ones!), turning bits backwards, and computing negative values.
  • AND, written &.  Output is 1 only if both inputs are 1.  Useful for zeroing out unwanted parts of an input ("masking")--one-bit multiplication!
  • OR, written |.  Output is 1 if either input is 1.  Useful for sticking together parts of input data.
  • XOR, written ^.  Output is 1 if either input is 1, but not both!  Useful for controlled bitwise inversion: x^y inverts bits of y where x is 1, but leaves them alone where x is 0.

You can try all of these out in C/C++ right now in NetRun! (executable NetRun link)