Implementing Addition, Multiplication, and so on

CS 301 Lecture, Dr. Lawlor

Arithmetic In Binary

We can do arithmetic in binary or hex by hand just like we do in decimal.  To add, line up the digits, add corresponding digits, and if the per-digit sum is greater than or equal to the base, carry a one into the next digit.  Easy, right?  To subtract, do the exact same thing, but "borrow" from the next digit if the per-digit difference is less than 0.

For example, in binary 01+01=10 because the first digit overflows, and the extra "1" is carried to the next digit.  Similarly, 1111+1=10000 because all the digits overflow in sequence.  In general, adding a "1" will cause carries to ripple up through ones, flipping them to zeros, until it finally reaches a zero, which it will flip to a one.

Addition in hex is exactly the same--it's a bit trickier to add the larger digits involved, but carries work exactly the same.  So 0x8+0x8=0x10, and 0xffff+0x1=0x10000.

Subtraction in binary seems really bizarre until you remember that 10-1=1 in binary--that is, "1" is the one-less-than-zero digit in binary, just like "9" is the corresponding one-below-zero digit in decimal.  So when you have to borrow, you just flip the zeros to ones until you hit a one, which you flip to a zero.
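
You can check this by-hand arithmetic from code.  Here's a minimal C++ sketch that just prints the examples above, using std::bitset to show the binary (the 0b binary literals need C++14 or a GCC extension):

    #include <iostream>
    #include <bitset>

    int main(void) {
        /* Carries ripple up through the ones: 0111 + 1 = 1000 */
        std::cout<<std::bitset<8>(0b0111 + 1)<<"\n";  /* prints 00001000 */
        /* All the digits overflow in sequence: 1111 + 1 = 10000 */
        std::cout<<std::bitset<8>(0b1111 + 1)<<"\n";  /* prints 00010000 */
        /* Borrows flip zeros to ones until they hit a one: 1000 - 1 = 0111 */
        std::cout<<std::bitset<8>(0b1000 - 1)<<"\n";  /* prints 00000111 */
        /* Hex carries work the same way: 0xffff + 0x1 = 0x10000 */
        std::cout<<std::hex<<(0xffff + 0x1)<<"\n";    /* prints 10000 */
        return 0;
    }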

Multi-Precision Arithmetic

You can do register-register addition in a single instruction (add).  But if you want to add numbers bigger than the registers, you need to break up the addition into register-sized pieces, ordered from lowest value to highest, and make sure the carry bits actually pass between the pieces.  There's a handy instruction called 'adc' (add with carry) that looks just like add, but it also adds in the (semi-secret) 'carry flag' bit from the flags register.  To add a two-register multiprecision number, then, you'd write

    add loA,loB   ; low pieces: sets the carry flag if the sum overflows
    adc hiA,hiB   ; high pieces: adds in that carry

This way, the carry bit coming out of the low addition would be read by the higher addition.
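
In plain C++ you can't read the carry flag directly, but you can emulate the same lowest-piece-first addition by detecting the wraparound yourself--roughly what adc gives you in one instruction.  A sketch using two 64-bit pieces (the names uint128_pieces and add128 are just made up for illustration):

    #include <iostream>
    #include <cstdint>

    /* A 128-bit number stored as two register-sized (64-bit) pieces. */
    struct uint128_pieces { uint64_t lo, hi; };

    uint128_pieces add128(uint128_pieces a, uint128_pieces b) {
        uint128_pieces sum;
        sum.lo = a.lo + b.lo;              /* like "add": may wrap around */
        uint64_t carry = (sum.lo < a.lo);  /* wrapped?  then a carry came out */
        sum.hi = a.hi + b.hi + carry;      /* like "adc": add in that carry */
        return sum;
    }

    int main(void) {
        uint128_pieces a = {0xffffffffffffffffULL, 0};  /* 2^64 - 1 */
        uint128_pieces b = {1, 0};
        uint128_pieces s = add128(a, b);                /* should be 2^64 */
        std::cout<<std::hex<<"hi="<<s.hi<<"  lo="<<s.lo<<"\n";
        return 0;
    }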

To do this "for real", there are several excellent libraries, such as GMP.

Fast Exponentiation Trick

Say you want to compute x to the power y.  This is just x*x*x*...*x, with a total of y copies of x.

The obvious way to do this is really quite slow for big values of y:
    int prod=1;
    for (int i=1;i<=y;i++) prod*=x;
    return prod;

But it's *way* faster to compute x raised to the powers of two by repeated squaring, then combine those powers of two to get x to the y.  For example, you can compute x to the 16th by squaring x four times, like so:
    int x2=x*x;
    int x4=x2*x2;
    int x8=x4*x4;
    int x16=x8*x8;

Harkening back to our bitwise operators, we can decompose y into powers of two by looking at its bits, and multiply together the corresponding powers of x:
    /* Return x raised to the power y */
    double mypow(double x,int y)
    {
        double prod=1; /* will hold x to the y power */
        double xpow=x; /* will take powers of x */
        for (unsigned int bit=0;bit<sizeof(int)*8;bit++) {
            int mask=(1<<bit);
            if (y&mask) prod=prod*xpow; /* include this power of x */
            if (y<mask) break; /* no higher powers of x included in y */
            xpow=xpow*xpow; /* find next higher square */
        }
        return prod;
    }

(Try this in NetRun now!)


For example, raising 2 to the 100th power takes only 8 trips through the loop with this "fast exponentiation" method, but 100 trips with slow exponentiation.

The fast exponentiation trick applies to all sorts of stuff--and there's an exact analog for multiplication!
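
For example, to multiply x by y you can repeatedly double x (instead of squaring it) and add in the doublings that match the one bits of y--the same bit-by-bit loop as mypow, with multiplication swapped for addition.  A sketch (mymul is just an illustrative name):

    /* Return x times y using only additions and doublings--
       the multiplication analog of fast exponentiation. */
    unsigned long mymul(unsigned long x, unsigned long y)
    {
        unsigned long sum = 0;    /* will hold x*y, like prod above */
        unsigned long xdbl = x;   /* successive doublings of x: x, 2x, 4x, ... */
        for (unsigned int bit = 0; bit < sizeof(unsigned long)*8; bit++) {
            unsigned long mask = (1UL << bit);
            if (y & mask) sum = sum + xdbl;  /* include this doubling of x */
            if (y < mask) break;             /* no higher bits of y are set */
            xdbl = xdbl + xdbl;              /* next doubling of x */
        }
        return sum;
    }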