# Swapping Operations for Speed: Strength Reduction

CS 301 Lecture, Dr. Lawlor

Not every machine language instruction runs at the same speed.  "Heavy" operations, like memory accesses or divide instructions, run substantially slower than "light" operations like addition or a register move.
This program measures the time for various primitive operations.
```cpp
#include <fstream>
#include <vector>   // for std::vector (missing in the original listing)
#include <cmath>    // for sqrt, cos
#include <cstdlib>  // for malloc, free

/* Helpers provided by the course timing framework (definitions not shown): */
double time_in_seconds(void);
void print_time(const char *name, int (*fn)(void));

int a=10,b=11;
double x=2.3, y=3.4;
int array[100];

int fn_constant(void) {return 0;}
int fn_mov(void) {return a;}
int fn_index(void) {return array[a];}
int fn_add(void) {return a+b;}
int fn_sub(void) {return a-b;}
int fn_not(void) {return ~a;}
int fn_and(void) {return a&b;}
int fn_or(void) {return a|b;}
int fn_xor(void) {return a^b;}
int fn_shl(void) {return a<<b;}
int fn_shr(void) {return a>>b;}
int fn_addshr(void) {return (a+b)>>b;}
int fn_mul(void) {return a*b;}
int fn_mad(void) {return a*b+a;}
int fn_div(void) {return a/b;}
int fn_call(void) {return fn_constant();}
int fn_fmov(void) {return (int)(x);}
int fn_fadd(void) {return (int)(x+y);}
int fn_fmul(void) {return (int)(x*y);}
int fn_fmad(void) {return (int)(x*y+x);}
int fn_fdiv(void) {return (int)(x/y);}
int fn_sqrt(void) {return (int)sqrt(x);}
int fn_cos(void) {return (int)cos(x);}
int fn_malloc(void) {free(malloc(4)); return 0;}
int fn_new(void) {delete new int; return 0;}
int fn_vector(void) {std::vector<int> v(a,3); return 0;}
int fn_timer(void) {return (int)time_in_seconds();}
int fn_ofstream(void) {std::ofstream f("foo.dat"); return 0;}

int foo(void) {
	print_time("constant",fn_constant);
	print_time("index",fn_index);
	print_time("mov",fn_mov);
	print_time("add",fn_add);
	print_time("sub",fn_sub);
	print_time("not",fn_not);
	print_time("and",fn_and);
	print_time("or",fn_or);
	print_time("xor",fn_xor);
	print_time("shl",fn_shl);
	print_time("shr",fn_shr);
	print_time("addshr",fn_addshr);
	print_time("mul",fn_mul);
	print_time("mad",fn_mad);
	print_time("div",fn_div);
	print_time("fmov",fn_fmov);
	print_time("fadd",fn_fadd);
	print_time("fmul",fn_fmul);
	print_time("fmad",fn_fmad);
	print_time("fdiv",fn_fdiv);
	print_time("call",fn_call);
	print_time("sqrt",fn_sqrt);
	print_time("cos",fn_cos);
	print_time("malloc",fn_malloc);
	print_time("new",fn_new);
	print_time("vector",fn_vector);
	print_time("timer",fn_timer);
	print_time("ofstream",fn_ofstream);
	return 0;
}
```
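The helpers `time_in_seconds` and `print_time` aren't shown above.  A minimal sketch of what they might look like (this is an illustration, not the course's actual implementation) is: call the function many times and divide the elapsed wall-clock time by the repetition count, so the timer's own overhead is amortized away.

```cpp
#include <cstdio>
#include <chrono>

// Wall-clock time in seconds, as a double.
double time_in_seconds(void) {
    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}

// Call fn many times and report the average cost per call, in nanoseconds.
void print_time(const char *name, int (*fn)(void)) {
    const int n = 1000000;              // repetitions, to amortize timer overhead
    double start = time_in_seconds();
    for (int i = 0; i < n; i++) fn();
    double per_call = (time_in_seconds() - start) / n;
    printf("%s: %.2f ns/call\n", name, per_call * 1.0e9);
}
```

A real harness would also keep the return value live (e.g., accumulate it into a `volatile` sink) so an aggressive optimizer can't delete the loop entirely.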

Here's the output of this program on various machines.  Times are reported in nanoseconds, billionths of a second (1 ns = 10^-9 seconds).
Machines tested:
• x86-64: Intel Q6600, 2.4GHz
• x86-32: Intel Pentium 4, 2.8GHz
• PowerPC: G5, 2GHz
• ia64: Itanium, 800MHz
• PowerPC: 604e, 200MHz

All times in ns/call:

| Operation | Q6600 2.4GHz | P4 2.8GHz | G5 2GHz | Itanium 800MHz | 604e 200MHz |
|-----------|-------------:|----------:|--------:|---------------:|------------:|
| constant  | 2.92 | 5.33 | 23.83 | 45.14 | 100.11 |
| index     | 3.34 | 5.17 | 26.88 | 48.92 | 115.05 |
| mov       | 3.33 | 5.09 | 16.89 | 46.42 | 100.11 |
| add       | 3.34 | 5.16 | 19.02 | 45.16 | 105.12 |
| sub       | 3.33 | 5.19 | 17.02 | 45.15 | 105.12 |
| not       | 2.92 | 5.12 | 17.09 | 55.17 | 105.13 |
| and       | 3.34 | 5.16 | 18.11 | 45.15 | 105.13 |
| or        | 3.33 | 5.11 | 19.19 | 45.17 | 105.10 |
| xor       | 3.33 | 5.16 | 18.35 | 45.14 | 105.12 |
| shl       | 3.34 | 5.34 | 19.09 | 45.14 | 105.12 |
| shr       | 3.33 | 5.12 | 18.08 | 48.90 | 105.12 |
| addshr    | 3.34 | 5.05 | 19.08 | 55.18 | 105.10 |
| mul       | 3.34 | 5.13 | 18.77 | 65.22 | 105.15 |
| mad       | 3.33 | 5.56 | 17.52 | 66.47 | 110.05 |
| div       | 5.00 | 16.78 | 20.85 | 133.09 | 195.22 |
| fmov      | 2.92 | 5.14 | 27.22 | 61.46 | 155.09 |
| fadd      | 3.33 | 8.99 | 17.61 | 76.61 | 175.23 |
| fmul      | 3.34 | 9.33 | 23.83 | 68.96 | 175.11 |
| fmad      | 3.33 | 10.92 | 31.10 | 68.99 | 175.20 |
| fdiv      | 12.92 | 16.52 | 20.87 | 132.03 | 320.56 |
| call      | 5.00 | 9.42 | 19.51 | 54.99 | 165.10 |
| sqrt      | 24.18 | 17.28 | 35.22 | 141.82 | 781.37 |
| cos       | 31.69 | 70.73 | 72.50 | 156.80 | 910.89 |
| malloc    | 36.27 | 63.86 | 169.44 | 421.63 | 1120.97 |
| new       | 41.69 | 83.28 | 186.45 | 441.96 | 1245.60 |
| vector    | 54.81 | 161.34 | 267.62 | 371.20 | 617.67 |
| timer     | 70.47 | 814.82 | 134.22 | 990.96 | 2903.34 |
| ofstream  | 5760.81 | 13558.66 | 28818.38 | 27243.05 | 27621.16 |
Here's what we can see generally:
• Almost every integer arithmetic operation (+-*&|^~<<>>) takes almost exactly the same amount of time--under half a nanosecond on typical x86 machines!  (The ~3 ns/call figures above are mostly function-call overhead; compare against the "constant" baseline to see the cost of the operation itself.)  In fact, it's difficult to measure the cost of a single arithmetic operation, because the CPU can often overlap simple arithmetic with other instructions.
• The classic case of "strength reduction" was replacing "2*x" with "x+x".  When mul was slow and add was fast, this helped a lot.  Those days seem to be over!
• The one exception is that divides (as well as mod) are pretty slow--over 10ns!  This is way more than any other integer arithmetic operation.
• Array indexing is often very fast, under half a nanosecond.
• Calling a function is pretty slow; typically costing several nanoseconds.
• Especially on older machines, floating point is a little more expensive than integer, a few nanoseconds per operation.  Floating point divides and square roots are expensive, just like integer.  Floating-point cosine is even more expensive--70ns!
• Memory allocation, whether via malloc/free, new/delete, or std::vector, is very expensive--around 100ns!
• Timer (and other operating system) calls are yet more expensive--800ns!
• File operations are insanely expensive--tens of thousands of nanoseconds!
• With slower-clock-rate CPUs, times of most operations scale up close to linearly (10x slower clock rate -> about 10x longer time).
In general, you can often replace expensive operations with cheaper operations.  For example, instead of doing a divide (10ns), you can do an integer bit shift (<1ns) or a floating-point multiply by reciprocal (x/y = x*(1.0/y)).  Instead of making a function call, use "inline" to have the compiler insert the machine code into the caller. Instead of calling "cos", you might eliminate the operation with an algebraic substitution (there are hundreds of useful trig identities), or replace it with a table lookup (since array indexing is very fast).  Instead of calling new/delete every iteration, you might reuse the allocated buffer and postpone the delete to the end of the program.  And anything you can do to avoid opening and closing files repeatedly will dramatically improve performance!
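A few of these replacements, sketched in C++ (the function names here are illustrative, not from the lecture's benchmark):

```cpp
#include <cmath>

// Divide by a constant power of two -> bit shift.  (Unsigned values only;
// for negative signed values, a shift rounds differently than a divide.)
unsigned div8(unsigned x) { return x >> 3; }         // same result as x/8

// Repeated floating-point divides -> one divide plus cheap multiplies.
void scale(double *data, int n, double y) {
    double inv = 1.0 / y;                            // one slow divide...
    for (int i = 0; i < n; i++) data[i] *= inv;      // ...then fast multiplies
}

// Expensive cos() in an inner loop -> fast (but approximate) table lookup.
const double PI = 3.14159265358979323846;
const int TABLE = 1024;
static double cos_table[TABLE];

void init_cos_table(void) {                          // pay for cos() once
    for (int i = 0; i < TABLE; i++)
        cos_table[i] = cos(i * 2.0 * PI / TABLE);
}
double fast_cos(double radians) {                    // valid for [0, 2*PI)
    int i = (int)(radians * (TABLE / (2.0 * PI)));
    return cos_table[i % TABLE];                     // array indexing is cheap
}
```

The table-lookup version trades accuracy for speed; whether a 1024-entry table is precise enough depends entirely on the application.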

The compiler is a master of this sort of operation replacement.  For example, if you divide by a constant 8, the compiler will put in a bit shift:
```cpp
unsigned int a=read_input();  /* read_input defined elsewhere */
const unsigned int b=8;
int foo(void) {
	return a/b;
}
```

Giving this assembly for foo:

```
mov    eax,a
shr    eax,0x3
ret
```
If you divide by a constant 10, the compiler puts in an amazing fixed-point multiply-by-reciprocal, giving this assembly:
```
mov    edx,0xcccccccd
mov    eax,a
mul    edx
mov    eax,edx
shr    eax,0x3
ret
```
What's happening here?
```
a/b = a*(1/b)
    = ( a*(2^35/b) ) / 2^35
    = ( a*(2^35/b) ) >> 35
    = ( a*0xcccccccd ) >> 32 >> 3
```
"mul" puts the high 32 bits of the 64-bit product into edx.  By pulling out only the high bits, we in effect right-shift by 32.  We then need to right-shift by 3 more bits to make up for the 2^35 scale factor.

Rounded up to an integer, (2^35/10) == 0xcccccccd.  Doesn't the roundoff cause problems?  In general, yes, but here we just want an integer output, and with the 2^35 scale factor, every 32-bit integer input gets the right output from this "divide"!

However, keep in mind that the compiler can only pull this trick *if* it knows that you're dividing by 10.  If you divide by some unknown value, the compiler can't help but put in a general "div" instruction.
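You can verify the compiler's trick by hand, using 64-bit arithmetic to stand in for the mul instruction's double-width product (this check is my illustration, not from the lecture):

```cpp
#include <cstdint>

// Divide an unsigned 32-bit value by 10 without a div instruction:
// multiply by the fixed-point reciprocal 0xCCCCCCCD (= 2^35/10, rounded up),
// then shift the 64-bit product right by 35 (take the high word, then >>3).
uint32_t div10(uint32_t a) {
    uint64_t product = (uint64_t)a * 0xCCCCCCCDu;   // 64-bit "mul"
    return (uint32_t)(product >> 35);               // >>32, then >>3
}
```

The rounding *up* matters: with the truncated constant 0xCCCCCCCC, div10(10) would come out as 0 instead of 1.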

The other thing to keep in mind is that this technique depends on the vagaries of the current circuit implementation of divide.  Different generations of CPUs have different implementations, and while everything has been getting faster, the slow operations have been catching up with the fast ones.  Once, "mul" was considered slow, but now it's just as fast as add for most purposes.  Today, "div" is slow, but this may change in the future.  Benchmark!