Notes on integer-exact computations on the graphics card.

I'm uploading a 1D GL_LUMINANCE8 texture as my basic
lookup table.  Drawing it normally and doing a glReadPixels
gives back the *exact* same integer pixel values, which is nice.

So ordinary draws and reads compute the totally sensible:
	int ordinary(int in) {return in;}

Scaling by 0.5 is a bit more interesting-- I get back:
	00 01 01 02 02 
and by 0.25 I get:
	00 00 01 01 01 01 02
so the graphics card seems to be computing round-half-up:
	int scale(int in, double by) {
		double v = in*by;
		int l = floor(v);
		if ((v-l) >= 0.5) return l+1; /* round half up */
		else return l; /* round down */
	}

Similarly, a fragment program that just does a texture lookup
and returns the color as x*1+0 is an exact pass-through.  Scaling
behaves the same as in the fixed-function pipeline.  Adding the
value to itself or multiplying by 2 also gives exact, integer results.

Bottom line: the graphics card numerics seem stable and sensible--
at least good enough to do pixel processing.

Unknowns include the scale factor for mipmapping (where does this
come from?) and getting pixel-exact results from GL_LINEAR.
