General-Purpose Graphical Processing Units

or, Solving Non-Graphics Problems on Graphics Cards

CS 641 Lecture, Dr. Lawlor

So, graphics cards.  As of about five years ago, they became fully-programmable, full-fledged computers in their own right.
Graphics cards are now several times faster than the CPU.  How do they achieve this speed? 

It's not because graphics card designers are better paid or smarter than CPU designers, or because the graphics industry is so much bigger:
The difference is that graphics cards run "pixel programs"--a sequence of instructions to calculate the color of one pixel.  The programs for two adjacent pixels cannot interact with one another, which means all the pixel programs are independent of each other.  This implies all the pixels can be rendered in parallel, with no waiting or synchronization between pixels.

Read that again.  That means graphics cards execute a parallel programming language.

Parallelism theoretically allows you to get lots of computing done at a very low cost.  For example, say you've got a 1000x1000 pixel image.  That's a million pixels.  If you can build a circuit to do one floating-point operation on those pixels in 1ns (one billionth of a second, a typical flop speed nowadays), and you can fit a million of those circuits on one chip (this is the part that can't be done at the moment), you've just built a 1,000 teraflop computer.  That's three times faster than the fastest computer in the world, the $100 million, 128,000-way parallel Blue Gene.  We're not there yet, because we can't fit that much floating-point circuitry on one chip, but this is the advantage of parallel execution.
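
Written out, that's:
     1 flop/circuit/ns * 1 million circuits = 10^6 * 10^9 flops/second = 10^15 flops/second (1,000 teraflops)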

As of 2006, the fastest graphics card on the market renders at least 32 pixels simultaneously.  Values stored at each pixel consist of four 32-bit IEEE floating-point numbers.  This means every clock cycle, the cards are operating on 128 floats at once.  The "LRP" instruction does about 3 flops per float, and executes in a single clock.  At a leisurely 1GHz, the $500 32-pipe nVidia GeForce 8800 thus would do at least:
     3 flops/float*4 floats/pixel*32 pixels/clock*1 Gclocks/second=384 billion flops/second (384 gigaflops)

Recall that a regular FPU only handles one or two (with superscalar execution) floats at a time, and the SSE/AltiVec extensions only handle four floats at a time.  Even with SSE, the Pentium 4's theoretical peak performance is about 15 gigaflops, but I can't get more than about 3 gigaflops out of it doing any real work.
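
That Pentium 4 peak presumably comes from arithmetic along the same lines as the graphics-card estimate above; the ~3.8GHz clock and one 4-wide SSE operation per clock are assumptions here:
     4 flops/SSE op * 1 SSE op/clock * ~3.8 Gclocks/second ≈ 15 billion flops/second (15 gigaflops)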

Graphics Card Programming (for graphics)

Back in 2002, if you wanted to write a "pixel program" to run on the graphics card, you had to write nasty, unportable, and very low-level code that only worked with one manufacturer's cards.  Today, you can write code using the (C++-like) OpenGL Shading Language (GLSL), and run the exact same code on your ATI and nVidia cards, on your Windows machine, Linux box, or Mac OS machine. 

There are several such high-level languages available today; GLSL is the one we'll use here.  Here's a very simple GLSL program.  This is the main function that runs for each pixel.  It's got one output, the color of the pixel ("gl_FragColor" in GLSL).  The datatype of a color is "vec4", which consists of four floats: red, green, blue, and alpha (transparency).  So this code renders every pixel red:
void main(void) {
	gl_FragColor=vec4(1,0,0,1); /* full red, no green or blue, full alpha */
}

(Try this in NetRun now!)

Here's a program that renders each pixel in a color that corresponds to its location onscreen (its "texture coordinate").
varying vec2 texcoords; /* this pixel's onscreen location, set up by the calling framework */
void main(void) {
	gl_FragColor=vec4(texcoords.x,texcoords.y,0,0);
}

(Try this in NetRun now!)

Here's a more complicated GLSL program that uses each pixel's texture coordinates to set up a Mandelbrot Set iteration.
varying vec2 texcoords; /* this pixel's onscreen location, set up by the calling framework */
void main(void) {
	vec2 c=vec2(3.0,2.0)*(texcoords-0.5)+vec2(0.0,0.0); /* constant c, varies onscreen */
	vec2 z=c;
	/* Mandelbrot iteration: keep iterating until z gets big */
	for (int i=0;i<15;i++) {
		/* break if |z| >= 2.0 (i.e., |z|^2 >= 4.0) */
		if (z.r*z.r+z.g*z.g>=4.0) break;
		/* z = z^2 + c; (where z and c are complex numbers) */
		z=vec2(
			z.r*z.r-z.g*z.g,
			2.0*z.r*z.g
		)+c;
	}
	gl_FragColor=fract(vec4(z.r,z.g,0.25*length(z),0));
}

(Try this in NetRun now!)
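
For reference, the complex arithmetic inside the loop: writing z = x + iy, squaring gives
     z^2 = (x^2 - y^2) + (2xy)i
which is exactly what the two components of the vec2 update compute: z.r plays the role of x, and z.g plays the role of y.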

Graphics Card Programming (for non-graphics)

This is all well and good for drawing pictures, but there are lots of other problems out there that don't involve pictures in any way.  Or do they?  All computation is just data manipulation, and we can write *anything* in a pixel shader--floats are floats, after all!

Deep down, the GPU supports a fairly small number of primitives.  So you could interpret the first GLSL program as corresponding to this non-graphics code:
	for (int i=0;i<n;i++) array[i]=myStruct(1.0,0.0,0.0,1.0);
And so on.  The only annoying part is that though a pixel program can read from any location in any texture it likes, it can only write to its own pixel.  And no, you can't bind the same texture for both reads and writes.  Hey, that's because RAR is not a dependency, but WAR/RAW/WAW are!
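
To make that concrete, here's a minimal sketch of elementwise "array addition" written as a pixel program.  The sampler names srcA and srcB are hypothetical; the CPU-side code would have to bind the two input arrays as textures under those names--exactly the kind of OpenGL setup the 481_gpgpu classes below are meant to simplify.
uniform sampler2D srcA; /* first input "array", bound as a texture (hypothetical name) */
uniform sampler2D srcB; /* second input "array" (hypothetical name) */
varying vec2 texcoords; /* this pixel's location--in effect, the array index */
void main(void) {
	/* dest[i] = A[i] + B[i], where i is this pixel's location */
	gl_FragColor = texture2D(srcA,texcoords) + texture2D(srcB,texcoords);
}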

I've written a little set of classes that can be used to simplify graphics card programming (which is normally packed with OpenGL calls, and kinda messy/ugly).  They're linked off the 481 page: 481_gpgpu--Perform interesting non-graphics computations on the graphics card.  Download: .zip w/exe (636K) or .tar.gz (571K)