Simpler WebGL Shaders with gpu1D

CS 493 Lecture, Dr. Lawlor

This week I finally figured out enough of how THREE.js does offscreen rendering and shading to wrap them in a little library, which I call gpu1D.js.  Unlike most JavaScript, it actually has comments!

This is a JavaScript library, but if it were C++ the declarations would be:
/* A gVector1D is a 1D array, implemented internally as a 2D texture.
You can draw it onscreen like a 2D texture, or pass it to a gFunction1D as a 1D array. */
class gVector1D {
	gVector1D(int newSize,float *newValues=null);

	int size; // number of elements in our 1D vector
	THREE.WebGLRenderTarget tex; // the texture that stores our data

	// vec1.swap(vec2) interchanges the guts of the two vectors efficiently.
	// This is commonly used for ping-pong type operations.
	void swap(gVector1D &other);

	// Show this texture onscreen, for debugging
	void debugTo(THREE.Scene &scene,float size=10.0,float xPos=-size,float yPos=+size,float zPos=+size);
};

/* A gFunction1D is some GPU code, which can be executed into a vector.
You set uniforms to provide the input values and textures. */
class gFunction1D {
	/* Pass in the GLSL code that forms the guts of this function.
	This needs at least one function named "run", with signature:
		vec4 run(float index) { ... }
	"index" is your integer index (from 0 to size-1) in the target vector.
	The return vec4 will be written into that vector. */
	gFunction1D(const char *fragShaderGLSLcode);

	/* Set a uniform variable to this value.
	This automatically adds a uniform of the right type to your GLSL code.
	It currently only works with values of type gVector1D, float, or vec2-3-4.
	Returns this gFunction1D, so you can string together set calls in a fluent style. */
	template <class T>
	gFunction1D &set(const char *uniformName,T value);

	/* This executes the pixel shader.
	Compilation is delayed until the first call to run,
	so we can collect info about your uniform variables.

	targetVector is a gVector1D where the output of your shader will be written.

	runLength is an integer pixel count giving the minimum number of pixels to render.
	It's optional: leave it off to run at the full targetVector.size. */
	gFunction1D &run(gVector1D &targetVector,int runLength=targetVector.size);
};

Generally speaking, you create gVector1D objects to store your application data, and gFunction1D objects to manipulate that data.  For example, here's how we might create and initialize an array:
   var P=new gVector1D(10000); // P has 10 thousand elements
   var f=new gFunction1D(" vec4 run(float index) { return vec4(index*scale); } ");
   f.set("scale",3.0); // set a uniform named "scale" (used above) to three
   f.run(P); // run f for every pixel in P, and put the resulting values into the P texture
Note that "set" and "run" return the gFunction1D object back to you, so you can string together calls in the "fluent programming" style:
  new gFunction1D(" vec4 run(float index) { return vec4(index*scale); } ")
    .set("scale",3.0) // set a uniform named "scale" (used above) to three
    .run(P); // run f for every pixel in P
This runs the same as above, but it's a single statement.
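To see what the shader above actually computes, here's a hypothetical plain-JavaScript CPU analogue (not part of gpu1D; the function name runOnCPU is made up for illustration). Each output element i gets the vec4 value (i*scale, i*scale, i*scale, i*scale), since constructing a vec4 from one float replicates it into all four components:

```javascript
// CPU sketch of " vec4 run(float index) { return vec4(index*scale); } "
// (hypothetical helper, not part of gpu1D)
function runOnCPU(size, scale) {
	var P = new Array(size);
	for (var index = 0; index < size; index++) {
		var v = index * scale;   // what "index*scale" computes per pixel
		P[index] = [v, v, v, v]; // vec4(x) replicates x into all 4 components
	}
	return P;
}

var P = runOnCPU(10000, 3.0);
// P[2] is [6,6,6,6], just like pixel 2 of the GPU vector.
```

On the GPU the loop body runs for every index in parallel; this serial version is only to show the arithmetic.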

I'm particularly proud of my scheme for accessing other textures.  Calling fn.set("name",vec) binds "name" to a GLSL function that looks up pixels from vec.  For example, you could make a new texture Q with each pixel equal to one plus the corresponding pixel in P like this:
  var Q=new gVector1D(P.size); // Q is the same size as P
  var fq=new gFunction1D(" vec4 run(float index) { return 1.0+yoP(index); } ");
  fq.set("yoP",P); // read from P (as "yoP" function)
  fq.run(Q); // write to Q
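Conceptually, binding a vector to a name works like handing the shader a lookup function. Here's a hypothetical CPU sketch (makeLookup is an invented helper, not gpu1D code) of what set("yoP",P) does:

```javascript
// CPU sketch of fn.set("name",vec): "name" becomes a lookup function over vec.
var P = [2, 4, 6, 8]; // pretend each element is one pixel's value

function makeLookup(vec) { // hypothetical stand-in for what set() generates in GLSL
	return function (index) { return vec[index]; };
}

var yoP = makeLookup(P); // now yoP(i) reads element i of P, like in the shader
var Q = new Array(P.length);
for (var i = 0; i < P.length; i++) {
	Q[i] = 1.0 + yoP(i); // the shader body: return 1.0+yoP(index);
}
// Q is [3,5,7,9]: each element one more than the corresponding P element.
```

In the real library the lookup function gpu1D generates does a texture fetch, but from the shader author's point of view it reads just like this array access.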


Creating textures on the GPU is expensive (hundreds of microseconds), so typically you do all your allocation in "Setup".  Compiling gFunction1D objects is also expensive (milliseconds), so you should create them once and call them repeatedly during the simulation.

You can read from as many vectors as you like (just call "set" repeatedly), but one big limitation of GLSL is that a shader can only write to a single vector at a time.  And you can't even control where you're writing in that vector--you always write to your "index", not some place of your choice.  The reason for these limitations is parallelism: reads can happen in any order, get cached, and still be fast and correct; but writes in the wrong order or at the wrong time give you the wrong answer.  Thus, GLSL doesn't give you control over writes.

For this reason, you won't always get the right answer if a shader reads from (via set) and writes to (via run) the same vector.  A common trick is to have two versions of the vector, "old" and "new", where you read from "old" and write to "new".  You can then swap the two for the next step, a trick often called "ping-ponging the buffers".
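The ping-pong pattern is easiest to see in a plain-JavaScript sketch (the buffers and the smoothing step below are illustrative inventions, not gpu1D code). Each step gathers from "old", writes only its own index in "new", then the buffers trade places, just like gVector1D.swap:

```javascript
// Ping-pong sketch: read from oldBuf, write only newBuf[i], then swap.
var oldBuf = [1, 2, 3, 4];
var newBuf = new Array(oldBuf.length);

function step() {
	for (var i = 0; i < oldBuf.length; i++) {
		// Gather-only, like GLSL: read any oldBuf elements, write only index i.
		var left  = oldBuf[Math.max(i - 1, 0)];
		var right = oldBuf[Math.min(i + 1, oldBuf.length - 1)];
		newBuf[i] = 0.5 * (left + right); // simple neighbor smoothing
	}
	var t = oldBuf; oldBuf = newBuf; newBuf = t; // swap, like vec1.swap(vec2)
}

step(); // oldBuf now holds the smoothed values for the next step to read
```

Because the step never writes the buffer it reads, the element order within a step doesn't matter, which is exactly why this pattern is safe to run in parallel on the GPU.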


02_28_simple1D: one vector, named "P", and one function, named "fnP".
02_28_crazy1D: Does a trippy fract-based animation.
02_28_crazy_mesh: shows how to extract gVector1D positions in a vertex shader to make a 3D mesh.
02_28_particles1_move: GPU particle simulation with vectors for "P" position and "V" velocity.  Particles are connected in a sheet, but they drop through the floor!
02_28_particles2_bounce: includes physics for bouncing the particles off the floor.

(my) Related Work

gpu1D is not the first GPU programming interface I've built.