OpenGL Handles, and how Stuff lives on the GPU

CS 481 Lecture, Dr. Lawlor

(This is a sort of meta-introduction, where you're supposed to replace "Thingy" with an actual specific object like "Texture", "Program", "Buffer", and so on as we'll discuss below.)

Here's the deal with thingys.  You allocate an OpenGL handle for a thingy that lives on the graphics card by calling a gen function:
    void glGenThingys(int n_thingys, GLuint *thingy_handle);
Here's a typical call:
    static GLuint myThingy=0;
    if (myThingy==0) { /* first-time setup */
        glGenThingys(1,&myThingy); /* allocate one new handle */
        ... set up your new Thingy ...
    }

Once you've made a thingy, to use it you need to bind it as the current thingy.  All subsequent thingy-related calls will then apply to your thingy.  The default, fixed-function OpenGL operation uses handle zero, so it's good practice to bind thingy zero back before you exit.  The bind call is usually something like:
    glBindThingy(myThingy); // start using myThingy
    ... render using your Thingy ...
    glBindThingy(0); // back to fixed-function OpenGL

Once bound, you need to set up your new thingy.  OpenGL provides about fifty functions for setting up the various kinds of thingys; the exact calls depend on the thingy, as the sections below show.
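Hypothetically, the setup pattern usually looks something like this (glThingyParameteri and its arguments are made-up stand-ins for the real per-thingy calls):

```
    glBindThingy(myThingy); /* state calls apply to the bound thingy */
    glThingyParameteri(GL_THINGY_SOME_STATE, someValue); /* tweak one piece of thingy state */
```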

Eventually, you may need to delete your thingy.  There's a corresponding glDeleteThingys call to free up that space on the graphics card.  However, I can't recommend creating and deleting thingys every frame--creating or deleting any of these objects is usually fairly expensive (like milliseconds), so it's faster to create things once and re-use them many times.
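Teardown is the mirror image of creation; a sketch in the same made-up Thingy terms:

```
    glBindThingy(0); /* don't delete a thingy while it's still bound */
    glDeleteThingys(1,&myThingy); /* free the GPU-side storage */
    myThingy=0; /* so our first-time-init test will re-create it if needed */
```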

Texture == Thingy

A Texture is a 1D, 2D, 3D, or cubemap array of color pixels.  It's implemented as a solid rectangular block of pixels sitting in GPU memory.   Texture state includes how to handle out-of-bounds pixels (GL_TEXTURE_WRAP_axis), how to shrink and enlarge the texture (GL_TEXTURE_MIN_FILTER/MAG_FILTER), and so on.
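For example, wrap and filter state is set with glTexParameteri; this is an illustrative fragment that assumes a 2D texture is currently bound:

```
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); /* tile out-of-bounds reads along s */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); /* smooth shrinking */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* smooth enlarging */
```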

Textures are kinda weird in that you can have several different textures bound at once, to different texture "units", which are numbered 0 through some small integer.  You have to "activate" a texture unit before binding a texture to it.  Typical GPUs nowadays support 4-8 texture units.
    static GLuint monkeyTex=0;
    if (monkeyTex==0) { /* first-time initialization */
        glGenTextures(1,&monkeyTex); /* make a texture handle */
        glActiveTexture(GL_TEXTURE3); /* we'll bind to texture unit 3 */
        glBindTexture(GL_TEXTURE_2D,monkeyTex); /* subsequent 2D texture calls now refer to monkeyTex */
        readTextureFromFile("monkey.bmp"); /* pixels go into monkeyTex */
        /* Texture state applies to the currently bound texture */
        glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
    }

    ... bind up a GLSL program p ...
    /* The monkey texture is bound to texture unit 3, so point the sampler there */
    glUniform1iARB(glGetUniformLocationARB(p,"myMonkey"),3);

    ... In GLSL:
    uniform sampler2D myMonkey;
    ... vec4 t = texture2D(myMonkey,vec2(monkeyCoords));

Vertex Buffer Object == Thingy

Vertex Buffer Objects (VBOs) are a way to store geometry information on the graphics card, just like Textures let you store raster information on the graphics card.  This is part of the ARB_vertex_buffer_object extension.

A VBO describes a series of glVertex (and optionally glColor, glNormal, and glTexCoord) calls.  The parameters for the calls start in a CPU array, and get copied into graphics card memory.

You create a Vertex Buffer Object with (guess what!) glGenBuffersARB.  You then have to glBindBufferARB the buffer, and then you can copy data in with glBufferDataARB.  You describe what your data contains using calls to glVertexPointer (and optionally glColorPointer, glNormalPointer, and glTexCoordPointer), which each take the same four parameters: the number of values per vertex, the type of each value, the stride in bytes between consecutive vertices, and the byte offset of the first value within the buffer.
Here's how you'd create a vertex buffer object to store vertex locations and colors, then render it:
    static GLuint vb=0;
    if (vb==0) { /* set up the vertex buffer object */
        glGenBuffersARB(1,&vb); /* make a buffer */
        glBindBufferARB(GL_ARRAY_BUFFER_ARB,vb);
        /* Copy our vtx array (on the CPU) into our new GPU buffer */
        glBufferDataARB(GL_ARRAY_BUFFER_ARB,n*sizeof(vtx[0]),vtx,GL_STATIC_DRAW_ARB); /* n == number of vertices in vtx */
    }
    glBindBufferARB(GL_ARRAY_BUFFER_ARB,vb); /* use our buffer */
    /* Tell OpenGL how our array is laid out */
    glVertexPointer(3,GL_FLOAT,sizeof(vtx[0]),(void *)0); /* myVertex.position is first thing in struct */
    glColorPointer (3,GL_FLOAT,sizeof(vtx[0]),(void *)12); /* myVertex.color starts 12 bytes after struct start */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    /* Draw all our (GPU) points.
       This is way faster than looping over vtx and calling glVertex many times! */
    glDrawArrays(GL_POINTS,0,n);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB,0); /* back to plain OpenGL */
You can also create an "element buffer" to store vertex indices.  For example, to make a triangle from vertices zero, seven, and thirteen, you'd put {0,7,13} into an element buffer.  Element buffers allow many triangles to point to the same vertex, which saves that vertex many trips through your vertex shader.  You upload the index data with glBufferDataARB (just like vertex buffer objects), and then use glDrawElements to look up your indices into your (already bound) vertex array:
    static GLuint eb=0;
    if (eb==0) { /* set up the element buffer object */
        glGenBuffersARB(1,&eb); /* make a buffer */
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB,eb);
        /* Copy our idx array (on the CPU) into our new GPU buffer */
        glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB,ni*sizeof(idx[0]),idx,GL_STATIC_DRAW_ARB); /* ni == number of indices in idx */
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB,0); /* back to plain OpenGL */
    }
    glBindBufferARB(GL_ARRAY_BUFFER_ARB,vb); /* vertex data */
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB,eb); /* index data */
    ... set up glVertexPointer and friends as above ...
    glDrawElements(GL_TRIANGLES,ni,GL_UNSIGNED_INT,(void *)0); /* draw triangles from the bound indices */

Framebuffer Object == Thingy

A "Framebuffer Object" (FBO) is a place you can render stuff.  This consists of a color texture, an optional depth texture (for the depth buffer), and possibly other weirder things like stencils (one byte per pixel, used for certain shadow algorithms) or multiple render targets (one fragment shader, many output colors!).  The gory details are in EXT_framebuffer_object.

Framebuffer Objects are a handy way to do offscreen rendering, which lets you render directly into a texture, at whatever resolution you like, without disturbing the screen.
As usual, you make a new Framebuffer Object with glGenFramebuffersEXT, bind in a Framebuffer Object with glBindFramebufferEXT, and then glFramebufferTexture2DEXT can attach texture objects (as raw GLuint handles!) to the current framebuffer.  You can then reset the rendering size with glViewport, and start rendering away!  Note that you can also render a few things, switch the destination buffer with glFramebufferTexture2DEXT, and then render more stuff; this "buffer swap" is actually a bit faster than binding in a new framebuffer object.
    /* Framebuffer output texture */
    static GLuint frameTex=0;
    int w=256,h=256; /* size of our texture */
    if (frameTex==0) {
        glGenTextures(1,&frameTex); /* make a texture handle */
        glBindTexture(GL_TEXTURE_2D,frameTex);
        glTexImage2D(GL_TEXTURE_2D,0, /* 2D texture, mipmap level 0 */
            GL_RGBA8, w,h, /* data format and size (pixels) */
            0,GL_LUMINANCE,GL_FLOAT,0); /*<- no data needed, just size */
        glGenerateMipmapEXT(GL_TEXTURE_2D); /*<- most cards *require* all mipmap levels to be present! */
    }

    /* Framebuffer object */
    static GLuint fbo=0;
    if (fbo==0) { /* set up framebuffer object */
        glGenFramebuffersEXT(1,&fbo); /* make a framebuffer handle */
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, /* attach frameTex as our color buffer */
            GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D,frameTex,0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0); /* back to normal */
    }

    /* Render into our framebuffer object */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,fbo);
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)!=GL_FRAMEBUFFER_COMPLETE_EXT)
        printf("Framebuffer object is unhappy! Oh no!\n");
    glViewport(0,0,w,h); /* render at the texture's size */
    glDisable(GL_DEPTH_TEST); /* or else attach a depth texture to GL_DEPTH_ATTACHMENT_EXT! */

... rendered stuff will go into the frameTex texture now! ...

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0); /* back to normal rendering */

    glBindTexture(GL_TEXTURE_2D,frameTex); /* mipmap building applies to the bound texture */
    glGenerateMipmapEXT(GL_TEXTURE_2D); /* build mipmaps of rendered data */

GLSL Program == Thingy (kinda)

A compiled GLSL program is a rather unusual OpenGL object in a few ways: you make it with glCreateProgramObjectARB rather than a Gen call, its handle is a GLhandleARB rather than a GLuint, and you bind it with glUseProgramObjectARB rather than a Bind call.
    static GLhandleARB p=0;
    if (p==0) {
        p=glCreateProgramObjectARB(); /* make the program object */
        GLhandleARB vo=glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
        GLhandleARB fo=glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
        const char *vsrc="//This is my vertex shader ... GLSL here ...";
        const char *fsrc="//This is my fragment shader ... GLSL here ...";
        glShaderSourceARB(vo,1,&vsrc,NULL);
        glShaderSourceARB(fo,1,&fsrc,NULL);
        glCompileShaderARB(vo); glCompileShaderARB(fo); /* FIXME: error check! */
        glAttachObjectARB(p,vo); glAttachObjectARB(p,fo);
        glLinkProgramARB(p); /* FIXME: error check! */
        glDeleteObjectARB(vo); glDeleteObjectARB(fo); /* don't leak memory! */
    }
    glUseProgramObjectARB(p); /* start using our GLSL program */
    /* render stuff with our GLSL code here! */
    glUseProgramObjectARB(0); /* Back to ordinary OpenGL */