CS 481 Lecture Notes



Week of 2005/04/18-22: Global Illumination Effects and Algorithms

See my directory of global illumination effects in real life.

Pure OpenGL handles only light that travels directly from the light source to the geometry, where it reflects as either pure diffuse (all directions equally) or Phong specular (normal dot halfway vector, raised to a power).  Nothing else is supported.

With pixel shaders, it's easy to extend OpenGL rendering to support any kind of local reflectivity at a surface.  One example of this is our (diffuse along X; specular along Y) lighting texture.  More generally, you want to support arbitrary "BRDF"s (Bidirectional Reflectance Distribution Functions) at a surface.  If you only consider point light sources, this is pretty easy to do.
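
For a concrete (if hypothetical) illustration, here is plain C++ that computes the two lookup coordinates for such a lighting texture--in a real renderer this happens per-pixel inside the shader.  The vec3 type and the function name are made up for this sketch:
    #include <algorithm>
    #include <cmath>

    struct vec3 { float x, y, z; };
    static float dot(const vec3 &a, const vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec3 normalize(const vec3 &a) {
        float len = std::sqrt(dot(a, a));
        return { a.x/len, a.y/len, a.z/len };
    }

    /* N = surface normal, L = direction to the light, V = direction to the eye
       (all unit length).  The results (s,t) index the 2D lighting texture:
       s is the diffuse axis (N dot L), t is the specular axis (N dot H). */
    void lightingTexCoords(const vec3 &N, const vec3 &L, const vec3 &V,
                           float &s, float &t)
    {
        vec3 H = normalize({ L.x+V.x, L.y+V.y, L.z+V.z }); /* halfway vector */
        s = std::max(0.0f, dot(N, L));  /* diffuse along X */
        t = std::max(0.0f, dot(N, H));  /* specular along Y */
    }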

However, light doesn't just come from a small set of point light sources--it reflects off everything in the scene.  In particular, it can reflect off diffuse objects (diffuse indirect illumination), shiny objects ("caustics"), or even propagate through solid objects (subsurface scattering) or the space between objects (fog, volume rendering).

"Cliff's Notes" to global illumination algorithms:
    1. Shoot a few hundred thousand photons from the light source.  Follow the photons as they reflect off shiny or diffuse surfaces, but also keep a record of the photon at that surface.  So far, it's just photon tracing.
    2. Shoot eye rays from the eye, just like ordinary ray tracing or path tracing, but once they hit geometry, estimate the lighting by the surface's local photon density. 
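
As a rough illustration of step 2, here is a brute-force C++ sketch of the photon-density estimate.  Everything in it--the Photon struct, the flat array, the fixed search radius--is a made-up simplification; a real photon map stores photons in a kd-tree and gathers the k nearest:
    #include <vector>

    struct Photon {
        float pos[3];     /* where the photon landed */
        float power[3];   /* its power, in W, per color channel */
    };

    /* Estimate the light arriving per unit area around 'hit' by adding up the
       power of every photon within 'radius' and dividing by the disk area. */
    void estimateIncomingLight(const std::vector<Photon> &map, const float hit[3],
                               float radius, float outRGB[3])
    {
        outRGB[0] = outRGB[1] = outRGB[2] = 0.0f;
        for (const Photon &p : map) {
            float dx = p.pos[0]-hit[0], dy = p.pos[1]-hit[1], dz = p.pos[2]-hit[2];
            if (dx*dx + dy*dy + dz*dz <= radius*radius)
                for (int c = 0; c < 3; c++) outRGB[c] += p.power[c];
        }
        float area = 3.14159265f * radius * radius;
        for (int c = 0; c < 3; c++) outRGB[c] /= area;   /* W/m^2 near the hit point */
    }
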
Examples of global illumination in computer graphics:
 I've also written a little global illumination system based on a conetracing-like idea, but for polygons.  This is about the limit of scene complexity the system can handle at reasonable speed!
[Figures: "Sun" light (a small orange light source), "Sky" light (a big blue source), and sun and sky together]


Week of 2005/04/11-15: Radiometry Terminology

The terms we'll be using (almost continuously) are:
See the Light Measurement Handbook for many more units and nice figures.

To do global illumination, we start with the radiance of each light source (in W/m^2/sr).  For a given receiver patch, we compute the cosine-weighted solid angle of that light source, in steradians.  The incoming radiance times the solid angle gives the incoming irradiance (W/m^2).  Most surfaces reflect some of the light that falls on them, so some fraction of the incoming irradiance leaves.  If the outgoing light is divided equally among all possible directions (i.e., the surface is a diffuse or Lambertian reflector), for a flat surface it's divided over a solid angle of pi (cosine-weighted) steradians.  This means the outgoing radiance for a flat diffuse surface is just M/pi (W/m^2/sr), if the outgoing irradiance (or radiant exitance) is M (W/m^2).

To do global illumination for diffuse surfaces, then, for each "patch" of geometry we:
  1. Compute how much light comes in.  For each light source:
    1. Compute the cosine-weighted solid angle for that light source.
    2. Compute the radiance of the light source in the patch's direction.
    3. Multiply the two: incoming irradiance = radiance times weighted solid angle.
  2. Compute how much light goes out.  For a diffuse patch, outgoing radiance is just the total incoming irradiance (from all light sources) divided by pi. 
For a general surface, the outgoing radiance in a particular direction is the integral, over all incoming directions, of the product of the incoming radiance and the surface's BRDF (Bidirectional Reflectance Distribution Function).  For diffuse surfaces, where the surface's radiance-to-radiance BRDF is just the cosine-weighted solid angle over pi, this is computed exactly as above.
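
For concreteness, here is a minimal C++ sketch of that bookkeeping for a single flat diffuse patch.  The incoming radiance and cosine-weighted solid angle of each light source are assumed to be computed elsewhere, and all names are made up:
    #include <vector>

    const float PI = 3.14159265f;

    struct LightSample {
        float radiance;            /* incoming radiance from this source, W/m^2/sr */
        float weightedSolidAngle;  /* cosine-weighted solid angle it covers, steradians */
    };

    /* Returns the outgoing radiance (W/m^2/sr) of a flat, perfectly diffuse patch
       with the given reflectance (the fraction of incoming light it reflects). */
    float diffusePatchRadiance(const std::vector<LightSample> &lights, float reflectance)
    {
        float irradiance = 0.0f;                 /* step 1: add up incoming light */
        for (const LightSample &l : lights)
            irradiance += l.radiance * l.weightedSolidAngle;  /* W/m^2 from this source */
        float exitance = reflectance * irradiance;   /* outgoing irradiance M, W/m^2 */
        return exitance / PI;                    /* step 2: spread over pi steradians */
    }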



Week of 2005/03/28-2005/04/01: Raytracing on the Graphics Card

The basic idea in all raytracing is to write down an analytic expression for a point on a ray (just a line starting from a point), P(t) = S + t*D; plug this into the analytic expression for a surface, like a sphere's equation distance(P,C) = r; and then solve for t, the location along the ray that hits the object.  Here, S is the starting location of the line, D is the direction of the line, and C is the center of the sphere, all of which are vectors.  We can rearrange distance(P,C) = r as distanceSquared(P,C) = r^2, or (P-C)^2 = (P-C) dot (P-C) = r^2, or (t*D + (S-C))^2 = t^2 (D dot D) + 2t (D dot (S-C)) + (S-C) dot (S-C) = r^2, which is just a quadratic equation in t.  We can thus easily solve for t with the quadratic formula, which tells us the location along the line that hits the object.  If there is no real solution for t, the ray misses the object.
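
For concreteness, here is a small C++ sketch of that solve.  The vec3 helpers and names are made up; a real raytracer would also compute the hit normal:
    #include <cmath>

    struct vec3 { double x, y, z; };
    static double dot(const vec3 &a, const vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec3 sub(const vec3 &a, const vec3 &b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

    /* Returns true and fills in t (the nearest positive hit) if the ray S+t*D hits
       the sphere with center C and radius r; returns false if the ray misses. */
    bool hitSphere(const vec3 &S, const vec3 &D, const vec3 &C, double r, double &t)
    {
        vec3 SC = sub(S, C);                 /* S - C */
        double a = dot(D, D);                /* t^2 coefficient */
        double b = 2.0 * dot(D, SC);         /* t coefficient */
        double c = dot(SC, SC) - r*r;        /* constant term */
        double disc = b*b - 4.0*a*c;
        if (disc < 0.0) return false;        /* no real solution: ray misses */
        double sq = std::sqrt(disc);
        double t0 = (-b - sq) / (2.0*a), t1 = (-b + sq) / (2.0*a);
        t = (t0 > 0.0) ? t0 : t1;            /* prefer the nearer positive hit */
        return t > 0.0;
    }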

To raytrace spheres, then, we just start lines at the eye (so S is the eye location), choose directions D pointing into each pixel, solve for the ray parameter value t using the quadratic equation, find the actual intersection (or "hit" point) P=S+t*D, and shade and render using this hit point.  All of this can be done from inside a pixel shader, which has the nice effect of allowing us to draw spheres using just one quad--the depth and color variation are all computed via raytracing, and are analytically accurate.

In software, we normally find the color down one ray by hit-testing all possible geometry, and computing the color based on the closest (smallest hitting t) geometry.  This makes it easy to find the color in any direction, which lets us easily compute true reflections (just shoot a ray in each direction), or compute true shadows (if a ray makes it from you to the light source without hitting anything, you're lit).  It's not clear how to implement this "geometry search" raytracing on the graphics card, although people have done so using one long texture that lists all (relevant) geometry.

But because in general every pixel could hit every piece of geometry, with classic raytracing we've got O(pg) work to do to render p pixels showing g pieces of geometry.  OpenGL feed-forward style rendering, where you first think about a piece of geometry, then look at the pixels it covers, is normally O(p+g): there's a certain amount of work to do per pixel, and a certain amount per piece of geometry (vertex setup).  For a scene with p=1 million pixels and g=100,000 polygons, the difference between O(pg) and O(p+g) is substantial--with equal constants, feedforward rendering is 100,000 times faster.  Of course, smart raytracers try to optimize away most hit tests using a spatial search structure (bounding volume trees, grids, etc.), so the advantage of being able to find light in any direction might only slow down rendering by a factor of ten or so.


Weeks of 2005/03/21-27: Geometry and Terrain Algorithms
We talked about, and looked at, various file formats for 3D objects.  Most formats boil down to lists of two simple things:
Internally, programs often need complicated processing on their 3D objects: for example, a program using shadow volumes needs to extract shadow silhouettes; while programs with lots of geometry need to be able to decimate (simplify) and enrich (complexify) their models.  To do this, programs often build a "mesh data structure"--a set of polygons, edges, and vertices with enough interlinking to allow a programmer to easily find the neighboring mesh pieces around a polygon, edge, or vertex.

(Note: there is no such word as "Vertice" or "Matrice".  The singular forms of "Vertices" and "Matrices" are "Vertex" and "Matrix".  This has been a public service announcement.)

Terrain images (elevation maps) are particularly well-suited to an expand-on-demand data structure.  I outlined Lindstrom's terrain method, which is a way of expanding a terrain represented as a quadtree so that: the screen error is bounded, and there are no cracks in the mesh.  Google for "Lindstrom Terrain" or check my thesis for details.


Weeks of 2005/02/16-27: Shadow algorithms

Shadows are an important aspect of our perception of a scene.  Without shadows, objects seem to "float" above the ground, because we can't relate the position of the objects and the ground.

[Figure: a scene rendered with no shadows]

Unrealistic Shadows

Lots of totally fake shadow algorithms exist.  With these algorithms, objects never cast shadows on themselves or other objects, just a pre-defined "ground" object.
Decal shadows: draw something dark (usually an alpha-blended textured quad) on the ground plane beneath each object.
  • Advantages:
    • Very easy and fast.
    • Can be made to work on non-planar surfaces (sort of).
    • Trivially allows soft shadows, by just using a blurry decal texture. This soft shadow can be more attractive than hard shadow maps/volumes.
  • Disadvantages:
    • Shape of shadow is not related to the shape of the object.  At all.  This is clearly silly, since a teapot should cast a teapot-shaped shadow, not a fixed splooch.
    • It can be tough to avoid z-buffer fighting when drawing the decal.  One method is to draw the ground, turn off Z buffer tests and writes, draw the decals, then draw everything else (see the sketch after this list).  Another method is to draw the decals normally, but shift their geometry slightly off the ground.
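
Here is a minimal OpenGL sketch of that first draw-order method; the drawGround/drawDecals/drawObjects helpers are hypothetical stand-ins for your own scene code:
    #include <GL/gl.h>

    void drawGround()  { /* draw the ground plane (writes depth normally) */ }
    void drawDecals()  { /* draw dark alpha-blended quads under each object */ }
    void drawObjects() { /* draw everything else in the scene */ }

    void renderWithDecalShadows(void)
    {
        drawGround();                /* 1. ground fills the depth buffer */

        glDisable(GL_DEPTH_TEST);    /* 2. decals: no depth test or write, */
        glDepthMask(GL_FALSE);       /*    so they can't z-fight with the ground */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawDecals();
        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);

        drawObjects();               /* 3. objects still draw correctly on top */
    }
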
[Figure: decal shadows]

Projected shadows: squish actual world geometry down onto ground plane, and draw it dark.
  • Advantages:
    • Very easy: just need a projection matrix to squish geometry to plane.
  • Disadvantages:
    • Where two objects intersect, you'll draw dark geometry twice, which gives ugly darker regions where two shadows overlap.
    • Difficult to make it work on non-planar surfaces.
    • Doubles amount of geometry.
    • Same z-buffer problems as with decal shadows.
[Figure: projected shadows]

Projected shadows with stencil: like projected shadows, but avoids drawing the same part of the ground twice by first rendering the geometry to the stencil buffer, then drawing where the stencil is set.

[Figures: the stencil buffer contents, and the resulting projected-with-stencil shadows]

Somewhat realistic

These algorithms can cast shadows from curved objects onto themselves or each other.  The big remaining limitation is that the shadows have hard edges--they're shadows cast by a point light source.
Shadow maps: render the world from the point of view of the light source, storing the depth to the topmost (and only lit) object.  When rendering from the camera, compute each point's depth from the light source and compare it against the stored shadow-map depth: if the point is at the stored depth (it's the topmost surface), it's lit; if it's beneath another surface, it's in the dark.  (A minimal sketch of this test follows the list below.)
  • Advantages:
    • Works with curved surfaces and shadow receivers.
    • Easy enough: render world from light source is just another render.
  • Disadvantages:
    • Depth is discretized--must choose resolution and coverage (almost never good enough!)
    • Draw everything twice (once from light source, again from camera).
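
For concreteness, here is a CPU-flavored sketch of the shadow-map test itself, assuming the point has already been projected into the light's view (a pixel position plus its depth from the light).  The ShadowMap struct and the bias value are made up:
    #include <vector>

    struct ShadowMap {
        int width, height;
        std::vector<float> depth;   /* depth from the light to the topmost (lit) surface */
    };

    /* Returns true if a point at shadow-map pixel (x,y), at distance depthFromLight
       from the light source, is lit; false if some other surface is on top of it. */
    bool isLit(const ShadowMap &map, int x, int y, float depthFromLight)
    {
        if (x < 0 || y < 0 || x >= map.width || y >= map.height)
            return true;                             /* outside the map: assume lit */
        float topmost = map.depth[y*map.width + x];  /* closest surface to the light */
        const float bias = 1.0e-3f;                  /* fudge factor against self-shadowing */
        return depthFromLight <= topmost + bias;     /* on top of the shadow map: lit */
    }
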
[Figures: the shadow map's depth image, its color image, and the resulting shadow-mapped scene]

Shadow volumes: extrude ``shadow silhouettes'' from the boundary of each object (as seen from the light source), and render those silhouettes onscreen.  For conventional shadow volumes, increment the stencil buffer on the *front* faces, and decrement on the *back* faces.  After doing this, the stencil buffer contains the number of shadows entered but never exited--this is hence a count of the number of shadows each piece of geometry is in.  If you're inside at least one shadow (i.e., your stencil buffer value is nonzero), you are hence in shadow.
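
Here is a hedged OpenGL sketch of those stencil passes.  The three draw helpers are hypothetical stand-ins, depth testing is assumed to already be enabled, and this z-pass version assumes the camera is not inside a shadow volume (see the Carmack variant below):
    #include <GL/gl.h>

    void drawSceneAmbient()  { /* draw the scene with only ambient light */ }
    void drawShadowVolumes() { /* draw the extruded silhouette quads */ }
    void drawSceneLit()      { /* draw the scene with the light source on */ }

    void renderStencilShadows(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
        drawSceneAmbient();                        /* pass 1: depth + ambient color */

        glEnable(GL_STENCIL_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* don't touch color */
        glDepthMask(GL_FALSE);                               /* or depth */
        glEnable(GL_CULL_FACE);
        glStencilFunc(GL_ALWAYS, 0, ~0u);

        glCullFace(GL_BACK);                       /* front faces: entering a shadow */
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        drawShadowVolumes();

        glCullFace(GL_FRONT);                      /* back faces: leaving a shadow */
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        drawShadowVolumes();

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);

        glStencilFunc(GL_EQUAL, 0, ~0u);           /* stencil==0: outside every shadow */
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDepthFunc(GL_LEQUAL);                    /* redraw over the ambient pass */
        drawSceneLit();                            /* pass 2: light where unshadowed */

        glDepthFunc(GL_LESS);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_CULL_FACE);
    }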


[Figures: the shadow volume silhouettes, the stencil buffer totals, and the resulting shadow-volume shadows]

Carmack Shadow Volumes: just like normal shadow volumes, but when drawing the silhouettes, invert the sense of the depth test, and increment *back* faces and decrement *front* faces.  This has the advantage that it even works when the camera starts off inside a shadow volume.

Actually realistic

Real distributed light sources cast soft shadows.  There are many algorithms out there to compute soft shadows.  The oldest method is to cast many rays to the light source--the fraction of rays that reach the light source gives the approximate fraction of the light source that is visible, and hence the fraction of light that reaches the point.  For simple scenes (e.g., one spherical light source and one occluding sphere or polygon), it's actually possible to analytically compute this fraction of illumination; for complicated scenes, we really don't have much hope of getting shadows exactly right--because occluding geometry can act like a pinhole camera, and hence expand an arbitrarily complicated shadow pattern from any point in space.

Modern methods to approximate soft shadows include a shadow volume-like "silhouette wedge" method by Assarsson, an extended shadow map generation technique by Chan and Durand called Smoothies, and my shadow map generation technique called penumbra limit maps.  All of these have advantages and disadvantages, which are still being sorted out in the literature and in research labs.


2005/2/7-2/11 (week): Fragment Programs (or Pixel Shaders)

Graphics cards have recently gained the ability to run an arbitrary program as they render each pixel.  Variously called fragment programs (OpenGL) or pixel shaders (DirectX), these are a new and powerful tool that can be used for graphics and beyond.

The relevant OpenGL extensions (and links to the standards documents) are GL_ARB_vertex_program and GL_ARB_fragment_program.  To read an OpenGL extension, read the "Overview", skip over the "Issues" section, skim "New Procedures" and "New Tokens", stare in horror at the legalese in the "Additions" section, then find some sample code.  Once you're comfortable with the sample code, search for the appropriate snippets in the standards document. I've posted my little ARB_vertex/fragment_program example "frag" to the Examples page. This includes a number of sample vertex and fragment shaders in the "program/" subdirectory.
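
Separately from the "frag" example, here is a minimal sketch of what loading an ARB fragment program looks like from C++.  It assumes the extension entry points are already available (e.g., via GLEW and glewInit) and a GL context exists; the trivial program just passes through the interpolated vertex color:
    #include <GL/glew.h>
    #include <cstring>
    #include <cstdio>

    static const char *fragSrc =
        "!!ARBfp1.0\n"
        "MOV result.color, fragment.color;\n"
        "END\n";

    GLuint loadFragmentProgram(void)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)std::strlen(fragSrc), fragSrc);
        if (glGetError() != GL_NO_ERROR)    /* parse errors show up here */
            std::printf("fragment program error: %s\n",
                        (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        glEnable(GL_FRAGMENT_PROGRAM_ARB);  /* use it for subsequent drawing */
        return prog;
    }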

The important things to remember about per-pixel programming are:
Fragment shaders are only supported on the very latest graphics cards (and with the very latest drivers), and hence can be somewhat buggy.  For example, Mesa 6.2 (or CVS as of 2005/2/10) incorrectly parses the string "0.1" as if it had value 1.0.  I've fixed this (and a related parsing bug) in my own version of Mesa, which I've installed on the Chapman lab machines in ~ffosl/mesa.  To change your LD_LIBRARY_PATH to point to this version, just run
    [ffosl@ws04 demos]$ source ~ffosl/mesa.sh
and then run your OpenGL program.  This is fairly slow, but is a good way to try vertex and fragment shaders out.


2005/1/31-2/4 (week): Texturing

Textures during the 1990's were a simple way to put an interesting pattern of color on a surface, by appropriately stretching a 2D color image across each polygon.  Today, textures are used to store many different sorts of information, from basic color to complicated lookup tables for lighting calculations. 

Textures are applied to a polygon by specifying "texture coordinates", locations in the source texture, at each vertex of the polygon.  In OpenGL, texture coordinates are measured as a fraction of an image, not as a number of pixels, to make it easy to change the texture resolution without changing texture coordinates.

The basic sorts of textures supported by today's hardware are:
Textures are stored in a "Texture Object" (a GLuint), which allows you to have several textures loaded at once.  Of the textures, exactly one (per multitexture unit--see below) is always loaded as the "current" texture object.  Texture objects are created with glGenTextures, set as the current texture with glBindTexture, and deleted with glDeleteTextures.  Once you create and bind a texture object, you upload the pixel data using glTexImage[1,2,3]D.  A typical call is
	glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,width,height,0,GL_RGBA,GL_UNSIGNED_BYTE,rgbaData);
If you turn on mipmapping, you *must* upload all the mipmap levels, or else your texture will show up as white.  In addition to creating and binding the texture, unless you're using programmable shaders, you must call glEnable(GL_TEXTURE_2D) to have the texture show up on your geometry.
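
Putting those steps together, a typical (non-mipmapped, so only level 0 is needed) texture setup looks something like this; width, height, and rgbaData are assumed to come from your own image loader:
    #include <GL/gl.h>

    GLuint makeTexture(int width, int height, const unsigned char *rgbaData)
    {
        GLuint tex;
        glGenTextures(1, &tex);              /* create a texture object */
        glBindTexture(GL_TEXTURE_2D, tex);   /* make it the current 2D texture */

        /* Non-mipmapped filtering, so we only have to upload level 0. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgbaData);  /* upload the pixels */

        glEnable(GL_TEXTURE_2D);             /* needed for fixed-function drawing */
        return tex;
    }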

It's also possible to have several textures *displayed* at once; this is called "multitexture".  You switch to the i'th multitexture unit (e.g., 0, 1, 2...) by calling
	glActiveTextureARB(GL_TEXTUREi_ARB);
Subsequent calls to any texture routine, in particular "glBindTexture", then affect the i'th texture unit.  This is most useful with fragment shaders, but there's a (complicated, inflexible) interface called GL_ARB_multitexture that can control how multiple textures get rendered. (We didn't and won't cover this in class.)

There are a bunch of ways to draw a texture, with more and more quality but more and more cost.  You set up the rendering method for magnification (screen image bigger than texture) or minification (texture bigger than screen projection) using calls like glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR).
When texture coordinates go outside the texture (outside the range 0 to 1), several things can happen, depending on the texture "wrap mode", which is controlled on each of the S, T, R, and Q texture coordinate axes via calls like glTexParameterf(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_REPEAT).
Both the filtering and wrap modes are associated with the texture *object*, not with global OpenGL state.  This means you can set up, e.g., a 1D texture object using GL_NEAREST and GL_CLAMP, a 3D texture using GL_LINEAR and GL_REPEAT, and a 2D texture using GL_LINEAR_MIPMAP_LINEAR and GL_CLAMP_TO_EDGE, and they'll each keep their settings.  All these settings apply to texture access from a fragment shader as well!


2005/1/28 (Friday): OpenGL Camera and Lighting

We covered the OpenGL camera, transform, and lighting model, including their many flaws.  Luckily, with vertex and fragment shaders, all this stuff is now up to you, so you can build it any way you want.

Random aside: some humans have 4-color vision (Readable Guardian Article on tetrachromatism).  Others have only 2-color vision (color blind).  In the dark, we're all 1-color (monochrome).

2005/1/26 (Wednesday): OpenGL/GLUT Philosophy and Basics

There are a zillion OpenGL tutorials online.  The basic stuff we covered (double buffering, the depth buffer, GLUT interfacing, the OpenGL pipeline) is pretty well trodden. 

The most annoying aspect of writing OpenGL is that if you set a bit of state (for example, GL_DEPTH_TEST), it stays set until you explicitly turn it off.  Worse, there's so much state (lots of it rarely used) that it's tough to figure out exactly when you should bother to turn stuff on and off, and when you should just set it once and forget it.  There's no efficient or convenient way to just save and restore everything (glPushAttrib is neither efficient nor convenient).

One interesting fact I found after class is how to read back matrices from OpenGL.  It can be done using
	double mtx[16]; /* OpenGL modelview matrix (column-major order!) */
	glGetDoublev(GL_MODELVIEW_MATRIX,mtx);

2005/1/24 (Monday): Vectors, dot, and cross products

We covered basic 2D and 3D vectors, including dot products and cross products. Vectors are basically all we ever deal with in computer graphics, so it's important to understand these pretty well.

In particular, you should know at least:
You should pick one of the thousands of "Vector3d" classes on the web to represent vectors in your code, or you'll go insane writing "glSomething3f(foo_x,foo_y,foo_z)" (instead, you'll be able to say "glSomething3fv(foo)").
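
For reference, the sort of tiny class meant here looks something like this (a minimal sketch, not any particular library's Vector3d):
    #include <cmath>

    struct Vector3d {
        double x, y, z;
        Vector3d(double x_=0, double y_=0, double z_=0) : x(x_), y(y_), z(z_) {}
        Vector3d operator+(const Vector3d &b) const { return Vector3d(x+b.x, y+b.y, z+b.z); }
        Vector3d operator-(const Vector3d &b) const { return Vector3d(x-b.x, y-b.y, z-b.z); }
        Vector3d operator*(double s) const { return Vector3d(x*s, y*s, z*s); }
        double dot(const Vector3d &b) const { return x*b.x + y*b.y + z*b.z; }
        Vector3d cross(const Vector3d &b) const {    /* right-handed cross product */
            return Vector3d(y*b.z - z*b.y, z*b.x - x*b.z, x*b.y - y*b.x);
        }
        double length() const { return std::sqrt(dot(*this)); }
    };
    /* The fields are contiguous in practice, so glVertex3dv(&v.x) also works. */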

2005/1/21 (Friday): Introduction to Computer Graphics

Computer graphics work in general boils down to a small number of categories.

O. Lawlor, ffosl@uaf.edu