CS 381, Fall 2003
Week 12 Review Problems

Below are some problems related to the material covered in class recently. Answers are in the Answers section, below. Do not turn these in.

You can expect the Final Exam (on Wednesday, December 17) to contain problems similar to some of these.

Problems

  1. In the context of CG, what do we mean by a “buffer”?
     
  2. In this class, we have dealt with three OpenGL-provided buffers. a. Name them. b. In which OpenGL buffer(s), if any, are alpha values stored? c. In which OpenGL buffer(s), if any, are vertex coordinates stored? d. In which OpenGL buffer(s), if any, are depth values stored?
     
  3. One might expect that an OpenGL implementation would store its color buffer(s) in one block of memory, its depth buffer in another, etc. But this is usually not the case. a. Explain how a typical OpenGL implementation will store its buffers. b. Give an advantage of this storage method over the “obvious” method mentioned above.
     
  4. How do applications typically store images in memory?
     
  5. a. What does BITBLT stand for, and what is it? b. Why does thinking in terms of BITBLT when designing CG hardware allow for efficient implementation of some rendering operations? c. How does this suggest that the “fragment operations” block in the OpenGL-pipeline diagram is not an entirely honest representation of what really happens internally?
     
  6. a. Is glCopyPixels affected by fog settings? b. What about glReadPixels? c. Give a quick explanation of how we can know the answers to parts a & b.
     
  7. a. What is OpenGL’s “unpack alignment”? b. How can it make your life miserable if you don’t deal with it?
     
  8. We studied three (or four?) techniques that come under the heading of “mapping”. a. Name each and briefly describe what it is (and/or how it works). b. Which mapping technique is difficult to implement in unextended OpenGL, and why?
     
  9. a. Bump mapping lacks realism in one very significant way. What is it? b. Therefore, what kind of bumps is bump mapping best at making?
     
  10. Texturing is a way to add cheap detail to a scene. What is meant by “cheap”, and why is such detail cheap?
     
  11. In texture mapping, “the image and geometry pipelines work together”. Explain.
     
  12. Inside a glBegin-glEnd pair, should texture coordinates be specified before or after the corresponding vertex coordinates? Why?
     

Answers

  1. In CG, a buffer is a place in memory (main or graphics hardware) where an image or image-like data can be stored. Specifically, it is a block of memory, arranged as a 2-D array.
  2. a. We have dealt with the front and back color buffers and the depth buffer. (OpenGL has other buffers: left and right color buffers for stereo rendering, a stencil buffer, an accumulation buffer, etc.)
     b. Alpha values are stored in the various color buffers, alongside the R, G, and B color components.
     c. None. Vertex coordinates are not stored in OpenGL buffers.
     d. Depth values are stored in the depth buffer (thus the name).
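     For illustration, the color and depth buffers are the ones named in ordinary clearing code. A minimal sketch (standard OpenGL calls; in double-buffered mode, drawing goes to the back color buffer):

         glClearColor(0.0, 0.0, 0.0, 0.0);  // clear color: black, alpha = 0
         glClearDepth(1.0);                 // clear depth: the maximum
         glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);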
  3. a. OpenGL implementations typically store all of their buffers in the same block of memory. Essentially, there is a single 2-D array of pixels. For each pixel there is a “packed” struct holding its color values, depth value, etc.
     b. One advantage of this method is that each pixel read/write can be done with a single instruction. Masking can be used to determine which buffers are read or written.
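     To make part a concrete, here is a hypothetical packed per-pixel layout. This is only a sketch; actual layouts are implementation-dependent, and the field names are made up.

         struct Pixel {                   // hypothetical packed per-pixel storage
             GLubyte r, g, b, a;          // front color buffer: RGBA
             GLubyte br, bg, bb, ba;      // back color buffer: RGBA
             GLuint  depth;               // depth buffer
         };
         Pixel framebuffer[HEIGHT][WIDTH];  // one 2-D array holds all buffers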
  4. Applications generally store images as 2-D arrays of colors. These may actually be 3-D arrays of color-component values. For example, we generally stored images as 3-D arrays of GLubyte’s, where the third array dimension is 3 (for R, G, and B).
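     For example (dimensions are made up):

         GLubyte image[100][200][3];  // 100 rows, 200 columns, 3 components (R, G, B)
         image[5][7][0] = 255;        // set the pixel at row 5, column 7 ...
         image[5][7][1] = 0;          // ... to bright red
         image[5][7][2] = 0;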
  5. a. BITBLT is bit-block transfer or bitwise block transfer. It refers to the operation of grabbing a rectangular block of pixels from an image (in a buffer or array) and copying it to a given location in the same or a different image, possibly with various masking operations being performed along the way.
     b. BITBLT can be implemented efficiently in hardware. Further, many fragment operations can be done in parallel via BITBLT. Thus, well-designed graphics hardware will include a BITBLT implementation in order to speed up fragment operations.
     c. The OpenGL pipeline diagram suggests that rasterization algorithms spit out fragments one-by-one, these are processed individually in the “fragment operations” block, and then written one-by-one to the frame buffer. However, in reality, fragment operations will mostly be done in parallel using BITBLT techniques. Thus, on well-designed graphics hardware, the pipeline block diagram does not provide an accurate picture of what happens internally. Note: This is not a problem. OpenGL only specifies effects, not implementation. As long as the image in the frame buffer is the same as if the fragments were processed individually, we have a conforming implementation. And, given that, we would, of course, like the fragment operations to be performed as quickly as possible.
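     In software, a bare-bones BITBLT is just a doubly nested copy loop, possibly applying a mask to each pixel along the way; hardware does the same job, but many pixels at a time. A sketch (the function name and the 32-bits-per-pixel format are assumptions):

         // Copy a w-by-h block of 32-bit pixels from src to dst,
         // ANDing a mask into each pixel as we go.
         void bitblt(const GLuint * src, int srcRowLen, int srcX, int srcY,
                     GLuint * dst, int dstRowLen, int dstX, int dstY,
                     int w, int h, GLuint mask)
         {
             for (int row = 0; row < h; ++row)
                 for (int col = 0; col < w; ++col)
                     dst[(dstY+row)*dstRowLen + (dstX+col)] =
                         src[(srcY+row)*srcRowLen + (srcX+col)] & mask;
         }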
  6. a. Yes, glCopyPixels is affected by fog settings.
     b. No, glReadPixels is not affected by fog settings.
     c. We know this because fog is done in the “fragment operations” portion of the pipeline. glCopyPixels passes through this block; glReadPixels does not.
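     The two calls, side by side (a sketch; the rectangle sizes and positions are made up):

         // Frame buffer -> frame buffer: the copied pixels re-enter the
         // pipeline at the fragment operations, so fog (if enabled) applies.
         glRasterPos2i(200, 0);                   // destination
         glCopyPixels(0, 0, 100, 100, GL_COLOR);  // source rectangle

         // Frame buffer -> application memory: no fragment operations.
         GLubyte pixels[100][100][3];
         glReadPixels(0, 0, 100, 100, GL_RGB, GL_UNSIGNED_BYTE, pixels);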
  7. a. OpenGL allows applications to specify that images in arrays are stored so that the data for each row always begins at a memory address divisible by 1 (that is, no restriction), 2, 4, or 8. The unpack alignment is the parameter that specifies what assumptions OpenGL can make about the starting address for each row’s data.
     b. The default unpack alignment is that row data always begins at a memory address divisible by 4. If you have an image with an odd width, and you represent each pixel color with three GLubyte’s, then this assumption will be false, and the image will be read incorrectly.
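     The usual fix is to tell OpenGL to make no alignment assumption before it unpacks the image. A sketch, using a hypothetical 3-pixel-wide RGB image, whose rows are 9 bytes each:

         GLubyte image[64][3][3];                 // odd width: each row is 9 bytes
         glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows may start at any address
         glDrawPixels(3, 64, GL_RGB, GL_UNSIGNED_BYTE, image);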
  8. a. The mapping techniques we studied are the following:
       • Texture Mapping. Here, each vertex gets texture coordinates, which are interpolated across each polygon so that each fragment gets texture coordinates. A fragment’s texture coordinates are used to look up a color in an image (a texture).
       • Bump Mapping. This is like texture mapping, except that, instead of looking up a color, we look up a normal, and use this to do lighting computations for the fragment. The result is a bumpy-looking surface.
       • Environment Mapping. This is a special case of texture mapping, where we generate texture coordinates based on the direction that light, leaving the viewer’s eye, would go after reflecting off the surface. Thus we simulate mirror-like reflection. (See the code sketch after part b.)
       If you count four techniques, then the fourth is chrome mapping, which is environment mapping in which the goal is simply to make things look metallic. Thus, the “environment” texture need not bear any resemblance to the actual environment.
     b. Bump mapping is difficult to implement in unextended OpenGL, since it requires per-fragment lighting.
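     Environment (and chrome) mapping, by contrast, is easy to turn on in unextended OpenGL, via sphere-map texture-coordinate generation. A sketch (assumes the environment texture is already set up and bound):

         glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
         glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
         glEnable(GL_TEXTURE_GEN_S);
         glEnable(GL_TEXTURE_GEN_T);
         glEnable(GL_TEXTURE_2D);
         // ... draw the object; no glTexCoord calls are needed ...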
  9. a. Bump mapping changes only the effect of lighting on a surface, not the shape of the surface. Thus, bump-mapped objects generally look realistically bumpy, except that they have smooth silhouettes.
     b. Bump mapping is good for bumps that are small and inward-facing (like those on a golf ball, or, yes, an orange).
  10. “Cheap” here means not requiring much computation (and therefore not much time). Texturing is cheap compared to increasing the polygon count in a scene, since it adds only linear interpolation (“lirping”) of texture coordinates across each polygon and a single texture look-up for each fragment, both of which can be done efficiently in hardware. In contrast, each added vertex must pass through all three transformations, as well as clipping and lighting, and an increased polygon count slows down rasterization as well.
  11. A texture passes through the early stages of the image pipeline, just as an image would when the glDrawPixels command is executed. But then it stops, and interacts with vertex/polygon data passing through the geometry pipeline. The result is that each fragment in a polygon (geometry) gets a color from the texture (image).
  12. Texture coordinates must be specified before the corresponding vertex coordinates. This is because specifying texture coordinates merely sets an OpenGL state, while specifying vertex coordinates triggers the sending of vertex data down the pipeline. States that are to affect a vertex must be set before the vertex goes down the pipeline.
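      For example, a sketch of a textured quad (texturing is assumed to be already set up and enabled):

          glBegin(GL_QUADS);
              glTexCoord2f(0.0, 0.0);       // merely sets state ...
              glVertex3f(-1.0, -1.0, 0.0);  // ... the vertex then goes down the
                                            // pipeline carrying that state
              glTexCoord2f(1.0, 0.0);  glVertex3f( 1.0, -1.0, 0.0);
              glTexCoord2f(1.0, 1.0);  glVertex3f( 1.0,  1.0, 0.0);
              glTexCoord2f(0.0, 1.0);  glVertex3f(-1.0,  1.0, 0.0);
          glEnd();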


CS 381, Fall 2003: Week 12 Review Problems / Last update: 15 Dec 2002 / Glenn G. Chappell / ffggc@uaf.edu