CS 381, Fall 2003
Week 14 Review Problems

Below are some problems related to the material covered in class recently. Answers are in the Answers section, below. Do not turn these in.

You can expect the Final Exam (on Wednesday, December 17) to contain problems similar to some of these.

Problems

  1. Perlin intended his procedural-texture techniques primarily for generating something that we did not do. What is it? Hint: We did not do it because it is not supported by unextended OpenGL 1.1, although it is supported by 1.2.
     
  2. Perlin-noise techniques result in a 2-D or 3-D array of numbers. a. What remains to be done, in order to produce a texture? b. Describe a simple, general technique for doing this.
     
  3. Briefly explain what the command glTexGen is for, and how drawing is affected by the OpenGL feature it helps configure. You do not need to give the formulae it uses.
     
  4. In an environment-mapped object, what information must be specified for each vertex?
     
  5. What Phong-Model lighting settings work well with environment mapping?
     
  6. We can use environment mapping to simulate mirror-like reflection. However, it does not quite do the job right. List some ways that environment mapping gets it wrong.
     
  7. a. What is “chrome mapping”? b. When would you use chrome mapping?
     
  8. Suppose you used facet normals with environment mapping. What would the resulting object look like?
     
  9. List one or two movie special effects that could reasonably have been created using environment mapping.
     
  10. Noticeable errors often occur in an image generated using a sphere map. a. Explain. Why do these errors occur? b. What similar technique does not have this problem? How does it work? c. We covered the technique from part b only briefly in class and did not demonstrate it. Why not?
     
  11. List two advantages of cube maps over sphere maps.
     
  12. We discussed a texture-based shadowing technique in class. Briefly explain how it works.
     
  13. a. In environment mapping, when does the texture need to be recomputed? b. In the texture-based shadowing technique discussed in class, when does the texture need to be recomputed? c. Why is it problematic to recompute a texture during animation?
     
  14. a. List the two shadow-creation techniques we covered in class. b. For each technique, indicate what situations it handles well and what situations it does not handle well.
     
  15. In OpenGL lighting, you can specify three colors each for both lights and materials (actually, four for materials, if you count emission, but we will ignore that for now). In practice, however, we rarely make use of all the variation that this allows. Explain the usual, “sensible” way to set up the Phong Model colors for both lights and materials.
     
  16. For each object listed below, indicate what CG technique(s) might be useful in drawing it, and explain briefly how each technique works. You do not need to explain how texture mapping works. a. A grapefruit. b. A shiny ball (e.g., a Christmas-tree ornament). c. An icon, as in a GUI. d. A scene containing a slide show projected on a screen. e. An object made of marble (or other complex veined rock).
     
  17. Suppose you want to draw a complex, relatively realistic-looking scene using efficient (fast) CG methods. What objects or effects will your scene contain, and what methods will you use to render them?
     

Answers

  1. Perlin intended his techniques primarily for generating 3-D textures.
  2. a. It remains to map numbers to colors, in order to create an image.
     b. There are many ways to do this. The way discussed in class involves choosing specific numbers and the RGB colors they map to. For numbers between the chosen ones, interpolate between colors. Given the chosen numbers and their associated colors, it is not hard to write a function to do the number-to-color conversion.
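The interpolation scheme in part b can be sketched in C. The control points below (specific noise values and the colors they map to) are made up purely for illustration:

```c
#include <math.h>

/* Hypothetical control points: noise values in [0,1] and the RGB colors
   (components in [0,1]) they map to. */
typedef struct { double value; double r, g, b; } ColorStop;

static const ColorStop stops[] = {
    { 0.0, 0.2, 0.1, 0.0  },  /* dark brown */
    { 0.5, 0.8, 0.7, 0.5  },  /* tan        */
    { 1.0, 1.0, 1.0, 0.9  }   /* near-white */
};
enum { NSTOPS = sizeof(stops) / sizeof(stops[0]) };

/* Map a noise value to a color by linear interpolation between the two
   surrounding control points; values outside the range are clamped. */
void value_to_color(double v, double *r, double *g, double *b)
{
    int i;
    if (v <= stops[0].value) {
        *r = stops[0].r; *g = stops[0].g; *b = stops[0].b;
        return;
    }
    for (i = 1; i < NSTOPS; ++i) {
        if (v <= stops[i].value) {
            double t = (v - stops[i-1].value)
                     / (stops[i].value - stops[i-1].value);
            *r = stops[i-1].r + t * (stops[i].r - stops[i-1].r);
            *g = stops[i-1].g + t * (stops[i].g - stops[i-1].g);
            *b = stops[i-1].b + t * (stops[i].b - stops[i-1].b);
            return;
        }
    }
    *r = stops[NSTOPS-1].r; *g = stops[NSTOPS-1].g; *b = stops[NSTOPS-1].b;
}
```

Applying this function to every entry of the 2-D or 3-D noise array produces the final texture image.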
  3. The glTexGen* command is involved in automatic texture-coordinate generation. Such generation is enabled/disabled with glEnable & glDisable. It is configured using glTexGen*. When automatic texture-coordinate generation is enabled, user-supplied texture coordinates (specified with glTexCoord*, say) are ignored. Instead, OpenGL computes texture coordinates based on various information, which may include vertex coordinates, the normal vector, and the model/view matrix.
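As a concrete illustration, in GL_OBJECT_LINEAR mode each generated coordinate is simply the dot product of the (object-space) vertex with a user-supplied plane, which would be set with a call like glTexGenfv(GL_S, GL_OBJECT_PLANE, plane). A minimal sketch of the computation OpenGL performs:

```c
#include <math.h>

/* In GL_OBJECT_LINEAR mode, each generated texture coordinate is
   s = p1*x + p2*y + p3*z + p4*w: the dot product of the vertex with
   a user-supplied plane. */
double object_linear_coord(const double plane[4], const double vertex[4])
{
    return plane[0] * vertex[0] + plane[1] * vertex[1]
         + plane[2] * vertex[2] + plane[3] * vertex[3];
}
```

For example, with the plane (1, 0, 0, 0), the generated s coordinate is just the vertex's x coordinate, which is how one makes texture coordinates track vertex coordinates.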
  4. When an object is environment-mapped, each vertex needs vertex coordinates (as always) and texture coordinates. But since texture coordinates are computed using the reflected light direction, the information we really need are the vertex coordinates and the normal vector. In addition, if we want the object to be a colored reflector, we might need an object color as well. (In OpenGL, we would use glColor* to specify the object color, and set the texture environment mode to “modulate”.)
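The reflected direction mentioned above comes from the standard reflection formula r = i - 2(n.i)n, where i is the incident (view) direction and n is the unit normal. A small sketch:

```c
#include <math.h>

/* Reflect incident direction i about unit normal n: r = i - 2(n.i)n.
   This reflected direction is what environment mapping converts into
   texture coordinates. Assumes n has unit length. */
void reflect_dir(const double i[3], const double n[3], double r[3])
{
    double d = n[0]*i[0] + n[1]*i[1] + n[2]*i[2];  /* n . i */
    int k;
    for (k = 0; k < 3; ++k)
        r[k] = i[k] - 2.0 * d * n[k];
}
```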
  5. None. The Phong Model simulates plastic-y materials that are close to perfectly diffuse. Environment mapping simulates perfectly specular materials. So we generally disable Phong-Model lighting entirely when we do environment mapping. Perhaps we could mix the two to simulate smooth plastic ...
  6. Environment mapping generates incorrect images because:
      • The environment map is computed from a single point, so reflections are only correct for a distant environment; nearby objects are reflected with incorrect parallax.
      • The object cannot reflect itself (no self-reflection).
      • Multiple reflective objects do not correctly reflect each other (no interreflection).
  7. a. Chrome mapping is environment mapping in which the texture does not necessarily match the actual environment.
     b. We use chrome mapping when we want to give an object a metallic appearance, and we do not care whether it looks like it is reflecting what lies around it.
  8. Facet normals give a lit object an angular, gem-like appearance. Used with environment mapping, the result would be an angular, mirror-like object, like a gem spray-painted with metallic paint.
  9. Many answers are possible; any shiny, mirror-like CG object in a film is a candidate. One well-known example is the liquid-metal T-1000 in Terminator 2, which was rendered with reflection-mapping techniques. Perhaps you can think of others.
  10. a. When a sphere map is used, pixels at the edge of an environment-mapped object are often given the wrong color. This results in a fuzzy flicker effect when the object moves. The reason this occurs is that, when directions are turned into texture coordinates, directions directly away from the viewer — close to (0, 0, –1) — are mapped to the ring on the outside of the sphere map. Thus, two directions that are nearly the same may be mapped to texels on opposite sides of the texture. Standard texture-coordinate interpolation will then paint texels in the middle of the sphere map onto polygons whose colors should all come from the edge of the sphere map.
      b. To avoid this problem, use a cube map: an environment map consisting of 6 textures. These are placed together as the six sides of a cube. They contain a picture of the environment as viewed from the center of the cube. Directions are converted to texture coordinates, but now directions that are close together will turn into texture coordinates that are close together, and so the above problem is avoided.
      c. We did little with cube maps in class because standard OpenGL does not support them. This is because the use of six separate texture images requires a significant amount of bookkeeping work, more than an OpenGL implementation could reasonably expect from arbitrary graphics hardware.
  11. First, a cube map has no singularity: directions that are close together always map to nearby texels, on the same or an adjacent face, so the edge artifacts described in part a of the previous answer do not occur. Second, the six cube-map textures are ordinary, undistorted views of the environment, so they are easy to generate: render the scene six times from the cube's center, or use six photographs. A sphere map, in contrast, requires a specially warped image.
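For reference, the direction-to-texel conversion for a cube map works by choosing the face of the largest direction component and dividing the other two components by it. A sketch following the usual major-axis convention (exact face orientations vary between APIs):

```c
#include <math.h>

/* Select a cube-map face (0..5: +X, -X, +Y, -Y, +Z, -Z) from a
   direction d, and compute (s, t) in [0,1] on that face. Nearby
   directions always yield nearby (face, s, t), avoiding the sphere-map
   edge problem. */
int cubemap_lookup(const double d[3], double *s, double *t)
{
    double ax = fabs(d[0]), ay = fabs(d[1]), az = fabs(d[2]);
    double sc, tc, ma;
    int face;
    if (ax >= ay && ax >= az) {               /* X is the major axis */
        face = d[0] > 0 ? 0 : 1;
        ma = ax; sc = -d[2] * (d[0] > 0 ? 1 : -1); tc = -d[1];
    } else if (ay >= az) {                    /* Y is the major axis */
        face = d[1] > 0 ? 2 : 3;
        ma = ay; sc = d[0]; tc = d[2] * (d[1] > 0 ? 1 : -1);
    } else {                                  /* Z is the major axis */
        face = d[2] > 0 ? 4 : 5;
        ma = az; sc = d[0] * (d[2] > 0 ? 1 : -1); tc = -d[1];
    }
    *s = (sc / ma + 1.0) / 2.0;               /* remap [-1,1] to [0,1] */
    *t = (tc / ma + 1.0) / 2.0;
    return face;
}
```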
  12. The texture-based shadowing technique works as follows. We create a texture consisting of the silhouette of the shadow-casting object, as seen from the light-source position. When we draw the object on which the shadow is cast, we use this as a texture, and create a texture transformation that projects the shadow properly on the object. Conveniently, the texture coordinates are the same as the vertex coordinates, and the projection used in the texture transformation is essentially the same as the projection used when we rendered the texture.
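In effect, the texture transformation computes each (s, t) by pushing the vertex through the light's projection. A sketch of that computation, using a hypothetical column-major 4x4 matrix (the layout OpenGL uses):

```c
#include <math.h>

/* Apply the light's combined projection matrix m (column-major, as
   OpenGL stores matrices) to a homogeneous vertex v, then do the
   perspective divide. The resulting x, y -- remapped from [-1,1] to
   [0,1] -- are the (s, t) used to look up the shadow texture. */
void shadow_texcoord(const double m[16], const double v[4],
                     double *s, double *t)
{
    double out[4];
    int row, col;
    for (row = 0; row < 4; ++row) {
        out[row] = 0.0;
        for (col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col];   /* M * v */
    }
    *s = (out[0] / out[3] + 1.0) / 2.0;
    *t = (out[1] / out[3] + 1.0) / 2.0;
}
```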
  13. a. In environment mapping, the texture depends on the environment and the viewing position. Thus, we need to recompute whenever the environment changes or the viewing position changes.
      b. In the texture-based shadowing technique, the texture depends on the shape, position, and orientation of the object casting the shadow, and the position of the light source. Thus, we need to recompute whenever the object casting the shadow moves or changes shape in any way, or the light source moves.
      c. Loading a texture is not an operation that most graphics libraries (including OpenGL implementations) are designed to do quickly. Thus, reloading a texture during animation is likely to slow down the animation; on some systems, the slowdown can be severe.
  14. a. The two shadow-creation techniques we discussed are:
        • Shadows via projection.
        • Shadows via textures.
      b. The projection-based technique can make shadows of any sort of dynamic object on a plane. It does not work well for casting shadows on a complex object, and it does not do self-shadowing well. The texture-based technique can make shadows of any sort of static object on any sort of object. It does not work well when the object casting the shadow moves or changes shape, and it also does not do self-shadowing well.
  15. Give each material identical ambient and diffuse colors (the “color of the object”) and a white or gray specular color, since highlights on most shiny surfaces take the color of the light, not the object. For each light, set the diffuse and specular colors to the color of the light, and the ambient color to a dim version of that same color.
  16. a. A grapefruit. Bump mapping. Create a 2-D array of normal vectors approximating the shape of the surface of a grapefruit. As in texture mapping, do a per-fragment look-up in this table, use the normal found there to perturb the surface normal, and light accordingly. Use this technique to put bumps on a yellow sphere (or near-sphere). A grapefruit is basically a big, yellow orange, right?
      b. A shiny ball (e.g., a Christmas-tree ornament).
        • Option #1: Environment mapping. Create a texture containing a (deformed?) view of the environment around the sphere. Map this onto the sphere by generating texture coordinates from the direction in which light from the eye would reflect off the sphere.
        • Option #2: Ray tracing. For each pixel, shoot a ray from the eye through the pixel and into the scene. When a ray hits a reflecting object (e.g., the sphere), bounce it off, see what it hits next, etc. Use the information found in this manner to determine the pixel color (and be prepared to wait a bit for the image to be rendered).
      c. An icon, as in a GUI. BITBLT (or glDrawPixels, if you like). Copy a block of pixels to the frame buffer.
      d. A scene containing a slide show projected on a screen. Texture mapping, automatic texture-coordinate generation, texture transformation. The texture should be an image of the slide. Texture coordinates, for the object the slide is projected on, can be automatically generated to be the same as the vertex coordinates. The texture transformation is the projection that places the slide on the object it is projected on.
      e. An object made of marble (or other complex veined rock). Procedural texture from Perlin noise. Various scaled copies of a simple noise function are summed to create simulated 1/f noise. Noise values are then mapped to colors so as to simulate veined rock. The resulting (2-D or 3-D) image is used as a texture.
  17. Many, many answers are possible. See the answer to the previous question for ideas.


CS 381, Fall 2003: Week 14 Review Problems / Last update: 16 Dec 2002 / Glenn G. Chappell / ffggc@uaf.edu