CS 381 Fall 2010 > Lecture Notes for Thursday, November 4, 2010
Many objects are drawn so that the user never sees the back side of any polygon. Thus, if we avoid drawing these back-facing polygons, we may save time. Throwing out polygons is called culling; when we throw out back-facing polygons, it is back-face culling. In the pipeline, culling is done at the end of the Vertex Operations, after clipping.
In OpenGL, turn on culling with
glEnable(GL_CULL_FACE);
and turn it off with the corresponding glDisable call.
OpenGL can cull back-facing polygons, front-facing polygons, or (oddly) both. Determine which is done using glCullFace, passing it GL_BACK, GL_FRONT, or GL_FRONT_AND_BACK. For example, to do back-face culling:
glCullFace(GL_BACK);
We could produce the same effect in a GLSL fragment shader—although it might be somewhat less efficient, since it requires the polygon to be rasterized:
if (!gl_FrontFacing) discard;
Note that back-face culling can be used as a simple HSR method. It works when the only thing drawn is a convex object (sphere, cube, etc.), with all polygons facing outward. More generally, it also works when many such objects are drawn, in back-to-front order.
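For example, here is a minimal sketch of a display-function fragment that uses back-face culling as the only HSR for a single convex object; the sphere and its parameters are just for illustration.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glutSolidSphere(1., 40, 40);   // convex object, all polygons facing outward
glDisable(GL_CULL_FACE);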
Some objects are drawn so that as a polygon moves, the user ends up seeing both of its sides. In such cases, we may wish to color the two sides differently.
GLSL supports automatically choosing different colors for front-facing and back-facing polygons. To enable distinct front & back colors, do the following (in your application):
glEnable(GL_VERTEX_PROGRAM_TWO_SIDE);
I am told that some implementations also require the following in the vertex shader (and it never hurts):
#extension GL_VERTEX_PROGRAM_TWO_SIDE : enable
If you do the above, then you need to set both gl_FrontColor and gl_BackColor in the vertex shader.
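As a minimal sketch, the relevant vertex-shader lines might look like the following; the choice of back color here is made up, purely for illustration.
gl_FrontColor = gl_Color;                              // front side: the usual color
gl_BackColor = vec4(1.0 - gl_Color.rgb, gl_Color.a);   // back side: e.g., inverted color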
In the fragment shader, gl_Color is set to the interpolated color value, as usual. However, for back-facing polygons, the value of gl_BackColor is used. For front-facing polygons, the value of gl_FrontColor is used, as before.
In order to light both sides of a polygon correctly, reverse the normal when the polygon is back-facing.
if (!gl_FrontFacing) surfnorm = -surfnorm;
When we do the above, our normals should point toward the front side of the polygon.
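Putting the pieces together, a fragment shader might handle two-sided coloring & lighting roughly as follows. This is only a sketch; the varying name surfnorm is a placeholder, and the actual lighting computation is omitted.
varying vec3 surfnorm;   // surface normal from the vertex shader (name is hypothetical)

void main()
{
    vec3 n = normalize(surfnorm);
    if (!gl_FrontFacing)
        n = -n;                  // flip the normal for back-facing fragments
    // gl_Color already holds the interpolated front or back color, as appropriate
    gl_FragColor = gl_Color;     // a lighting computation using n would go here
}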
Note: The teapot drawn by glutSolidTeapot has its normals facing outward, as you would expect. However, if you try this two-sided coloring & lighting with the teapot, then you find that, tragically, it has the back side of its polygons facing outward. Thus, the normals point toward the back side of the polygons.
See twoside_shaders.zip for examples of shaders that do two-sided lighting and (if it is enabled in the application) two-sided coloring. These shaders are intended for use with useshader.cpp.
Major Topics: Introduction to Textures
A texture is an image that can be painted on a polygon (or some generalization of this idea). Each pixel in a texture is a texel.
Most techniques involving textures have the word “mapping” in them somewhere. I will use map to refer to the data set, i.e., the texture itself. I will use mapping to refer to the technique. So, for example, the act of painting a texture on the polygons making up a surface is referred to as texture mapping.
In order to use a texture, we need two pieces of information: the texture itself (the map), and texture coordinates, which tell where on each polygon the texture is to be placed. Texture coordinates are specified for each vertex, inside a glBegin ... glEnd pair. There is also a texture transformation, which is very similar to the model/view and projection transformations, but is applied to texture coordinates.
Our initial discussion of textures will be organized as follows. We will first discuss the initialization phase: how to make a texture. Next, we discuss the display phase in the application: how to render using a texture in OpenGL. Finally, we discuss how textures are dealt with in GLSL shaders.
See usetexture.cpp for a C++ application that uses a texture. In order to execute it, you will need (as with useshader.cpp) separate text files holding the shader source, as well as a machine with programmable graphics hardware and the GLEW package installed.
The obvious way to get an image is to load it from a file. I will leave it to you to figure out how to do this. For now, I will make textures using a “quick and dirty” method: creating them from a string array. Eventually, we will use OpenGL to render a texture image. In any case, making a texture is usually something you do in the initialization section of your program. Doing it during display can be slow. Further, if the texture does not change, then it only needs to be made once.
When it is turned over to OpenGL, a texture image should be stored in a 2-D array of color values, with each dimension being a power of 2. The values can be either RGB or RGBA. I usually store a color component as a GLubyte, in which case its value is in the range [0,255].
// Texture temp storage
enum { IMG_WIDTH = 8, IMG_HEIGHT = 8 };
GLubyte texImage[IMG_HEIGHT][IMG_WIDTH][3];   // The image
                                              // 3rd subscript: 0 = R, 1 = G, 2 = B
The last dimension above would be 4 if colors are stored as RGBA.
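For instance, here is a rough sketch of the “quick and dirty” method: the pattern strings below are made up, and the loop simply turns them into black & white texels in texImage.
const char * pattern[IMG_HEIGHT] = {   // hypothetical 8x8 checkerboard pattern
    "X.X.X.X.",
    ".X.X.X.X",
    "X.X.X.X.",
    ".X.X.X.X",
    "X.X.X.X.",
    ".X.X.X.X",
    "X.X.X.X.",
    ".X.X.X.X" };

for (int i = 0; i < IMG_HEIGHT; ++i)
    for (int j = 0; j < IMG_WIDTH; ++j)
    {
        GLubyte c = (pattern[i][j] == 'X') ? 255 : 0;   // white or black texel
        texImage[i][j][0] = c;   // R
        texImage[i][j][1] = c;   // G
        texImage[i][j][2] = c;   // B
    }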
To send the texture to OpenGL, we first generate a texture name. This is an integer (GLuint) that identifies the texture to OpenGL. If we have more than one texture loaded, then we can specify which one to use, by its name. I generally store names in a global array.
enum { NUM_TEXTURES = 1 };
GLuint texNames[NUM_TEXTURES];
To generate texture names, call glGenTextures, with the number of names required and a pointer to the array.
glGenTextures(NUM_TEXTURES, texNames);
Once we do this, the first texture name is stored in texNames[0], the second (if NUM_TEXTURES is greater than 1) in texNames[1], etc.
In most OpenGL commands, a texture is referred to by a target, which is a predefined OpenGL constant. For a 2-D texture image, the target is GL_TEXTURE_2D. In order to make sure that the target refers to the proper texture, we bind the texture to the target, by calling glBindTexture with the target and the texture’s name.
glBindTexture(GL_TEXTURE_2D, texNames[0]);
After doing the above, each time we use GL_TEXTURE_2D, we are referring to the texture whose name is stored in texNames[0]. This remains true until we bind a different texture.
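For example (a minimal sketch, assuming NUM_TEXTURES is at least 2), we can change which texture subsequent calls refer to simply by binding a different name:
glBindTexture(GL_TEXTURE_2D, texNames[0]);
// ... GL_TEXTURE_2D now refers to the first texture ...
glBindTexture(GL_TEXTURE_2D, texNames[1]);
// ... and now it refers to the second ...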
At this point, we are ready to give the texture to OpenGL.
Make sure the array holds the proper color data, and then call one of two functions.
The first is glTexImage2D. This has nine parameters. All are integers except the last, which is a pointer.
- target: GL_TEXTURE_2D. Remember that, since we bound our texture to this target, this is really a reference to the named texture.
- level: the MIPmap level; for now, make this 0.
- internalFormat: how OpenGL should store the texture internally; for example, GL_RGBA.
- width, height: the dimensions of the image, in texels.
- border: the border width; make this 0.
- format: the format of the colors in the array. Make this GL_RGB if your array is as above. Make it GL_RGBA if your array holds 4-component colors.
- type: the data type of the color components. If your array holds GLubyte values, make this GL_UNSIGNED_BYTE.
- data: a pointer to the start of the array holding the image.
Example call:
glTexImage2D(GL_TEXTURE_2D,          // target
             0,                      // level
             GL_RGBA,                // internalFormat
             IMG_WIDTH, IMG_HEIGHT,  // width, height
             0,                      // border
             GL_RGB,                 // format
             GL_UNSIGNED_BYTE,       // type
             &texImage[0][0][0]);    // data
An alternate, and simpler, method is to use the GLU wrapper gluBuild2DMipmaps. We will discuss MIPmaps later. For now, you may simply call this function, with the same parameters as glTexImage2D, except for level and border. Example call:
gluBuild2DMipmaps(GL_TEXTURE_2D,          // target
                  GL_RGBA,                // internalFormat
                  IMG_WIDTH, IMG_HEIGHT,  // width, height
                  GL_RGB,                 // format
                  GL_UNSIGNED_BYTE,       // type
                  &texImage[0][0][0]);    // data
Once this is done, OpenGL has stored a copy of the texture. The information in the array is no longer needed. In particular, the array may be reused for another texture.
Lastly, we will want to send our textures to our GLSL shaders. This is done via a texture channel (a.k.a. texture unit). Texture channels are numbered: 0, 1, 2, etc. Generally, we send each texture over a different channel. To indicate which texture channel is to be used, just before binding a texture name, call glActiveTexture with the proper channel, specified by an OpenGL constant: GL_TEXTURE followed by a number.
glActiveTexture(GL_TEXTURE0);                // Texture channel 0
glBindTexture(GL_TEXTURE_2D, texNames[0]);
We need to send our texture(s) to the shaders. Shaders access a texture using a variable of “sampler” type; we discuss these in the next subsection. For now, we assume that for each texture, there is a uniform sampler variable. From the application point of view, a sampler only needs to be sent a single integer. This is the number of the texture channel over which it should receive a texture. For example, suppose that our program object is prog1, and a shader has a uniform sampler variable named myTex0, which should receive a texture over channel 0. Then we do the following in our display function.
GLint loc = glGetUniformLocationARB(prog1, "myTex0");
glUniform1iARB(loc, 0);   // 0 is texture channel
When we draw a textured polygon, we need to specify texture coordinates for each vertex. This is done, for each vertex, with glTexCoord*. For 2-D textures, we generally use glTexCoord2d.
glBegin(GL_TRIANGLES);
    ...
    glNormal3d(1., 2., 1.);
    glTexCoord2d(0.5, 0.7);
    glVertex3d(2., 5., -3.);
    ...
glEnd();
We can modify texture coordinates using the texture transformation. Like vertices, texture coordinates are sent through the pipeline as 4-D vectors in homogeneous form. Thus, the texture transformation is performed just like the model/view and projection transformations: by multiplication by a 4 × 4 matrix, with the 4th-coordinate division being necessary to get a useable 3-D (or 2-D) vector.
To set the texture transformation, use GL_TEXTURE as the matrix mode. Be sure to go back to model/view mode when you are done. Also, a texture transformation is sent to shaders over a texture channel; this can be the same channel as a texture image. Be sure to set the channel before moving to texture-matrix mode.
glActiveTexture(GL_TEXTURE0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glRotated(20., 0., 0., 1.);   // 2-D, so z-axis rotation
glMatrixMode(GL_MODELVIEW);
Texture coordinates are given to the vertex shader in gl_MultiTexCoord0, which is an attribute vec4 in homogeneous form. The texture transformation is given in gl_TextureMatrix[channel], which is a uniform mat4, with channel being the texture channel number. Texture coordinates can be sent to the fragment shader in an item of the array gl_TexCoord. Each item of this array is a varying vec4. It should be stored in homogeneous form; the interpolation is done correctly.
Putting all this together, when using textures, you almost certainly want the following line in your vertex shader.
gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
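For reference, a minimal texturing vertex shader, written as a sketch using only the standard built-in variables mentioned above, might look like this:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;   // usual vertex transformation
    gl_FrontColor = gl_Color;                                 // pass the color through
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0; // transformed texture coords
}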
In GLSL, a sampler variable allows for resolution-independent access to an image. A 2-D texture should be accessed using a variable of type sampler2D, which should be uniform. Usually, we only need such a variable in our fragment shader.
uniform sampler2D myTex0;
Associated with each sampler type is a look-up function. For 2-D samplers, the function is texture2D. This takes a sampler and a vec2, and returns the color in a vec4.
Again, putting all this together, in our fragment shader, we need to do the 4th-coordinate division on our texture coordinates, and then do the color lookup using texture2D.
vec2 texcoord = gl_TexCoord[0].st / gl_TexCoord[0].q;
vec4 texturecolor = texture2D(myTex0, texcoord);
Above, the “.st” and “.q” are using the convention that texture coordinates are referred to as s, t, p, q. Using “.xy” and “.w” would have the same effect.
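Putting this together, a minimal fragment-shader sketch using the myTex0 sampler declared above might look like the following; modulating the texture color by gl_Color is just one possible way to combine them.
uniform sampler2D myTex0;   // receives the texture over channel 0

void main()
{
    vec2 texcoord = gl_TexCoord[0].st / gl_TexCoord[0].q;   // 4th-coordinate division
    vec4 texturecolor = texture2D(myTex0, texcoord);        // texture look-up
    gl_FragColor = texturecolor * gl_Color;                 // modulate by interpolated color
}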
See basictex_shaders.zip for examples of shaders that use textures. These shaders are intended for use with usetexture.cpp.
Introduction to Textures will be continued next time.
ggchappell@alaska.edu