I’m just getting started with fragment programs. I’m trying to figure out if I can access a normal “map” generated by OpenGL from my glNormal3f() calls, or if I have to generate this map for myself.
In examples in the spec for ATI_text_fragment_shader I see stuff like
SampleMap r4, t5.str; # Sample normal map
but I’m unsure of exactly what this does. Also, how can you tell (or assign) what’s in t0, t1, t2, … ?
glNormal3f() defines a per-vertex normal. You should use them in vertex programs …
SampleMap r4, t5.str samples a filtered texel into register r4 from a texture unit (I think texture unit 4, matching the destination register number), using the s, t, r texture coordinates from coordinate set 5 (t5.str).
You should generate a normal map texture and fetch per-fragment normals using SampleMap calls.
That does help, thanks.
My question now is - what’s the best way to generate a normal map? I know there are all kinds of tricks like using a preprocessor program, generating one in Photoshop, etc., but I’d like the most modern and automated way of doing it, preferably without importing a texture file (i.e., generating the map within the program).
Normal maps can be generated in many different ways, and the method you choose is likely to depend on what you are attempting to accomplish.
Off the top of my head, three primary categories of normal map generation exist:
Height field derived gradients (presently most common)
Maps derived from a high detail model (up and coming)
Volume gradients (mainly visualization)
The simplest algorithmic approach I can suggest is to generate a normal map that will make a flat plane look like a sphere. The procedure is as follows:
Set up a viewport the size of the intended texture.
Clear to black
Draw a sphere with the normals mapped to colors as follows:
r = n.x*0.5 + 0.5
g = n.y*0.5 + 0.5
b = n.z*0.5 + 0.5
(This can be done using the shader)
CopyTexImage this into your normal map texture.
Draw with it.
For a more complex example of this sort of thing, the ATI developer relations site has a tool that generates normal maps via ray tracing. You can find it here:
Seriously, it’s a common technique in volume visualization known as volume shading. Pre-computed gradients were the only way to apply an external light source to volume graphics before the GeForce FX and Radeon 9700. Now, with more texture fetches, you can also compute the gradients on the fly.
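For what a pre-computed gradient looks like, here is a minimal sketch, again under my own assumptions (function name, a cubic dim^3 volume stored as x + dim*(y + dim*z)): a central-difference gradient per voxel, whose normalized (and usually negated) value serves as the shading normal in volume shading.

```c
/* Sketch: estimate the gradient of a scalar volume at voxel (x, y, z)
   with central differences. The volume is a dim^3 float array indexed
   as x + dim * (y + dim * z); neighbors are clamped at the edges. */
void volume_gradient(const float *vol, int dim,
                     int x, int y, int z, float g[3])
{
#define V(ix, iy, iz) vol[(ix) + dim * ((iy) + dim * (iz))]
    int xl = x > 0 ? x - 1 : x, xr = x < dim - 1 ? x + 1 : x;
    int yl = y > 0 ? y - 1 : y, yr = y < dim - 1 ? y + 1 : y;
    int zl = z > 0 ? z - 1 : z, zr = z < dim - 1 ? z + 1 : z;
    g[0] = 0.5f * (V(xr, y, z) - V(xl, y, z));
    g[1] = 0.5f * (V(x, yr, z) - V(x, yl, z));
    g[2] = 0.5f * (V(x, y, zr) - V(x, y, zl));
#undef V
}
```

Baking these gradients into a texture gives the pre-computed variant; with enough texture fetches per fragment, the same six samples can be taken in the shader instead.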