Normal Map

Hi,

I am in the process of creating a normal map. Usually when mapping a texture to a polygon, you supply the normal vector of the polygon and then map the texture onto it. I have heard that if the texture is actually a normal map, the normal vector information is taken from the image and applied automatically. Has anyone heard of that, and how would I go about telling OpenGL that the texture is a normal map and should be treated as such? Is it something like setting a flag? I would appreciate any help. Has anyone used it?

You need to use shaders, i.e. through the OpenGL Shading Language (GLSL).

You then need to do all the calculations yourself in that shader. There you can use the content of your textures (i.e. a normal map) however you wish.

There is no way to let OpenGL do it for you.
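
As a rough sketch (not something OpenGL will do on its own), a minimal GLSL fragment shader that reads a normal map and uses it for a diffuse term could look like the following. The names normalMap and lightDirTS are placeholders, and the matching vertex shader still has to write gl_TexCoord[0] and supply the light vector in the same space as the stored normals:

// Fragment shader (legacy GLSL) - illustrative only
uniform sampler2D normalMap;   // the normal map texture (assumed uniform name)
varying vec3 lightDirTS;       // light direction in the normal map's space

void main()
{
    // Fetch the stored normal and expand it from [0,1] to [-1,1]
    vec3 n = texture2D(normalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;

    // Diffuse term: N dot L, clamped to zero
    float diffuse = max(dot(normalize(n), normalize(lightDirTS)), 0.0);

    gl_FragColor = vec4(vec3(diffuse), 1.0);
}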

Jan.

It is not taken as the normal automatically (although an application or engine built on top of OpenGL might do this for you).

You need to set up either texture combiners or some other form of shader that uses the normal fetched from the texture and performs a lighting calculation with it against another vector transformed into the same space the normal map is stored in. Typically that space is tangent space, and the lighting calculation is a DOT3 texture operation or the equivalent shader instruction.

So, for example, you could have the light vector transformed to tangent space (this takes the normal, tangent and binormal, and is done either in software or with a shader). Then you have the DOT3 operation using the normal-map texture fetch and the tangent-space light vector.

For example:

http://www.paulsprojects.net/tutorials/simplebump/simplebump.html
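
In GLSL terms, the tangent-space transform described above might look roughly like the sketch below; the tangent attribute and the lightPosEye uniform are assumed names, and the binormal is derived with a cross product:

// Vertex shader (legacy GLSL) - illustrative sketch
attribute vec3 tangent;          // per-vertex tangent, supplied by the application
uniform vec3 lightPosEye;        // light position in eye space (assumed name)
varying vec3 lightDirTS;         // light direction in tangent space

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 t = normalize(gl_NormalMatrix * tangent);
    vec3 b = cross(n, t);        // binormal

    vec3 posEye   = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 lightDir = lightPosEye - posEye;

    // Express the eye-space light vector in the tangent/binormal/normal basis
    lightDirTS = vec3(dot(lightDir, t), dot(lightDir, b), dot(lightDir, n));

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}

A fragment shader like the one sketched earlier then performs the equivalent of the DOT3 operation between the fetched normal and lightDirTS.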

Thank you very much for your reply. I was thinking of applying normal vectors on a per-pixel basis and that OpenGL would then do the calculation, but I must be mistaken. I will have a look at that tutorial… I think I saw it before, but that was a long time ago when I was just starting out.
I am just a little confused, because at the beginning of the OpenGL programming guide it never mentions any calculations being necessary when normal vectors are applied, so I assumed OpenGL does that for you. Or is that only now because I am texture mapping and actually assigning normal vectors per pixel? Thanks again for the help so far!

The built-in lighting calculations of OpenGL work only per vertex; if you want it per pixel, you have to do it yourself…
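
To make that concrete, here is a rough sketch of per-pixel diffuse lighting without any normal map, just to show where the work moves: the vertex shader only passes the vectors along, and the actual lighting equation runs in the fragment shader (lightPosEye is an assumed uniform name):

// Vertex shader (legacy GLSL) - pass eye-space vectors to the fragment stage
varying vec3 normalEye;
varying vec3 lightDirEye;
uniform vec3 lightPosEye;

void main()
{
    normalEye   = gl_NormalMatrix * gl_Normal;
    lightDirEye = lightPosEye - vec3(gl_ModelViewMatrix * gl_Vertex);
    gl_Position = ftransform();
}

// Fragment shader - the interpolated vectors are re-normalized and the
// lighting equation is evaluated once per fragment instead of per vertex
varying vec3 normalEye;
varying vec3 lightDirEye;

void main()
{
    float diffuse = max(dot(normalize(normalEye), normalize(lightDirEye)), 0.0);
    gl_FragColor  = vec4(vec3(diffuse), 1.0);
}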

Oh I see. I will have a look at the programming guide and everything. Thanks for all your help.

I have done a little bit more research… So if I use a shader, I won't need to declare every pixel as a vertex? It's just that in the bump-mapping tutorial, as far as I can tell, vertices are still being declared in order to get their positions. Also, could somebody maybe explain to me why those cube maps are being used?

I am basically trying to “just” attach a normal map to a polygon and make a 3D model out of it, or at least test that the lighting is, for example, working in accordance with and reacting to the normal map.

I am sorry for all the questions; I am just trying to find some explanations in layman's terms to understand this better.

The vertex shader performs the vector transformation to tangent space. These vectors are then interpolated and sent to the fragment shader for each pixel, so you end up with per-vertex calculated vectors linearly interpolated per pixel. The cube map is there to normalize a 3-component vector without having to compute the magnitude with a square root and then divide. You simply do a cube-map texture fetch using the vector as a coordinate, and the result is the normalized vector. On some hardware this is more efficient than a mathematical normalize operation.
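
A rough GLSL equivalent of that normalization cube map trick, assuming a cube map (here called normCube) whose texel in direction v stores normalize(v) packed into [0,1] RGB:

// Fragment shader snippet (legacy GLSL) - illustrative only
uniform samplerCube normCube;   // normalization cube map (assumed name)
varying vec3 lightDirTS;        // interpolated, so no longer unit length

void main()
{
    // The lookup direction does not need to be unit length; the fetched
    // texel already holds the normalized vector, re-expanded to [-1,1]
    vec3 l = textureCube(normCube, lightDirTS).rgb * 2.0 - 1.0;

    // On hardware with cheap math this would simply be:
    // vec3 l = normalize(lightDirTS);

    gl_FragColor = vec4(l * 0.5 + 0.5, 1.0);
}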