normals

Hello,

I am thinking about implementing bump mapping in my terrain engine, but as I think about it, a problem comes to mind.

When using normal maps, the normals come from the map, right? But the per-vertex normals of the terrain mesh are smoothed, so if I used normal maps instead of per-vertex lighting, this smoothing would get lost.

So how do I combine these two? I guess that modulating the per-pixel-lit, bump-mapped surface with the brightness coming from per-vertex lighting (which mathematically would mean multiplying) would make it too dark: the colors from the per-pixel lighting pass are already “right” (apart from the fact that they are not smoothed), so modulating them on top would darken the result further.

So how does one solve this (combining per-pixel lighting from normal maps with smoothed-normal geometry)?

Thanks
Jan

The bumpmap normals are applied using a texture. This texture could be in “object space”, in which case your vertex normals don’t get a look-in, but it sounds like you want “tangent space” bump mapping.

With tangent space bump mapping you can apply a tiled pattern, not just real normals derived from data, depending on your intent.

A key concept with tangent space bump mapping is the transformation of vectors to tangent space and their interpolation across the polygon prior to the bumpmap lighting calculation. This is basically the smoothing you thought you were going to miss; it isn’t lost, it is still there. What you do is generate a normal, tangent, and binormal vector at each vertex, smoothed as usual.

Their orientation is compatible with your bump map application, and indeed the tangent and binormal vectors are often computed from the derivatives of the texture coords used to apply the bumpmap texture.
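
For what it’s worth, here is a minimal C++ sketch of that derivation for a single triangle (the names are illustrative, not from any particular library; the per-vertex tangents and binormals would then be accumulated and averaged from the surrounding triangles):

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Per-triangle tangent/binormal from positions p0..p2 and texcoords t0..t2.
// Solves  e1 = du1*T + dv1*B  and  e2 = du2*T + dv2*B  for T and B.
void tangentBasis(const Vec3& p0, const Vec3& p1, const Vec3& p2,
                  const Vec2& t0, const Vec2& t1, const Vec2& t2,
                  Vec3& T, Vec3& B)
{
    Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = t1.u - t0.u, dv1 = t1.v - t0.v;
    float du2 = t2.u - t0.u, dv2 = t2.v - t0.v;

    float r = 1.0f / (du1 * dv2 - du2 * dv1);   // assumes non-degenerate texcoords
    T = { (dv2 * e1.x - dv1 * e2.x) * r,
          (dv2 * e1.y - dv1 * e2.y) * r,
          (dv2 * e1.z - dv1 * e2.z) * r };
    B = { (du1 * e2.x - du2 * e1.x) * r,
          (du1 * e2.y - du2 * e1.y) * r,
          (du1 * e2.z - du2 * e1.z) * r };
}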

When you perform your bump map lighting calculation you transform the light and view vectors through the “coordinate frame” defined by the normal, tangent, and binormal vectors at each vertex (using a vertex program, a.k.a. vertex shader). These transformed vectors will then have different orientations at each vertex. The transformed tangent space vectors are then interpolated (often as a color triplet) to produce per pixel values (this part is done in fixed function hardware). The lighting calculations are then performed using these vectors (sometimes normalized), and the texture vector supplied by the bumpmap texture (this is done using a fragment program, a.k.a. pixel shader).
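
The per-vertex transformation itself is just three dot products. A minimal sketch, assuming the tangent, binormal, normal and the vector being transformed are all expressed in the same (object) space, with illustrative names:

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Rotate an object-space vector (e.g. vertex-to-light) into one vertex's
// tangent frame.  T, B, N are the (smoothed) tangent, binormal and normal at
// that vertex and form the rows of the object-to-tangent rotation.  The
// result is what gets interpolated across the polygon.
Vec3 toTangentSpace(const Vec3& v, const Vec3& T, const Vec3& B, const Vec3& N)
{
    return normalize(Vec3{ dot(v, T), dot(v, B), dot(v, N) });
}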

Search for something like tangent space bump mapping for a better description.

[This message has been edited by dorbie (edited 12-18-2003).]

Thanks, dorbie.

Yes, I am intending to do tangent space bump mapping. I know what this is and have already implemented a rather simple version (just a bump-mapped water surface) successfully.

In the approach I know (and have done), for diffuse bump mapping the vector from the light source to the vertex, transformed to surface-local tangent space, is passed as a texture coordinate to the texture unit containing the normalization cube map (and so becomes linearly interpolated across the surface); for the specular part, the same is done with the half-vector (between the light-to-vertex and vertex-to-eye directions).
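
In code, the per-vertex vectors look roughly like this (a sketch with illustrative names; the light vector is taken here as pointing from the vertex toward the light, and both results would then be rotated into tangent space before being fed to the normalization cube map):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(const Vec3& v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// Per-vertex vectors for DOT3 bump mapping, still in object space.
// lightDir feeds the diffuse term, halfVec the specular term.
void perVertexVectors(const Vec3& vertexPos, const Vec3& lightPos, const Vec3& eyePos,
                      Vec3& lightDir, Vec3& halfVec)
{
    lightDir = normalize(sub(lightPos, vertexPos));   // vertex -> light
    Vec3 viewDir = normalize(sub(eyePos, vertexPos)); // vertex -> eye
    halfVec = normalize(add(lightDir, viewDir));      // Blinn half-vector
}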

The vertex normals are not a part of this formula, so I guess I am missing something. If the vertex normal is part of the coordinate system that defines the surface’s tangent space AND is smoothed (i.e. not necessarily perpendicular to the surface), this would mean that a) the tangent space would be bevelled (is this the right word? “schief”, German for skewed) and b) the tangent space coordinate system would be different for each vertex (due to the different normals).

This confuses me. Please show me where the vertex normal comes into play in the tangent space bump mapping algorithm.

Jan

I think you mean sheared or skewed.

Yes it can be. Ideally you want to make sure that the tangent space vectors remain at right angles (orthogonal), but with a poorly applied texture or rapidly changing normals sometimes they aren’t. Obviously things can get nasty with certain texture mappings that are skewed, i.e. the derivative of s isn’t running at right angles to the derivative of t, and the same goes for the normal. In general, though, it’s pretty close, and the normal is the least of your worries; the biggest issue is typically the binormal not matching the cross product of the normal and tangent vectors.

Texture coords can be used for the vector interpolation, and this can help with texture-based normalization, for example.

So basically you need the normal, but it can also be implied: for example, your fragment program could use tangent × binormal to generate the tangent basis normal. This would be just fine so long as your tangent and binormal vectors are interpolated correctly.

Obviously the bump map normal needs the tangent basis normal to tell it which way is up after transformation; with no normal you don’t have enough information to define the tangent space transformation. I suppose something could be formulated (a cross product, for example), but this is the way it’s typically done.

The per-vertex calculations and normal computations could try to massage the results a bit to produce orthogonal tangent space coordinate frames. Texture derivatives should not vary too much between adjacent triangles, and the tangent, binormal, and normal vectors should ALL be averaged at the vertices. Things should work out approximately OK if you do this.
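
One common way to do that massaging is a Gram-Schmidt step on the averaged vectors. A minimal sketch (illustrative names, assuming the per-face tangents, binormals and normals have already been summed into each vertex):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3& v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// Turn averaged (non-orthogonal) T, B, N into an orthonormal frame.
// The binormal is rebuilt from cross(N, T), which is also how a fragment
// program could imply one of the three vectors instead of being passed it.
void orthonormalizeFrame(Vec3& T, Vec3& B, Vec3& N)
{
    N = normalize(N);
    float nt = dot(N, T);
    T = normalize(Vec3{ T.x - N.x * nt, T.y - N.y * nt, T.z - N.z * nt }); // remove the N component of T
    Vec3 rebuilt = cross(N, T);
    // Keep the handedness of the original averaged binormal (mirrored texcoords flip it).
    B = (dot(rebuilt, B) < 0.0f) ? Vec3{ -rebuilt.x, -rebuilt.y, -rebuilt.z } : rebuilt;
}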

Moreover, if you use this (interpolated) coordinate frame to produce the tangent space bumpmap, it will work with almost any coordinate frame; you just need to ensure you use the same one when rendering. So with bump maps that preserve the detail you can probably get away with a lot, although I’d prefer object space bumpmaps in a skinning scenario, which eliminates the tangent space vectors completely unless the mesh is under deformation (that should demonstrate that the exact orientation isn’t that big a deal when skinning).

This mostly matters when applying a tangent space texture to an object that’s hand painted (or similarly generated) in tangent space.

[This message has been edited by dorbie (edited 12-19-2003).]

I have to admit that I do not really understand everything you’re saying, or maybe we are not talking about the same thing, and some things seem too advanced for my rather modest OpenGL skills (I’ve never done any fragment or vertex program; I’m still messing with ARB combiners and trying to understand the basics).

Let’s use a simple example: a surface which is per-pixel lit with a bump map, where the bump map is flat (every texel/vector pointing straight up), but the vertex normals are not perpendicular to the surface because the mesh is smoothed to avoid hard edges.

Now let’s look at the diffuse part. As far as I understand it, the vector from the light source to the surface, specified at each vertex in the form of texture coordinates for the normalization cube map and thus interpolated (and normalized) across the surface, is DOT3-combined with the normal coming from the normal map (which is in fact the color value from the bump map texture). This gives a value that specifies the brightness of every fragment, which is then blended with the decal (color) texture, resulting in diffuse bump mapping. In this model, the (smoothed) vertex normal of the mesh data is simply neglected, and the resulting image would show a bump-mapped model which is, however, no longer smoothed.
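
Written out as plain math, that is what the DOT3 stage effectively computes per fragment (C++ is used here only as notation for the arithmetic, not as actual combiner setup code):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Unpack the normal-map texel and the interpolated (cube-map-normalized)
// light vector from [0,1] color range back to [-1,1], dot them, and
// modulate the decal color with the result.
Vec3 dot3Diffuse(const Vec3& normalTexel,  // RGB from the normal map
                 const Vec3& lightTexel,   // interpolated light vector as RGB
                 const Vec3& decal)        // color texture
{
    Vec3 n = { normalTexel.x * 2.0f - 1.0f, normalTexel.y * 2.0f - 1.0f, normalTexel.z * 2.0f - 1.0f };
    Vec3 l = { lightTexel.x * 2.0f - 1.0f,  lightTexel.y * 2.0f - 1.0f,  lightTexel.z * 2.0f - 1.0f };
    float diffuse = std::max(dot(n, l), 0.0f);
    return { decal.x * diffuse, decal.y * diffuse, decal.z * diffuse };
}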

So where does the vertex normal come into play (or does it at all)? This is what I do not understand.

Thanks and Regards,
Jan

[This message has been edited by JanHH (edited 12-19-2003).]

The secret is that you do not have ONE tangent space for one surface but three different ones (assuming that you have three different normal vectors), right? And then you pass the light-to-vertex vector at each vertex, transformed to that vertex’s tangent space, and interpolate it across the surface… this should work. Or am I wrong?

Thanks again,
Jan

Yes, you have a tangent coordinate frame at each vertex (normal, tangent and binormal). The light & view vectors are transformed to tangent space per vertex (so now they’re in the same space as your bump map). Then the vectors are interpolated across the polygon. So in effect you interpolate the tangent space coordinate frames, but you do this by interpolating the transformed vectors. The results are approximately equivalent.
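
To make the interpolation part concrete, this is roughly what the hardware does per fragment with the three per-vertex tangent-space light vectors (a sketch with illustrative names; the barycentric weights come from the rasterizer, and the final normalization is what the cube map lookup provides):

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(const Vec3& v) { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// Each vertex has its own tangent frame, so the tangent-space light vectors
// L0, L1, L2 differ per vertex; this is where the smoothing survives.
Vec3 interpolatedLight(const Vec3& L0, const Vec3& L1, const Vec3& L2,
                       float w0, float w1, float w2)   // barycentric weights, w0+w1+w2 = 1
{
    Vec3 L = { w0*L0.x + w1*L1.x + w2*L2.x,
               w0*L0.y + w1*L1.y + w2*L2.y,
               w0*L0.z + w1*L1.z + w2*L2.z };
    return normalize(L);
}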

Thanks for helping me solve this mystery. Now let’s see if I am able to program this.