Per-pixel diffuse and Terrain

I’m having a lighting problem and I was hoping some of you kind souls could help me diagnose it.

Here are some shots:
(EDIT: problem determined, pictures removed. They looked like vertex lighting, so just imagine that)

Ok, first let me say to ignore the horizontal and vertical line artifacts. Those are a product of my chunking system not having enough data to calculate the normals at the chunk boundaries. I know how to solve this problem and am not concerned about it right now.

I am using register combiners on a GeForce2 to calculate diffuse light per-pixel, and my concern is that this looks like vertex lighting.

The light source is supposed to be the sun, so I am treating it as a directional light at infinity. Therefore I don’t calculate the light->vertex vector for each vertex; I simply use the same light direction vector for all vertices.

To calculate the terrain normals I am using finite differences as explained here: http://www.flipcode.com/cgi-bin/msg.cgi?showThread=Tip-VertexNormalsHeightMaps&forum=totd&id=-1

The normals are set up properly for the combiners, that is to say mapped from [-1.0, 1.0] to [0.0, 1.0].
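That remap and its inverse (which GL_EXPAND_NORMAL_NV applies on input) are just a scale and bias; a sketch:

```c
/* Pack a normal component from [-1, 1] into [0, 1] for the combiners;
 * GL_EXPAND_NORMAL_NV undoes this on input with 2*x - 1. */
static float range_compress(float n) { return n * 0.5f + 0.5f; }
static float range_expand(float c)   { return c * 2.0f - 1.0f; }
```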

My register combiner setup is as follows:

// VERTEX NORMAL in PRIMARY_COLOR
// LIGHT VECTOR in CONSTANT_COLOR0
// LIGHT COLOR in CONSTANT_COLOR1
// TEXTURE0 is decal texture

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

// Stage 0, AB side: N . L (both expanded back to [-1, 1]) -> SPARE0
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
// Stage 0, CD side: decal texture * light color -> SPARE1
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_CONSTANT_COLOR1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

// AB output is a dot product (GL_TRUE); CD is a per-component product
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// Final combiner computes A*B + (1-A)*C + D; with C = D = 0 that is
// SPARE0 * SPARE1 = (N . L) * (decal * light color)
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

My theories for what is wrong:

  1. I need to renormalize the vertex normal per pixel.
  2. I really do need to calculate the light->vertex vector instead of using the same light vector for all vertices.
  3. Bad terrain data or precision issue.

Also, I should mention that the light vector and vertex normals are in world-space.

I believe the most probable cause of this is that I am not renormalizing the vertex normal per-pixel. I am thinking about switching over to a normal-map if it will solve my problem. If I used a normal map, how big should it be? One normal per vertex would yield the same result I am getting now, correct? I suppose I should interpolate the normals and renormalize them at each normal-map texel, but is there a rule-of-thumb for the normal-map resolution?

I’d also like to keep the lighting calculations in world-space. I don’t fully understand tangent-space, and I don’t believe it is necessary for this. Also not having to touch the vertex data is nice.

Thanks for your ideas.

[This message has been edited by dismal (edited 02-05-2003).]

How are you rendering this? Vertex arrays? Display lists? On-the-fly glVertex (and such) calls? If the latter two, did you make sure that the glVertex call comes last, after the calls to glNormal, glColor, etc.? If not, it might be that one or more of those vertex attributes is being assigned to the -next- vertex rather than the current one. This is what gave me a similar problem.

Ostsol, nice idea but I am using vertex arrays. It certainly does seem like something is “off”.

For an infinite light source and diffuse only there is almost no difference between per pixel and per vertex. Only effects like attenuation and local light direction have significant impacts on diffuse linearity over polygons, especially at the resolution of your mesh. You’re calculating none of these effects and so linear interpolation approximates per pixel quite well. The basis for the interpolation of your surface normal is still linear and so you get similar shading artifacts as you do when interpolating color.
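The linearity dorbie describes can be checked directly: with a constant light vector, dotting the interpolated (unnormalized) normal gives exactly the same value as interpolating the per-vertex N·L results. A small sketch (names are illustrative):

```c
typedef struct { float x, y, z; } V3;

static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static V3 lerp3(V3 a, V3 b, float t)
{
    V3 r = { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
    return r;
}

/* Because dot(., L) is linear in its first argument,
 * dot(lerp3(n0, n1, t), L) == (1-t)*dot(n0, L) + t*dot(n1, L) for any t;
 * without renormalization, "per-pixel" N.L of an interpolated normal
 * produces the same picture as Gouraud-interpolating the vertex results. */
```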

[This message has been edited by dorbie (edited 02-05-2003).]

You seem to be interpolating the normal between vertices. However, that will not result in a unit-length normal vector, and an un-normalized normal yields a correspondingly darkened dot product.

The way to do per-pixel lighting on a GF2 is to use the NORMAL_MAP texture coordinate generation mode and look up the brightness in a diffuse lighting cube map. You can also do it with a normal map, the output of which you dot product with the light vector. For high-resolution normal maps, the de-normalization isn’t that bad.

This normal map needs to be in object space, as you don’t have enough oomph in a GF2 to rotate a normal that comes out of a texture.

I’ve done it both these ways, and they both work well enough (for what they are).

dorbie:

Thanks. This explanation makes perfect sense. I’ll have to do some research to choose a lighting model that can benefit from the available hardware features.

jwatte:

I will probably end up switching over to a normal-map approach, as you suggest. The object-space normals are no problem for the terrain.

From your experience, can you offer any answers or suggestions for my normal-map questions in my original post? Specifically, what is your experience with the normal-map’s resolution? One normal per vertex seems logical, but it seems you could get a little better accuracy if you go a step bigger, and renormalize in-between. Or perhaps adding some noise or procedural detail into the map. I should mention that I plan on generating (and caching) the normal-map realtime, as my terrain is paged from disk.

Freeing up the primary color that was being used for the vertex normal will allow me to interpolate the light->vertex vector, if needed. Is this vector usually calculated in a vertex program?

Thanks for the help everyone.