Tangent space for non-textured geometry

I want to explore the possibility of lighting untextured low-poly geometry with per-pixel lighting, in order to get rid of the typical lighting artifacts such as missing/deformed highlights, banding and so on.
Most articles only discuss how to derive a correct texture space for textured models. But how do you get a continuous tangent space for untextured models? My problem is that since I have no texture coordinates, I have no hint how to “orient” the tangent space for each face. Unfortunately this leads to lighting artifacts, since the per-face tangent spaces often cancel each other out when averaged into per-vertex tangent spaces.
Another question: is it better to interpolate the vertex normal and compute L and H in a fragment program, rather than computing L and H per vertex and interpolating them? I have seen that, especially for neighbouring faces of unequal size, the lighting becomes “malformed” across those faces.

thanks in advance

You don’t need tangent space.
I’d use object-space per-pixel lighting. Just store your light vectors and normals in object space and use them as usual.

Texture space is only needed when you have bump maps defined in some texture space… otherwise you can do the math in object space, too… no problem.
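As an illustration of that object-space idea, here is a minimal CPU-side sketch: bring the light into object space once per object (multiply by the inverse model matrix), then dot it with the stored object-space normal. `Vec3` and the helper functions are assumptions for illustration, not from any particular library:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// objectSpaceLightDir: the light direction already transformed into object
// space (world-space light multiplied by the inverse model matrix, done
// once per object). No tangent space is involved anywhere.
float diffuse(const Vec3& objectSpaceNormal, const Vec3& objectSpaceLightDir) {
    return std::max(0.0f, dot(normalize(objectSpaceNormal),
                              normalize(objectSpaceLightDir)));
}
```

The same math works in a fragment program; the sketch only shows that nothing beyond the object-space normal is required.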

Ok, thanks for the tip. But my test environment is a pipeline designed for textured and bump-mapped models. I just gave those untextured models a flat bump map and sent them through it. But it doesn’t look right, since the models in question (3DS) often don’t have texture coordinates, or sometimes have wrong ones. So the question still remains: how do you build good per-vertex tangent spaces for initially untextured models?

They’ve answered you correctly - but obviously you didn’t write the “pipeline”, so you obviously can’t change it.
You should rephrase your question: how to automatically generate UV coordinates for an arbitrary mesh. It really has nothing to do with per-pixel lighting.
The answer to your question is beyond the scope of this forum, as it is an area of research in itself.

Regardless of the real need for tangent-space vectors (tfpsly and davepermen are absolutely right when they say that per-pixel lighting algorithms usually only require normals), you can choose whatever tangent-space model you want.
The one I use is the cylindrical model (or whatever its real name is), which roughly mimics the mapping over a cylinder:

  • the tangent is computed as the cross product between the normal and the cylinder axis (be it (0,0,1) for instance),
  • the binormal is computed (by definition) as the cross product between the normal and the tangent.

This is very easy to compute but has an obvious problem: when a normal points along the cylinder axis, the cross product degenerates and you compute an erroneous tangent, and therefore an erroneous binormal too.
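The cylindrical model above could be sketched like this (`Vec3` and the helpers are illustrative assumptions, and the degenerate case at the axis is left unhandled, as noted):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Builds a tangent frame from a unit vertex normal using a fixed cylinder
// axis. Degenerates when the normal is (anti)parallel to the axis, since
// the first cross product is then the zero vector.
void cylindricalFrame(const Vec3& normal, Vec3& tangent, Vec3& binormal) {
    const Vec3 axis = { 0.0f, 0.0f, 1.0f };   // cylinder axis
    tangent  = normalize(cross(normal, axis));
    binormal = cross(normal, tangent);        // unit-length by construction
}
```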

Originally posted by knackered:
The answer to your question is beyond the scope of this forum, as it is an area of research in itself.

In fact it can be OpenGL-related if skynet makes intensive use of vertex programs to automatically generate UV coordinates.

Didn’t say it wasn’t related to OpenGL - said it was beyond the scope of the forum in terms of complexity, from knowing nothing on the subject to knowing something. There are reams of papers on the subject - best to read some of them before a show-and-tell session here.

Actually, since you only need the normal for the lighting calculation, the binormal and tangent only have to help you build a 3×3 texture-space matrix; the exact orientation of those two vectors doesn’t have to be anything specific.

Take the normal, add 1 to one of its axes, normalise the result, cross it with the normal to get a “tangent”, then cross the “tangent” with the normal to get a “binormal”. Those will work, since you don’t have a real normal map.
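A sketch of that construction follows. `Vec3` and the helpers are illustrative assumptions; picking the smallest component of the normal to perturb is a small refinement (not stated above) so the perturbed vector is never parallel to the normal:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Perturb the unit normal along its smallest axis, then use two cross
// products to build an arbitrary-but-valid orthonormal frame.
void perturbedFrame(const Vec3& normal, Vec3& tangent, Vec3& binormal) {
    Vec3 p = normal;
    float ax = std::fabs(normal.x), ay = std::fabs(normal.y), az = std::fabs(normal.z);
    if (ax <= ay && ax <= az)      p.x += 1.0f;
    else if (ay <= az)             p.y += 1.0f;
    else                           p.z += 1.0f;
    p = normalize(p);
    tangent  = normalize(cross(normal, p));
    binormal = cross(tangent, normal);   // unit-length by construction
}
```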

Vincoof: well, this is an idea I hadn’t come up with yet! If you don’t have texture coordinates, generate some. I’d like to go for some kind of sphere mapping, using the derivatives of the polar coordinates (is this the right word?) as tangent and bitangent. Would you say this is the right way to go (apart from problems at the poles)?

What about the other problem, malformed highlights across neighbouring faces? Any hints as to what I could have done wrong here?

to make it on-topic: “I use OpenGL for all that.” :wink:

Mazy: that’s very true.

Originally posted by skynet:
I’d like to go for some kind of sphere mapping, using the derivatives of the polar coordinates (is this the right word?) as tangent and bitangent. Would you say this is the right way to go (apart from problems at the poles)?

Apart from the problems at the poles, yes, that would work. Basically it’s the same method as the one I described above.
Don’t forget to normalize the vectors.
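That sphere-mapping idea could be sketched as follows, assuming the pole axis is +z and the normal is unit-length (`Vec3` is an illustrative assumption). The tangent and bitangent are the normalized partial derivatives of the spherical parameterization (sin θ cos φ, sin θ sin φ, cos θ) with respect to φ and θ:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Treat the unit normal as a point on the unit sphere and use the
// normalized partial derivatives of the spherical parameterization as the
// tangent frame. Undefined at the poles (normal parallel to +/-z), where
// phi is ambiguous.
void sphericalFrame(const Vec3& n, Vec3& tangent, Vec3& bitangent) {
    float theta = std::acos(n.z);          // polar angle, measured from +z
    float phi   = std::atan2(n.y, n.x);    // azimuth in the xy plane
    // d/dphi, normalized:
    tangent   = { -std::sin(phi), std::cos(phi), 0.0f };
    // d/dtheta, already unit-length:
    bitangent = { std::cos(theta) * std::cos(phi),
                  std::cos(theta) * std::sin(phi),
                  -std::sin(theta) };
}
```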

Originally posted by skynet:
What about the other problem, malformed highlights across neighbouring faces? Any hints as to what I could have done wrong here?

As far as I know, lighting equations that depend on the TBN triplet assume that the patches are square, i.e. that the partial derivatives are orthogonal and equal in magnitude.
So, when the square-patch assumption does not hold, chances are that the lighting will break down more or less, depending on how badly the assumption is violated.

[This message has been edited by vincoof (edited 07-08-2003).]