Storing normal data for animated models

Hi

I have a skeletal-animated model which has a set of verts making up the model, and transforms them by the bone matrices when it’s animated.

I want to add support for bump mapping, so I need per-vertex normals, tangents and binormals. I can’t store these the same way, not without getting weird artifacts anyway, and storing them per frame gets very storage-hungry… I could split it up (create fake keyframes), but that could look weird as well…

Any thoughts on this? Is there any “normal” way of doing this? Thanks! :slight_smile:

The normal way is to store the normal and the binormal (or tangent), and then transform them with the inverse transpose of the modelview matrix. This should work just fine; if you’re getting artifacts, then your input data is probably broken, or you’re not normalizing all the relevant quantities.

You don’t need to store all three, as you can derive the third using a cross product, which is typically cheaper than storing and transferring a full vector.

If you don’t use non-uniform scale, then you can actually transform the normal and binormal using DP3 instead of DP4; this will apply the appropriate rotation without translation. Or, if you take your normal texel into eye space (which is the better way to do it), just forward-multiply as usual, assuming a 0 “w”; same thing, really.
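
Rough sketch of what I mean, for the simple case of one bone per vertex and column-major matrices (the way OpenGL stores them); the function names are just illustrative. The position goes through the full matrix (w = 1), normal and tangent only through the rotation part (w = 0, the DP3 case), and the binormal is rebuilt with a cross product. If your bone matrices had non-uniform scale you’d have to use the inverse transpose instead:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Column-major 4x4, as OpenGL expects: element (row, col) lives at m[col*4 + row].
struct Mat4 { float m[16]; };

// w = 1: rotation + translation (positions)
static Vec3 mulPoint(const Mat4& M, const Vec3& p)
{
    return { M.m[0]*p.x + M.m[4]*p.y + M.m[8]*p.z  + M.m[12],
             M.m[1]*p.x + M.m[5]*p.y + M.m[9]*p.z  + M.m[13],
             M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14] };
}

// w = 0: rotation only (the DP3 case), no translation
static Vec3 mulDir(const Mat4& M, const Vec3& v)
{
    return { M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z,
             M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z,
             M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z };
}

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Skin one vertex with a single bone matrix (no non-uniform scale assumed;
// otherwise mulDir would need the inverse transpose of the bone matrix).
void skinVertex(const Mat4& bone,
                Vec3& position, Vec3& normal, Vec3& tangent, Vec3& binormal)
{
    position = mulPoint(bone, position);
    normal   = normalize(mulDir(bone, normal));
    tangent  = normalize(mulDir(bone, tangent));
    binormal = cross(normal, tangent);   // derive the third vector instead of storing it
}
```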

Hmm, but the normals would be dependent on the faces using the given vertex. When the vertices around it move in different directions, thereby deforming the faces (which the vertex normal is built from), the normals will be different, not just moved, if you see what I mean.

I haven’t tried this, but surely it would produce incorrect results?

What you’re saying is: when you move the different vertices, the face normals that get blended into the vertex normal shift non-uniformly, so the normal after the transform would be different from just applying a single vertex’s blend to the normal. This is a correct observation.

However, all real-time skeletal character animation systems I know of (and that includes the one I work on) do it this way, so it can’t be all that bad :slight_smile: However, don’t take my word for it; implement it yourself and see if it’s good enough or not.
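
For reference, the usual thing looks roughly like this for a vertex weighted to several bones (just a sketch with made-up names): rotate the bind-pose normal by each influencing bone, blend by weight, renormalize. It isn’t the “true” normal you’d get by recomputing it from the faces, but it’s what everybody ships:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[9]; };   // rotation part of a bone matrix, row-major here

static Vec3 rotate(const Mat3& R, const Vec3& v)
{
    return { R.m[0]*v.x + R.m[1]*v.y + R.m[2]*v.z,
             R.m[3]*v.x + R.m[4]*v.y + R.m[5]*v.z,
             R.m[6]*v.x + R.m[7]*v.y + R.m[8]*v.z };
}

// Blend the bind-pose normal across all bones influencing this vertex,
// then renormalize. No per-frame recomputation from the faces.
Vec3 skinNormal(const Vec3& bindNormal,
                const Mat3* boneRotations, const int* boneIndices,
                const float* weights, int influenceCount)
{
    Vec3 n = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < influenceCount; ++i) {
        Vec3 r = rotate(boneRotations[boneIndices[i]], bindNormal);
        n.x += weights[i] * r.x;
        n.y += weights[i] * r.y;
        n.z += weights[i] * r.z;
    }
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    return { n.x/len, n.y/len, n.z/len };
}
```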

You don’t need to store all three, as you can derive the third using a cross product, which is typically cheaper than storing and transferring a full vector.

That’s not true at all.

The Normal/Binormal/Tangent vectors do not have to be (and frequently are not) orthogonal. The tangent and binormal vectors are supposed to orient the texture. As such, the tangent should point down the (1, 0) axis of the texture, and the binormal should point down the (0, 1) axis (for the bump map).
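
For what it’s worth, one common way to get vectors with exactly that property is to solve for them from each triangle’s positions and texture coordinates; a rough sketch (the struct layout is just for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

// For one triangle, solve for the vectors that point along the texture's
// (1,0) and (0,1) directions on the surface:
//   edge1 = du1*T + dv1*B
//   edge2 = du2*T + dv2*B
void computeTangentSpace(const Vec3 p[3], const Vec2 uv[3],
                         Vec3& tangent, Vec3& binormal)
{
    Vec3  e1  = sub(p[1], p[0]),        e2  = sub(p[2], p[0]);
    float du1 = uv[1].u - uv[0].u,      dv1 = uv[1].v - uv[0].v;
    float du2 = uv[2].u - uv[0].u,      dv2 = uv[2].v - uv[0].v;

    float det = du1*dv2 - du2*dv1;      // zero if the UV mapping is degenerate
    float r   = 1.0f / det;

    tangent  = { (dv2*e1.x - dv1*e2.x) * r,
                 (dv2*e1.y - dv1*e2.y) * r,
                 (dv2*e1.z - dv1*e2.z) * r };
    binormal = { (du1*e2.x - du2*e1.x) * r,
                 (du1*e2.y - du2*e1.y) * r,
                 (du1*e2.z - du2*e1.z) * r };
}
```

Per-vertex tangents/binormals are then usually averaged from the surrounding triangles. Note that under a sheared mapping these two vectors come out non-perpendicular, which is exactly the point.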

However, all real-time skeletal character animation systems I know of (and that includes the one I work on) do it this way, so it can’t be all that bad :slight_smile:

Well, it is clearly the wrong thing to do, as it does not maintain the direction of tangent and binormal vectors.

Interesting, a non-orthogonal coordinate frame is technically ‘wrong’. When a texture is under shear, it is a trade-off between a sane vector transformation into tangent space and the reorientation of sheared tangent-space normals (and interpolation, of course)… I think strictly speaking, if you are arguing for technical accuracy you shouldn’t shear, but that’s not a realistic requirement.

Personally I don’t find a cross product calculation too heinous.

Am I wrong on this? I’d be happy to hear details on why.

Interesting, a non-orthogonal coordinate frame is technically ‘wrong’.

In what way? It works just fine for the texture map itself. Also, if the tangent and binormal are properly computed, the matrix transform to texture/tangent space works fine too.

Personally I don’t find a cross product calculation too heinous.

It’s not that the computation is too much. It’s a question of whether or not it is the correct thing to do.

The idea with tangent-space lighting is that you transform the light direction into the space of the texture/tangent at that pixel (or, if you do the transform per-fragment, then at that fragment). If the bump texture is sheared at that point, then the appropriate texture-space transform matrix is not orthonormal; it must also be sheared. Not doing so leads to artifacts in bump mapping.
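
As a sketch of the strict version of that (not claiming this is anyone’s engine here): build the tangent-to-object matrix from the possibly-sheared T, B, N columns and use its true inverse to carry the light into tangent space. Only when the frame is orthonormal does this collapse to the familiar three dot products:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  cross(const Vec3& a, const Vec3& b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(const Vec3& a, const Vec3& b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

// Transform an object-space light direction into tangent space.
// Columns T, B, N form the tangent-to-object matrix; for the general
// (sheared, non-orthonormal) case we need its true inverse, written here
// via the adjugate of a 3x3 matrix. If T, B, N are orthonormal this
// reduces to the familiar three dot products.
Vec3 lightToTangentSpace(const Vec3& lightDir,
                         const Vec3& T, const Vec3& B, const Vec3& N)
{
    Vec3  r0  = cross(B, N);            // rows of the adjugate
    Vec3  r1  = cross(N, T);
    Vec3  r2  = cross(T, B);
    float det = dot(T, r0);             // determinant of [T B N]

    return { dot(r0, lightDir) / det,
             dot(r1, lightDir) / det,
             dot(r2, lightDir) / det };
}
```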

There are two camps about this. I think it’s clear which one Korval is in :slight_smile:

You can make arguments for either being “right”. One argument would say that a skewed basis brings your unit-length light vector into proper texture space. Another argument would say that that causes severe anisotropy, and even though the artist skewed the texture, they didn’t intend to skew the lighting.
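
The second camp typically just forces the frame orthonormal, e.g. with a Gram-Schmidt step against the normal, so the lighting basis stays rigid even where the texture mapping is skewed. A minimal sketch (again, not from anyone’s engine in this thread):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b)
{ return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3  normalize(const Vec3& v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Force an orthonormal frame: keep the geometric normal, project the shear
// out of the tangent, and rebuild the binormal. The lighting basis no longer
// follows the texture's skew, which is exactly the trade-off being discussed.
void orthonormalizeFrame(const Vec3& normal, Vec3& tangent, Vec3& binormal)
{
    float d  = dot(tangent, normal);
    tangent  = normalize({ tangent.x - d*normal.x,
                           tangent.y - d*normal.y,
                           tangent.z - d*normal.z });
    // Preserve handedness: flip the rebuilt binormal if the original pointed the other way.
    Vec3 b   = cross(normal, tangent);
    binormal = (dot(b, binormal) < 0.0f) ? Vec3{ -b.x, -b.y, -b.z } : b;
}
```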

Do the binormals and tangent vectors need to be recalculated when the geometry is animated?

I wouldn’t sweat it; you’re talking about subtle effects unless you really start skewing the resulting matrix.

Well, it is clearly the wrong thing to do, as it does not maintain the direction of tangent and binormal vectors.

Yes, my thoughts as well…
Two other ways would be either storing them per keyframe, which requires too much space, or using every nth keyframe. For a large n, it could look decent for regular animations but messed up for fast ones :slight_smile:

V-Man: Yes they do, because the surfaces will shear (which is what makes it look like it’s moving).

Of course, not ALL of them really have to be recalculated; usually only the ones near the joints have really changed (much, anyway). Perhaps you could take advantage of this…
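
For example (purely a sketch, the data layout is made up): tag the vertices that are significantly weighted to more than one bone and only refresh the tangent frames for those each frame; the rest just rotate rigidly with their single bone:

```cpp
#include <vector>

// Mark vertices that sit near a joint: ones significantly influenced by more
// than one bone. Only these really need their tangent frame refreshed each
// frame; the rest just rotate rigidly with their single bone.
// (Illustrative layout: 'weights' holds maxInfluences weights per vertex.)
std::vector<bool> findJointVertices(const std::vector<float>& weights,
                                    int maxInfluences, float threshold = 0.05f)
{
    int vertexCount = static_cast<int>(weights.size()) / maxInfluences;
    std::vector<bool> nearJoint(vertexCount, false);

    for (int v = 0; v < vertexCount; ++v) {
        int significant = 0;
        for (int i = 0; i < maxInfluences; ++i)
            if (weights[v * maxInfluences + i] > threshold)
                ++significant;
        nearJoint[v] = (significant > 1);   // blended between bones => shears
    }
    return nearJoint;
}
```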
