Easy peasy bump mapping?

I’m thinking of implementing bump mapping in my engine. What is the easiest way of doing this?

Dot3 seems “rather” easy but I have skinned animated meshes… how will this work?

I’m thinking of transforming the light vector into the space of each vertex. That is, I will end up with a list of light positions (one for each bone matrix) that I can use when calculating the color of each vertex in tangent space. Will this work, or do I need to transform the tangent-space vectors in some way?

regards

hObbE

Hmmm I was about to say that cross-posting is bad, but according to the other post it’s welcome.

Anyway, your method is fine (or at least it seems fine). All you need is to make sure the dot products are computed in a coherent space. That is, if you have the tangent/normal/binormal triplet defining tangent-space for each vertex, then you need to compute the light vector in tangent-space for each vertex.
If you prefer computing bump-mapping in object-space, then get BOTH the vertex coordinates (and normals etc.) in object-space AND the light vector in object-space.
The same goes for world-space or whatever-space.

As long as you’re coherent, either is fine. It’s just a matter of preference then.
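To make the tangent-space case concrete, here is a rough per-vertex sketch (my own notation, nothing more than an illustration; it assumes you already have the object-space light position and an orthonormal tangent/binormal/normal triplet per vertex):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Bring the light vector into tangent space for one vertex, so it lives in
// the same space as the normals fetched from the bump map.
Vec3 lightToTangentSpace(const Vec3 &vertexPos, const Vec3 &lightPos,
                         const Vec3 &tangent, const Vec3 &binormal, const Vec3 &normal)
{
    Vec3 L = { lightPos.x - vertexPos.x,
               lightPos.y - vertexPos.y,
               lightPos.z - vertexPos.z };      // object-space light vector
    // Projecting onto the tangent frame = multiplying by the 3x3 TBN matrix.
    return { dot(L, tangent), dot(L, binormal), dot(L, normal) };
}
```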

Edit:
Ok I think I got what your problem really is.
You wonder if you have to recompute your tangent-space triplets (tangent/normal/binormal) every time the mesh changes?
The answer is: yes, unfortunately.
And if you wonder whether you have to recompute the triplets when the light(s) move(s), the answer is no (except if the mesh changes, obviously).

[This message has been edited by vincoof (edited 09-17-2002).]

I agree that cross-posting is undesirable, and I will be more careful when considering how advanced my questions are.

Thx for the reply!

I will soon try to implement it too. If all works out I may post some comments and what was difficult…

/hObbE

If your mesh is skinned, recomputing the tangent/binormal shouldn’t be any more difficult than recomputing the normal. It’ll take more time, however.

You may also want to consider doing the tangent-space transform per-pixel for skinned meshes (if your hardware supports it). It tends to look more correct.

Ok… so the reason for needing to recompute the tangent vectors should be that if one vertex (in a triangle) moves, the texture coordinates of the triangle take on a different relative orientation (that is, tangent space has been “distorted”).

Is this explanation correct?

Recomputing the tangent space vectors does however fit rather nicely in my renderer… but there will of course be a performance hit…

Any hints on how to compute/recompute the tangent vectors? Maybe you could use the matrix of the vertex in some way (just as I do with the normal)?

Hopefully I will be able to test something of this later today. (Damn work… always getting in the way of the fun stuff )

regards!

/hObbE

… You have a point. How would you go about doing this in a vertex program? I wonder if nVidia or someone has a paper/demo on this somewhere.

For skinned meshes, you may want to consider using object space bump maps. While these don’t lend themselves as much to sharing, they have several properties that make them more efficient.

Specifically, you don’t need a tangent space at all; instead, you just skin the light into object space, using the transpose of the matrix you would have used to skin the normal. Then you dot the light vector with the normal as pulled out of the normal map, and you’re done. Much more efficient!
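Roughly, the per-vertex work then shrinks to something like this (a sketch in my own notation; lightPosObj is assumed to already be the light brought back into un-skinned object space, and the packed color would feed a DOT3 combiner against the object-space normal map):

```cpp
#include <math.h>

struct Vec3 { float x, y, z; };

// Object-space version of the per-vertex work: no tangent basis needed, just a
// normalized light vector packed into [0,1] so the per-pixel stage can dot it
// against the object-space normals stored in the normal map.
void objectSpaceLightColor(const Vec3 &vertexPos, const Vec3 &lightPosObj,
                           unsigned char rgb[3])
{
    Vec3 L = { lightPosObj.x - vertexPos.x,
               lightPosObj.y - vertexPos.y,
               lightPosObj.z - vertexPos.z };
    float len = sqrtf(L.x*L.x + L.y*L.y + L.z*L.z);
    if (len > 0.0f) { L.x /= len; L.y /= len; L.z /= len; }
    rgb[0] = (unsigned char)((L.x * 0.5f + 0.5f) * 255.0f);   // pack [-1,1] into [0,255]
    rgb[1] = (unsigned char)((L.y * 0.5f + 0.5f) * 255.0f);
    rgb[2] = (unsigned char)((L.z * 0.5f + 0.5f) * 255.0f);
}
```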

Object Space Bump Mapping!?!?
Don’t you need GF3 hardware for that? That is, for transforming the bump map normals into object space? Or is there some other neato way of doing that? (I’m working on a GF2)

Of course I was busy all day yesterday so the implementation will probably have to wait until the weekend

First implementation will be getting the dot3 bump mapping to work for non-moving objects, then for moving objects, and last for skinned meshes…

Also, I will not be using any vertex shaders or anything like that to start with. The bump mapping implementation will, however, move into a vertex shader later on… depending on how fast vertex shaders turn out to be on the GF2…
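If it’s useful to anyone else on similar hardware, the texture environment setup I’m planning to try looks roughly like this (an untested sketch on my part; it assumes GL_ARB_texture_env_combine and GL_ARB_texture_env_dot3 are exposed, and normalMapTex is just a placeholder name for the normal map texture object):

```cpp
// Untested sketch: single-pass diffuse dot3 via the texture environment,
// no vertex/fragment programs.  Requires GL_ARB_texture_env_combine and
// GL_ARB_texture_env_dot3 (check the extension string first).
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, normalMapTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);           // packed normal from the map
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PRIMARY_COLOR_ARB); // packed per-vertex light vector
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
// Per vertex, pack the (tangent- or object-space) light vector from [-1,1]
// into [0,1] and send it as the primary color:
//   glColor3f(L.x*0.5f + 0.5f, L.y*0.5f + 0.5f, L.z*0.5f + 0.5f);
```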

Thx for the replies!

/hObbE

Tangent-space :

  1. For each vertex, compute the tangent/normal/binormal triplet every time the mesh changes (a sketch of how the tangent and binormal can be derived follows at the end of this post). The normal expressed in tangent-space is straightforward: it is always the vector (0,0,1).
  2. For each vertex, compute the light vector in tangent-space (thanks to the triplets computed above) every time the light moves relative to the object (that is, if the light moves, if the object moves, if the object rotates, and of course if the mesh changes).

Object-space :

  1. For each vertex, compute the normal in object-space every time the mesh changes.
  2. For each vertex, compute the light vector in object-space every time the light moves relative to the object.

The tangent-space approach has the advantage of using a constant normal vector (0,0,1) for every vertex, which simplifies the equations.

The object-space approach is useful for directional lights because the light vector is identical for every vertex of the mesh, which saves the computation of light vectors in this particular case. Moreover, the object-space approach doesn’t need the whole tangent/normal/binormal triplet for every vertex: it just uses the normal.

World-space based approaches are not very popular, but keep in mind they can be useful, depending on your application. The main drawback of the world-space approach is that you have to recompute the normals of each vertex in world-space every time the object moves or rotates.
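As for step 1 of the tangent-space list above, the tangent and binormal of a triangle are usually derived from its vertex positions and texture coordinates, then accumulated and renormalized per vertex. A generic sketch (not tied to any particular engine):

```cpp
struct Vec3 { float x, y, z; };

// Per-triangle tangent/binormal from positions and UVs.  Accumulate these per
// vertex (for every triangle sharing the vertex) and renormalize to get a
// smooth tangent basis.
void triangleTangent(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2,
                     float u0, float v0, float u1, float v1, float u2, float v2,
                     Vec3 &tangentOut, Vec3 &binormalOut)
{
    Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    float r = 1.0f / (du1*dv2 - du2*dv1);       // degenerate UVs would need a guard here
    tangentOut  = { r*(dv2*e1.x - dv1*e2.x), r*(dv2*e1.y - dv1*e2.y), r*(dv2*e1.z - dv1*e2.z) };
    binormalOut = { r*(du1*e2.x - du2*e1.x), r*(du1*e2.y - du2*e1.y), r*(du1*e2.z - du2*e1.z) };
}
```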

[This message has been edited by vincoof (edited 09-19-2002).]

well now I am totally confused…

Tangent space: This I understand

Object space: This I don’t understand…
Especially since it seems so much easier than tangent space. What’s the catch?!

I was under the clear impression that the normals from the bump map (which are expressed in tangent space, or?) also need to be transformed into object (or world) space. That is, I would need to go through my texture and update the color of every pixel (or something)…

Any papers, tuts or other explanations?

World space:
This seems interesting, especially since I manually transform all my vertices (with normals) to world space. So if I just specify a light normal (RGB triplet) for the light vector, then I could do dot3 bump mapping in world space!?

According to my own explanation about the normals of the bumpmap being in tangent space, I would also have to convert them to world space (which would be as easy as converting them to object space).

This is becoming more and more interesting for every post!

regards!

/hObbE

For more details about object-space implementations, I think jwatte could answer since he’s much more enthusiastic about this technique than I am.

What I do not like about the object-space approach is that you need a specific bump texture for every model in your world (at least from what I’ve understood, because I’ve never implemented object-space bump-mapping).

The best paper I’ve found on the web is Mark Kilgard’s GDC 2000 Practical and Robust Bump-mapping Technique, which deals with NVIDIA implementations (register combiners), but the first chapters cover the mathematical concepts (and are therefore independent of NVIDIA cards), which I recommend reading.

>>What I do not like about the object-space approach is that you need a specific bump texture for every model in your world (at least from what I’ve understood, because I’ve never implemented object-space bump-mapping).<<

That is correct. Because each underlying polygon has a different normal, each polygon in the mesh will have to have its own ‘piece’ of the texture. With tangent space you don’t need this.

On a personal note, I did object-space bump mapping first, in software (at the time there were no bump mapping examples on the net like there are now). I did object space because it seemed more ‘logical’?

[This message has been edited by zed (edited 09-19-2002).]

Tangent space is looking down on the surface and is local to the surface. In other words it’s the space defined by the coordinate frame with normal, tangent and binormal.

Object space is the space defined by the untransformed vertex data, where the x, y & z coordinates align to the x, y and z axes, and the normals would be defined in that space. It solves very nasty artifacts when you want to generate a normal map from a high-res mesh, for example. Papers have described how to do this in tangent space, but they rely on fancy underlying surface reconstruction which might not work well with simple triangles, since the vector would have to be relative to the interpolated coordinate frame. In some cases it’s much simpler just to store the explicit orientation of the surface for that kind of thing; the analogy would be a light map, which has a texel distribution over the surface to store lighting, and in the same way you have a texel for each normal sample you want to use.

That’s why it’s already been said that for a bump map to exist in object space every point on the surface needs a unique normal texel (typically).

For tangent space the bump map exists more like a normal perturbation map (as described by Blinn) and the orientation of the normal map is relative to the coordinate frame. Kind of like a repeating wallpaper you can apply (although you don’t have to repeat; it becomes possible to reuse those normal perturbations because they are relative).

With skinning, even an object space normal would have to be transformed through the skinning matrices, and in fact it would be the light vector that you transformed through the inverse, whereas with a tangent space representation it would be the coordinate frame that got transformed before the light was transformed to tangent space using the coordinate frame.
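Sketched out for the tangent-space route (my notation; it assumes the blended skin matrix is close enough to a rigid transform that its 3x3 part keeps the frame orthonormal):

```cpp
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // rotation part of the blended skin matrix, row-major

Vec3 mul(const Mat3 &r, const Vec3 &v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

// Tangent-space route for a skinned vertex: deform the coordinate frame first,
// then project the (skinned-space) light onto it exactly as in the rigid case.
void skinTangentFrame(const Mat3 &skinRot,
                      Vec3 &tangent, Vec3 &binormal, Vec3 &normal)
{
    tangent  = mul(skinRot, tangent);
    binormal = mul(skinRot, binormal);
    normal   = mul(skinRot, normal);
}
```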

[This message has been edited by dorbie (edited 09-20-2002).]

Dorbie and vincoof,

With object space bump maps, you do NOT have to transform the normal by the skin matrix. Instead, you transform the light (and, for specular, the viewer) into un-skinned object space. This is simply the transpose of the upper-left 3x3 of your vertex position skinning matrix. And this means you don’t have to send a tangent space base into the GL, so there’s less data to transfer (especially if you’re doing software skinning).
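Something like this, in other words (a sketch; it assumes a row-major matrix whose upper-left 3x3 is a pure rotation, and it only handles directions — a point light position would also need the bone translation subtracted off first):

```cpp
struct Vec3 { float x, y, z; };

// Take a direction (e.g. light or view direction) from skinned space back into
// un-skinned object space using the transpose of the upper-left 3x3 of the
// vertex skinning matrix.  The transpose equals the inverse only while that
// 3x3 is a pure rotation (no scale or shear) - an assumption on my part.
Vec3 unskinDirection(const float skin[4][4], const Vec3 &d)
{
    return { skin[0][0]*d.x + skin[1][0]*d.y + skin[2][0]*d.z,    // note the
             skin[0][1]*d.x + skin[1][1]*d.y + skin[2][1]*d.z,    // transposed
             skin[0][2]*d.x + skin[1][2]*d.y + skin[2][2]*d.z };  // indexing
}
```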

You also don’t need a normal map per instance, although you need a normal map per mesh type to do it.

If you want reflection maps, you’ll have to post-transform back to cube map space, though, so at that point the two methods become more or less computationally and data transfer equivalent, I THINK (I’m still working this part out for myself :))

If I understand it right, you have to have an (object space) bumpmap for every mesh model.
So if you have an animated mesh, should you have a bumpmap for every key frame?
And can you interpolate those bumpmaps? Or am I missing the point?

Charles

you wouldn’t want to use a bumpmap for each keyframe, but my guess is that if you are using quake-style model keyframes (rather than skeletal animation) you would have a problem, as you don’t have explicit matrices to transform the light and viewer back into unskinned object space. that is - when you generate the keyframes, you lose all the transformation data.

rather than keyframe bumpmaps, i’m sure there would be a way to do it by working out the differences in the plane equations for each triangle for each keyframe, and generating a matrix from that, but for object space bumpmapping, skeletal mesh skinning would be much easier. then you can use the bone matrices and vertex weights rather than one matrix for each triangle (ouch!)

am i right jwatte? you’re a clever guy…

[This message has been edited by vshader (edited 09-21-2002).]

If you were using keyframed meshes and wanted tangent space bump mapping instead of object space, then you’d have to store a full tangent space per frame, which gets expensive about as fast as storing an object space bump map per keyframe.

If you’re blowing out your entire mesh for each frame, you lose, no matter what. My description is intended for skeletal animation with skinning (which, coincidentally, is what the initial question was about, too :))

Come to think of it, I think that translating the reflection vector out again DOES use as much data as tangent space mapping (a full 3x3 has to be sent to the fragment stage) BUT it’s less math, because with tangent space, you ALSO need to transform the reflection vector, and you’re still saving the transformation of the three tangent space basis vectors.

Coincidentally, I got a 9700 the other day, so now I’m hitting the ATI developer driver site about every thirty minutes waiting for ARB_fragment_program supporting drivers :)

Just wanted to say thanx for the replies guys! Will read and try to understand everything during the day.

As far as I have understood it:

TangentSpace: Good for static stuff that repeats itself. For example two opposite facing walls can have the same bumpmap without any problems

ObjectSpace: Good for stuff that moves (skinned meshes). Every triangle needs its own part of a bumpmap (this is what I meant, but thanx for clearing it up Dorbie). In a skinned mesh this is not a problem, since the mesh has a texture that is already mapped per triangle. If you had two opposite facing walls sharing the same bumpmap, one wall would have bumps and the other dimples.

Would be very interesting to see some screens from all the bumpmapping stuff people are/have been working on.

I will surely post mine when I have something to show, that is

Regards!
/hObbE

[This message has been edited by tobiaso (edited 09-23-2002).]

Yep you’re right jwatte, it’s the same principle as moving the light to object space. More efficient too.

As for object space needing a texture for each triangle, this is not what I said, tobiaso. They just need a unique mapping in texture space. Each position on the object needs a unique position on the texture map. Kind of like character skinning, but much more strict about reuse.

On other comments, you CAN keyframe an object space vector representation. Transforming the light position back through the mesh deformation matrix to object space gives you correct lighting for the animated mesh. This is what jwatte pointed out. So for a light in world space you go back through the inverse model matrix, then the inverse mesh deform (probably multiple matrices with weighted interpolation for this one), and you are in the same space the object normal is stored in.

The transformation chain looks something like this:

Tangent Space --> Object Space --(multimatrix interp)--> Deformed Mesh Space --> World Space --> Eye Space
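For the “multimatrix interp” step, one approximation (echoing the one-light-position-per-bone idea from the first post; just a sketch, and an approximation rather than an exact inverse of the blended matrix) is to take the light back through each influencing bone and blend the results by the vertex weights:

```cpp
struct Vec3 { float x, y, z; };
struct Bone { float rot[3][3]; Vec3 trans; };   // rigid bone transform, object -> deformed mesh space

// Per-vertex approximation: bring a point light (already in deformed mesh
// space) back into un-skinned object space through each influencing bone,
// then blend the results by the vertex weights.
Vec3 lightToObjectSpace(const Bone *bones, const int *index, const float *weight,
                        int numInfluences, const Vec3 &lightPos)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < numInfluences; ++i) {
        const Bone &b = bones[index[i]];
        Vec3 p = { lightPos.x - b.trans.x, lightPos.y - b.trans.y, lightPos.z - b.trans.z };
        // rigid inverse: multiply by the transposed rotation
        Vec3 l = { b.rot[0][0]*p.x + b.rot[1][0]*p.y + b.rot[2][0]*p.z,
                   b.rot[0][1]*p.x + b.rot[1][1]*p.y + b.rot[2][1]*p.z,
                   b.rot[0][2]*p.x + b.rot[1][2]*p.y + b.rot[2][2]*p.z };
        result.x += weight[i] * l.x;
        result.y += weight[i] * l.y;
        result.z += weight[i] * l.z;
    }
    return result;
}
```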

Depending on where you have your light & other vector representations, and where you do your lighting calculation you need to apply the appropriate transformations to all vectors to get to the same space before the dot products are performed.

It doesn’t really matter what space you have them in or what space you do your dot products in except that you want to minimize the work, minimize artifacts and have an intuitive workable data representation.

You could transform everything to tangent space or eye space and it would work, however only some things are possible without complex fragment-level vector transformation, so you are restricted to the spaces where a simple texture fetch gets you the correct normal vector, i.e. tangent space or object space. The others change too much. Sure, you could store multiple normal maps, one for each deformed mesh space, but you’d need to interpolate the normal colors in the fragment shader before the lighting dot products, using multitexture based on the mesh weighting, and it would be an extremely inefficient data representation with a big normal map for each keyframe. It’s not how anyone would do it.

I think the confusion may arise from the way meshes have been done in the past vs. the future. Object space normal representations on characters imply live skeletal deformation matrices, not explicitly hand-crafted meshes with unique normals. So object space is the undeformed object mesh with identity skeletal transformations, and that’s where the high detail object space normals are computed.

For static predeformed character animation meshes as seen in some games you could store some kind of coordinate frame at the vertices in the deformed meshes as a cue to the correct object space orientation, but you need something to tell you how to transform the vectors to object space (or tangent space) even if you’re not doing skeletal deformations on the fly. Anything else is just impractical.

[This message has been edited by dorbie (edited 09-23-2002).]

Thanks all for the explanations. They’re very clear and very useful!

Now I see better where object-space bump-mapping is the best choice.
I used to think that “bump-mapping”, as it sounds, was only useful for “bumps”. Thus I had taken for granted that bump-maps could only add ripples or waves or some other significant irregularity over a surface. From that point of view, tangent-space is the best choice, because you can switch to any other bump texture very easily.

But bump-mapping can also be used to increase the “geometric effect” of a surface. Say you have a logo over a surface, the logo being a texture, and you want to “engrave” the logo into the surface. In that case, you can assume that the logo is “fixed” over the surface and consequently you can compute a specific bump texture for this surface. Here object-space bump-mapping is better because it’s less CPU-intensive (and less GPU-intensive too), and the low flexibility of the bump texture selection is not a problem because it is taken for granted that the bump-map is fixed.

Though I still don’t see where world-space or eye-space bump-mapping would be really useful. They have some advantages for sure, but the disadvantages are too big IMHO.