View Full Version : Easy peasy bump mapping?

tobiaso

09-17-2002, 02:45 AM

I'm thinking of implementing bump mapping in my engine. What is the easiest way of doing this?

Dot3 seems "rather" easy but I have skinned animated meshes... how will this work?

I'm thinking of transforming the light vector into the space of each vertex. That is, I will end up with a vector of light positions (one for each bone matrix) that I can use when calculating the color of each vertex using tangent space. Will this work, or do I need to transform the tangent-space vectors in some way?

regards

hObbE

vincoof

09-17-2002, 03:29 AM

Hmmm I was about to say that cross-posting is bad, but according to the other post it's welcome.

Anyways, your method is (or at least seems) fine. All you need is to take care that your dot products are computed in a consistent space. That is, if you have the triplet tangent+normal+binormal in tangent space for each vertex, then you need to compute the light vector in tangent space for each vertex.

If you prefer computing bump-mapping in object-space, then get BOTH vertex coordinates (and normals etc) in object-space AND get light vector in object-space.

Idem for world-space or whatever-space.

As long as you're coherent, either is fine. It's just a matter of preference then.

Edit:

Ok I think I got what your problem really is.

You wonder if you have to recompute your tangent-space triplets tangent/normal/binormal every time the mesh changes?

The answer is: yes, unfortunately.

And if you wonder whether you have to recompute the triplets when the light(s) move(s), the answer is no (except if the mesh changes, obviously).

[This message has been edited by vincoof (edited 09-17-2002).]

tobiaso

09-17-2002, 03:37 AM

I agree that cross-posting is undesirable, and I will be more careful when considering the advancedness ;) of the questions I have.

Thx for the reply!

I will soon try to implement it too. If all works out I may post some comments and what was difficult...

/hObbE

Korval

09-17-2002, 09:41 AM

If your mesh is skinned, recomputing the tangent/binormal shouldn't be any more difficult than recomputing the normal. It'll take more time, however.

You may also want to consider doing the tangent-space transform per-pixel for skinned meshes (if your hardware supports it). It tends to look more correct.
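In pseudocode, skinning a tangent or binormal is the same weighted blend you already apply to the normal. A minimal sketch in plain Python (the `skin_direction` helper name is made up for illustration; it assumes rotation-only bone matrices):

```python
import math

def mat3_mul_vec(m, v):
    # m is a 3x3 matrix stored as a list of rows
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def skin_direction(v, bone_rots, weights):
    """Blend a normal/tangent/binormal through weighted bone rotations,
    exactly as you would skin the vertex normal, then renormalize."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bone_rots, weights):
        mv = mat3_mul_vec(m, v)
        out = [o + w * x for o, x in zip(out, mv)]
    return normalize(out)

# Blend a tangent halfway between identity and a 90-degree Z rotation:
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rot_z90  = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
tangent  = skin_direction([1.0, 0.0, 0.0], [identity, rot_z90], [0.5, 0.5])
# tangent now points halfway between +X and +Y, at unit length
```

After blending you renormalize; for the tangent you would typically also re-orthogonalize against the blended normal (Gram-Schmidt) before lighting.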

tobiaso

09-17-2002, 10:03 PM

Ok... the reason for needing to recompute the tangent vectors should then be that if one vertex (in a triangle) moves, the texture coordinates of the triangle have a different relative orientation (that is, tangent space has been "distorted").

Is this explanation correct?

Recomputing the tangent-space vectors does, however, fit rather nicely in my renderer... but there will of course be a performance hit...

Any hints on how to compute/recompute the tangent vectors? Maybe you could use the matrix of the vertex in some way (just as I do with the normal)?
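For reference, the usual per-triangle construction solves for the directions in which the texture's U and V coordinates increase across the triangle. A sketch (the `triangle_tangent` name is made up; a real implementation must also guard against degenerate UVs, and average/orthogonalize the results per vertex):

```python
def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Per-triangle tangent and binormal from positions and texture
    coordinates: solve  e1 = du1*T + dv1*B,  e2 = du2*T + dv2*B."""
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)  # assumes non-degenerate UVs
    tangent  = [r * (dv2 * e1[i] - dv1 * e2[i]) for i in range(3)]
    binormal = [r * (du1 * e2[i] - du2 * e1[i]) for i in range(3)]
    return tangent, binormal

# A unit triangle in the XY plane with an axis-aligned UV mapping:
t, b = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                        (0, 0), (1, 0), (0, 1))
# t points along +X (the U direction), b along +Y (the V direction)
```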

Hopefully I will be able to test something of this later today. (Damn work... always getting in the way of the fun stuff ;) )

regards!

/hObbE

Korval

09-18-2002, 10:11 AM

... You have a point. How would you go about doing this in a vertex program? I wonder if nVidia or someone has a paper/demo on this somewhere.

jwatte

09-18-2002, 10:13 AM

For skinned meshes, you may want to consider using object space bump maps. While these don't lend themselves as much to sharing, they have several properties that make them more efficient.

Specifically, you don't need a tangent space at all; instead, you just skin the light into object space, using the transpose of the matrix you would have used to skin the normal. Then you dot the light vector with the normal as pulled out of the normal map, and you're done. Much more efficient!
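A sketch of that idea, assuming a rotation-only (no scale/shear) skinning matrix so its transpose is its inverse; the helper names are illustrative, not from any real API:

```python
def transpose3(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mat3_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def light_to_object_space(light_dir, bone_rot):
    """Carry a light direction back into bind-pose object space, where
    the object-space normal map lives. For a rotation-only skinning
    matrix the inverse is simply the transpose."""
    return mat3_mul_vec(transpose3(bone_rot), light_dir)

# A bone rotated 90 degrees about Z: a light along +Y in skinned space
# comes out along +X in the un-rotated object space.
rot_z90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
obj_light = light_to_object_space([0.0, 1.0, 0.0], rot_z90)
```

With several weighted bones you would blend the back-transformed light per vertex the same way you blend skinned positions; per pixel you then just dot it with the normal fetched from the map.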

tobiaso

09-18-2002, 11:25 PM

Object Space Bump Mapping!?!?

Don't you need GF3 hardware for that? That is, for transforming the bump map normals into object space? Or is there some other neato way of doing that? (I'm working on a GF2)

Of course I was busy all day yesterday, so implementation will probably have to wait until the weekend :( ...

First implementation will be getting the dot3 bump mapping to work for non-moving objects, then for moving objects, and last for skinned meshes...

Also, I will not be using any vertex shaders or anything like that. The bump mapping implementation will, however, move into a vertex shader later on... depending on how fast the GF2 vertex shaders can be...

Thx for the replies!

/hObbE

vincoof

09-19-2002, 01:02 AM

Tangent-space :

1. For each vertex, compute the triplet tangent/normal/binormal, every time the mesh changes. The normal expressed in tangent space is straightforward: it is always the vector (0,0,1).

2. For each vertex, compute the light vectors in tangent space (thanks to the triplets computed above) every time the light moves relative to the object (that is, if the light moves, if the object moves, if the object rotates, and of course if the mesh changes).

Object-space :

1. For each vertex, compute the normals in object-space everytime the mesh changes

2. For each vertex, compute the light vectors in object-space everytime the light moves relatively to the object.

The tangent-space approach has the ability to use a constant normal vector (0,0,1) for every vertex, which simplifies the equations.

The object-space approach is useful for directional lights because the light vectors are identical for every vertex of the mesh, which saves the computation of light vectors in this particular case. Moreover, the object-space approach doesn't need the whole triplet tangent/normal/binormal for all vertices: it just uses the normal.

World-space-based approaches are not very popular, but keep in mind they can be useful, depending on your application. The worst point of the world-space approach is that you have to recompute the normals of each vertex in world space every time the object moves or rotates.
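Step 2 of the tangent-space recipe boils down to a change of basis per vertex: dot the object-space light vector with each of the three triplet vectors. A minimal sketch, assuming an orthonormal tangent/binormal/normal triplet (`to_tangent_space` is a made-up name):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, binormal, normal):
    """Express an object-space vector in the vertex's tangent frame.
    With an orthonormal T/B/N this dot-product form is the inverse
    (= transpose) of the tangent-to-object rotation."""
    return [dot(v, tangent), dot(v, binormal), dot(v, normal)]

# A vertex whose tangent frame happens to coincide with the object
# axes leaves the light vector unchanged:
light_ts = to_tangent_space([0.3, 0.4, 0.866],
                            [1, 0, 0], [0, 1, 0], [0, 0, 1])
# A light pointing straight along the surface normal comes out as
# (0, 0, 1), matching the constant tangent-space normal in step 1.
```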

[This message has been edited by vincoof (edited 09-19-2002).]

tobiaso

09-19-2002, 01:35 AM

well now I am totally confused... :)

Tangent space: This I understand

Object space: This I don't understand...

Especially since it seems so much easier than tangent space. What's the catch?!

I was under the clear impression that the normals from the bump map (which are expressed in tangent space (or?)) also need to be transformed into object (or world) space. That is, I need to go through my texture and update the color of every pixel (or something)...

Any papers, tuts or other explanations?

World space:

This seems interesting especially since I manually transform all my vertices(with normals) to world space. So if I just specify a light normal (rgb triplet) for the light vector then I could do dot3 bump mapping in world space!?

According to my own explanation about the normals of the bump map being in tangent space, I would also have to convert them to world space (which would be as easy as converting them to object space).

This is becoming more and more interesting for every post!

regards!

/hObbE

vincoof

09-19-2002, 02:18 AM

For more details about object-space implementations, I think that jwatte could answer, since he's much more enthusiastic about this technique than me.

What I do not like about the object-space approach is that you need a specific bump texture for every model in your world (at least from what I've understood, because I've never implemented object-space bump-mapping).

The best paper I've found over the web is Mark Kilgard's GDC 2000 Practical and Robust Bump-mapping Technique (http://www.nvidia.com/view.asp?IO=Practical_Bumpmapping_Tech), which deals with NVIDIA implementations (register combiners), but the first chapters work on mathematical concepts (therefore independent of NVIDIA cards) that I recommend reading.

zed

>>What I do not like into the object-space approach is that you need a specific bump texture for every model in your world (at least from what I've understood, because I've never implemented object-space bump-mapping).<<

that is correct. cause each underlying polygon has a different normal, each polygon in the mesh will have to have its own 'piece' of the texture. with tangent space you don't need this

on a personal note, i did object-space bump mapping first, in software (at the time there were no bump mapping examples on the net like now). i did object space cause it seemed more 'logical'?

[This message has been edited by zed (edited 09-19-2002).]

dorbie

09-20-2002, 11:30 AM

Tangent space is looking down on the surface and is local to the surface. In other words it's the space defined by the coordinate frame with normal, tangent and binormal.

Object space is the space defined by the untransformed vertex data, where the x, y & z coordinates align to the x, y and z axes and would normally be defined in that space. It solves very nasty artifacts where you want to generate a normal map from a high-res mesh, for example. Papers have described how to do this in tangent space, but they have fancy underlying surface reconstruction which might not work well with simple triangles. The vector would have to be relative to the interpolated coordinate frame. In some cases it's much simpler just to store the explicit orientation of the surface for that kind of stuff. The analogy would be a light map that has a texel distribution over the surface; in the same way, you have a texel for each normal sample you want to use.

That's why it's already been said that for a bump map to exist in object space every point on the surface needs a unique normal texel (typically).

For tangent space the bump map exists more like a normal perturbation map (as described by Blinn), and the orientation of the normal map is relative to the coordinate frame. Kinda like a repeating wallpaper you can apply (although you don't have to repeat; it becomes possible to reuse those normal perturbations because the representation is relative).

With skinning, even an object-space normal would have to be transformed through the skinning matrices, and in fact it would be the light vector that you transformed through the inverse, whereas with a tangent-space representation it would be the coordinate frame that got transformed before the light was transformed to tangent space using the coordinate frame.

[This message has been edited by dorbie (edited 09-20-2002).]

jwatte

09-20-2002, 07:00 PM

Dorbie and vincoof,

With object space bump maps, you do NOT have to transform the normal by the skin matrix. Instead, you transform the light (and, for specular, the viewer) into un-skinned object space. This is simply the transpose of the upper-left 3x3 of your vertex position skinning matrix. And this means you don't have to send a tangent space basis into the GL, so there's less data to transfer (especially if you're doing software skinning).

You also don't need a normal map per instance, although you need a normal map per mesh type to do it.

If you want reflection maps, you'll have to post-transform back to cube map space, though, so at that point the two methods become more or less computationally and data transfer equivalent, I THINK (I'm still working this part out for myself :-)

Pentagram

09-21-2002, 05:15 AM

If I understand it right, you have to have an (object-space) bump map for every mesh model.

So if you have an animated mesh, should you have a bump map for every key frame?

And can you interpolate those bump maps? Or am I missing the point?

Charles

vshader

09-21-2002, 07:56 AM

you wouldn't want to use a bumpmap for each keyframe, but my guess is that if you are using quake-style model keyframes (rather than skeletal animation) you would have a problem, as you don't have explicit matrices to transform the light and viewer back into unskinned object space. that is - when you generate the keyframes, you lose all the transformation data.

rather than keyframe bumpmaps, i'm sure there would be a way to do it by working out the differences in the plane equations for each triangle for each keyframe, and generating a matrix from that, but for object space bumpmapping, skeletal mesh skinning would be much easier. then you can use the bone matrices and vertex weights rather than one matrix for each triangle (ouch!)

am i right jwatte? you're a clever guy...

[This message has been edited by vshader (edited 09-21-2002).]

jwatte

09-21-2002, 03:58 PM

If you were using keyframed meshes and wanted tangent space bump mapping instead of object space, then you'd have to store a full tangent space per frame, which gets expensive about as fast as storing an object space bump map per keyframe.

If you're blowing out your entire mesh for each frame, you lose, no matter what. My description is intended for skeletal animation with skinning (which, coincidentally, is what the initial question was about, too :-)

Come to think of it, I think that translating the reflection vector out again DOES use as much data as tangent space mapping (a full 3x3 has to be sent to the fragment stage) BUT it's less math, because with tangent space, you ALSO need to transform the reflection vector, and you're still saving the transformation of the three tangent space basis vectors.

Coincidentally, I got a 9700 the other day, so now I'm hitting the ATI developer driver site about every thirty minutes waiting for ARB_fragment_program supporting drivers :-)

tobiaso

09-22-2002, 09:16 PM

Just wanted to say thanx for the replies guys! Will read and try to understand everything during the day.

As far as I have understood it:

TangentSpace: Good for static stuff that repeats itself. For example two opposite facing walls can have the same bumpmap without any problems

ObjectSpace: Good for stuff that moves (skinned meshes). Every triangle needs its own part of a bump map (this is what I meant, but thanks for clearing it up Dorbie). In a skinned mesh this is not a problem, since the mesh has a texture that already is per triangle. If you had two opposite facing walls sharing the same bump map, one wall would have bumps, the other dimples.

Would be very interesting to see some screens from all the bumpmapping stuff people are/have been working on.

I will sure post mine when I have something to show, that is ;)

Regards!

/hObbE

[This message has been edited by tobiaso (edited 09-23-2002).]

dorbie

09-22-2002, 09:40 PM

Yep you're right jwatte, it's the same principle as moving the light to object space. More efficient too.

As for object space needing a texture for each triangle, this is not what I said tobiaso. They just need a unique mapping in texture space. Each position on the object needs a unique position on the texture map. Kind of like character skinning, but much more strict about reuse.

On other comments, you CAN keyframe an object-space vector representation. Transforming the light position back through the mesh deformation matrix to object space gives you correct lighting on the animated mesh. This is what jwatte pointed out. So for a light in world space you go back through the inverse model matrix, then the inverse mesh deform (probably multiple matrices with weighted interp for this one), and you are in the same space the object normal is stored in.

The transformation chain looks something like this:

Tangent Space --> Object Space --(multimatrix interp)--> Deformed Mesh Space --> World Space --> Eye Space

Depending on where you have your light & other vector representations, and where you do your lighting calculation you need to apply the appropriate transformations to all vectors to get to the same space before the dot products are performed.
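As a concrete (toy) illustration of walking that chain right-to-left with rotation-only matrices, where each inverse is just the transpose:

```python
def mat3_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def transpose3(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

# Rotation-only stand-ins for two links of the chain:
deform   = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # object -> deformed mesh
to_world = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]   # deformed mesh -> world

# The light is given in world space; an object-space normal map needs
# it in undeformed object space, so walk right-to-left via transposes:
light_world = [0.0, 0.0, 1.0]
light_mesh  = mat3_mul_vec(transpose3(to_world), light_world)
light_obj   = mat3_mul_vec(transpose3(deform), light_mesh)
# Pushing light_obj forward through deform then to_world recovers
# light_world, confirming the chain is consistent.
```

With scaling or shear in any link you would need the true inverse (for points) or inverse-transpose (for normals) instead of the plain transpose.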

It doesn't really matter what space you have them in or what space you do your dot products in except that you want to minimize the work, minimize artifacts and have an intuitive workable data representation.

You could transform everything to tangent space or eye space and it would work, however only some things are possible without complex fragment-level vector transformation, so you are restricted to the spaces where a simple texture fetch gets you the correct normal vector, i.e. tangent space or object space. The others change too much. Sure, you could store multiple normal maps, one for each deformed mesh space, but you'd need to interpolate the normal colors in the fragment shader before the lighting dot products, using multitexture based on mesh weighting, and it would be an extremely inefficient data representation with a big normal map for each keyframe. It's not how anyone would do it.

I think the confusion may arise from the way meshes have been done in the past vs. the future. Object-space normal representations on characters imply live skeletal deformation matrices, not explicitly hand-crafted meshes with unique normals. So object space is the undeformed object mesh with identity skeletal transformations, and that's where the high-detail object-space normals are computed.

For static predeformed character animation meshes as seen in some games you could store some kind of coordinate frame at the vertices in the deformed meshes as a cue to the correct object space orientation, but you need something to tell you how to transform the vectors to object space (or tangent space) even if you're not doing skeletal deformations on the fly. Anything else is just impractical.

[This message has been edited by dorbie (edited 09-23-2002).]

vincoof

09-27-2002, 01:06 AM

Thanks all for the explanations. They're very clear and very useful!

Now I see better where object-space bump-mapping is the best choice.

I used to think that "bump-mapping", as it sounds, was useful for "bumps". Thus I had taken for granted that bump-maps could only add ripples or waves or any significant irregularity over a surface. In that point of view, tangent-space is the best choice, because you can switch to any other bump texture very easily.

But bump-mapping could also be used in order to increase the "geometric effect" of a surface. Say you have a logo over a surface, the logo being a texture, and you want to "engrave" the logo into the surface. In that case, you can assume that the logo is "fixed" over the surface and consequently you can compute a specific bump texture for this surface. In that case object-space bump-mapping is better because it's less cpu-intensive (and less gpu-intensive too) and the low flexibility for the bump texture selection is not a problem because it is taken for granted that the bump-map is fixed.

Though I still don't see where world-space or eye-space bump-mapping would be really useful. They have some advantages for sure, but the disadvantages are too big IMHO.

Korval

09-27-2002, 09:20 AM

Let me make sure I fully understand the concept of object-space bump mapping.

OK, per-vertex, you must transform the light from world-space into that vertex's space. How do you do that in a skinned mesh? Do you take the inverse of the skinning matrix (or use inverse matrices from the start)?

Granted that, you then, per-pixel, take this light vector (in object space) and dot the normal from the bump map with it. So, precisely, what is the bump map? Is it just a recording of the normal at a particular vertex? This is not a trivial thing to produce, and it has to be altered if the mesh changes. How do you go about building one of these bump maps from a height map (and, of course, texture mapping from that height map onto the model)?

dorbie

09-27-2002, 11:00 PM

It is the normal of the surface in object space. It doesn't need to change if the mesh changes because you use the mesh deformation matrix to transform the light into object space for any changes that might apply to the mesh. Think of the object space normal as the absolute orientation of the surface at each point before any deformations are applied to the mesh.

I'm not sure what your question means w.r.t. always using the inverse matrix. Basically, if you look at my transformation chain, you move from left to right using the matrix and right to left using the inverse matrix; however, transforming a point through a matrix is not the same as transforming a vector :-). So effectively you're using the other matrix anyway.

See jwatte's post on this.

[This message has been edited by dorbie (edited 09-28-2002).]

jwatte

09-28-2002, 08:21 AM

First, to get the light into object space, you transform the light by the inverse of the normal transform matrix (which would take an object-space normal into light (world) space).

As the normal matrix is the transpose of the inverse of the position matrix, to be fully correct, you have to invert that again. However, if your animation doesn't use scale or shear, the normal matrix is just the position matrix with elements 12/13/14 zeroed out, so the inverse is just the transpose of that.
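That claim is easy to sanity-check numerically: for a matrix with no scale or shear (a pure rotation), multiplying it by its own transpose gives back the identity, so the transpose really does serve as the inverse:

```python
import math

def mat3_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def transpose3(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

cos_a, sin_a = math.cos(0.7), math.sin(0.7)
rot = [[cos_a, -sin_a, 0.0],
       [sin_a,  cos_a, 0.0],
       [0.0,    0.0,   1.0]]  # rotation about Z; no scale or shear

# transpose(M) * M should come out as the 3x3 identity:
product = mat3_mul(transpose3(rot), rot)
```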

Regarding generating normal maps, it's really no harder generating those in object space than generating them in tangent space. Of course, it IS harder to generate these than to just take some pre-baked map and slap it on an existing mesh. I'm aware of two ways of generating normal maps:

a) Take a heightfield-style bump map, and run a highpass filter on it to generate the normal map. The local differential in the bump map must be applied to the direction of the normal, which means that you have to know how the texture is mapped onto the object. Either this is implicit (for tangent space maps) or you have to examine the geometry to figure out which direction the normal, the S and the T coordinates point.

b) Take a low-poly version of your mesh and a high-poly version of your mesh. Shoot rays out from the low-poly version and find the closest intersection with the high-poly mesh. You can look at the distance traveled and generate a height map that way, and then run option a), or you can just look at the normal of the high-poly object at the point where the ray finds it. ATI has some nice tools and demos for this technique.
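Option a) can be sketched in a few lines: take central differences of the height field and tilt the unperturbed (0, 0, 1) normal by the negated gradient. (Hypothetical helper name; the `2.0` z term controls bump strength, and sampling wraps around at the edges.)

```python
import math

def height_to_normal_map(height, scale=1.0):
    """Tangent-space normal map from a height field via central
    differences; the unperturbed normal is (0, 0, 1) and sampling
    wraps around at the edges."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            dx = (height[y][(x + 1) % w] - height[y][x - 1]) * scale
            dy = (height[(y + 1) % h][x] - height[y - 1][x]) * scale
            n = (-dx, -dy, 2.0)  # 2.0 sets the bump strength
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(v / length for v in n))
        normals.append(row)
    return normals

# A flat height field yields the constant normal (0, 0, 1) everywhere:
nmap = height_to_normal_map([[0.0] * 4 for _ in range(4)])
```

To store the result in an RGB texture you would then remap each component from [-1, 1] to [0, 255], which is what the dot3 combiner setup expects.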

If by changing the normal map when the mesh changes you mean that, if you in your modeler change your mesh, you have to re-calculate the normal map, then that is correct. That's usually part of your export/compile/prepare/package tool chain. Tools, in general, are, IMO, the biggest part of creating a new engine these days.

tobiaso

09-29-2002, 10:02 PM

This is really an interesting discussion! I'm learning by the minute! Also, I have implemented the tangent-space dot3 bump mapping in my engine! I will continue work on skinned meshes and will investigate doing object-space bump mapping. Pretty much thanks to you guys/gals!

A screen:

hem.fyristorg.com/tobias.ohlsson/EngineII/bump_screen.jpg

It is not that pretty, but it is a start!

How could you accommodate colored lights, multiple lights, and light attenuation using bump mapping?

Does anyone know whether the ARB or EXT extensions are more common (currently I only support the EXT one)? I saw a post saying that the ARB version is somewhat different on NV and ATI. Some scaling differences...

I tested my app at work (on a strange Intel (brrr) gfx card), and it turned out to only support the ARB version...

Regards!

/hObbE

vincoof

09-29-2002, 10:54 PM

Looks cool! We clearly see the per-pixel lighting effect, and the bumps help a lot in determining the light location on the cube.

Colored lights are easily done in multipass, and can be done in a single pass depending on how you use the texturing stages and, obviously, how many texture units your card supports.

Multiple lights are logically done in multipass. Though if you have a very limited number of lights (say, 2 or 3), if your card supports a high number of texture units (say, 4 at the very least), and if you want optimal performance, then sometimes you can do it in a single pass. Anyway, I don't recommend doing so, because it tends to make the graphics engine a lot less flexible. One may say: "just keep this in mind for critical cases".

There are many techniques for light attenuation, depending on which hardware you have and what kind of attenuation you look for (linear or quadratic). And to be honest I haven't implemented any of them yet.

About extensions, that's simple : always use ARB if possible.

Otherwise use whatever you like. EXT once was a kind of standard amongst extensions, but today only ARB is.

If you use other extensions, you just have to keep in mind that they're not meant to be supported by all hardware, even though some (rare) vendor-specific extensions are widely supported and some of them even became part of a later OpenGL spec (I'm especially thinking of the NV_blend_square extension, which is even supported by ATI hardware and is now included in the OpenGL 1.4 specification).
