View Full Version : Object space bump mapping

11-11-2002, 09:38 AM
I'm glad to see someone has finally posted an example of object space bump mapping with skeletal deformation (opengl.org story). This seems like a much more attractive option to me for detail reconstruction of objects. Does anyone have any thoughts on the disadvantages of this approach vs tangent space normal maps?

Tiling wouldn't be possible over surfaces that change orientation (not needed for detail preservation), but I'd expect symmetry to still be exploitable.

[This message has been edited by dorbie (edited 11-11-2002).]

11-11-2002, 10:16 AM
I like that approach; it looks much more stable to me than tangent space, where you have to teach your artists tons of constraints they can't really verify... this approach lets every model get a map somehow. Can't wait to get home to try some stuff out.

I remember doing this years ago on my GF2 MX, just without the bump map. And I thought, hey, we could do that including bump mapping. Then I thought, geez, I'd have to get the object space normals... and I didn't know how to get them.

Thanks ATI for the NormalMapper tool :D It saves a lot of trouble :D Hope it works; we'll see over the next few days :D

11-11-2002, 10:44 AM
Hm... the ATI normal mapper is open source. Does anyone know the specs for 3dsmax materials, so we can trace the color, glossiness, specularity etc. of each point in the high-res model and store that in the map of the low-res model as well? Then we could let the artists do really high-res models and just map them completely onto the low-res model.

Hm... and then we could calculate the horizon for each texel on the low-res map, for horizon mapping. We would then get self-shadowing for free (just some per-pixel math on the rendered model, no additional fillrate), and soft at that (too soft possibly, but who cares? it looks sweet).

Then we could move over to indexed shadow mapping or something like that, hehe :D At least we could drop the fillrate-intensive shadow volumes and take the soft shadow maps instead, without worrying about artifacts, since we don't need the shadow comparison to start near the surface; we get proper self-shadowing from the horizon maps.

Some inspiration from a crazy brain :D

11-11-2002, 11:17 AM
The major disadvantage is, like you say, that you need a unique mapping and it must be 100% correct; this is very time consuming to do (especially if the person that does the texture mapping is yourself, like me :) ).
Check an IOTD on www.flipcode.com (http://www.flipcode.com) from about a month ago by Charles Bloom; there's a link there to a paper on some sort of automatic unwrapping method.
Also, as each polygon must have its own unique texture, the quality is usually much lower: e.g. for a person you will need to map him/her out into the whole 512x512 texture and can't mirror-map (which would give twice the image quality).
Personally I've been using object space mapping for everything since 1999 (WRT diffuse/decal textures), even on my TNT1.
It is the future; we only share textures etc. because A/ the artists are lazy :) B/ the hardware can't handle so many.

11-11-2002, 12:10 PM
Well, mirroring is not much of a problem, just mirror the whole mesh :D (check the ATI normal mapper demonstration model, it is only half the model... even saves space on the model side :D).
(I mean, if the texture is mirrorable, then the model has to be symmetric as well, in the base position.)

11-11-2002, 01:04 PM
OK, so it seems there is a general belief that this is better, and I agree. You say this is time consuming, but I'm really just considering it as a better approach for object simplification with detail preservation. That should be no more time consuming than the tangent space approach, since it's auto-generated.

Although when I think about it the Doom 3 dual normal & bump map would be a pain to implement. You couldn't add the vector as easily because one is in tangent space (and would always be) and the other is in object space. It could still be done I think by transforming the tangent space vector into object space and making some simple (and I think valid) assumptions about axis alignment of tangent and binormal, before adding the components and renormalizing. It's sure more complex though.
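The addition dorbie sketches might look roughly like the following. This is only a sketch of one naive way to do it, under his stated assumption that the interpolated tangent frame (T, B, N) is available per texel or per vertex; the `Vec3` type and `combineNormals` helper are hypothetical, not from any tool mentioned in the thread.

```cpp
#include <cmath>

// Hypothetical minimal vector type for illustration.
struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

// Rotate the tangent-space detail normal into object space with the
// frame (T, B, N), add its perturbation relative to the unperturbed
// normal N onto the object-space map normal, and renormalize.
// nObj: normal from the object-space map; nTan: tangent-space detail normal.
Vec3 combineNormals(Vec3 nObj, Vec3 nTan, Vec3 T, Vec3 B, Vec3 N) {
    // tangent -> object space: nTan.x*T + nTan.y*B + nTan.z*N
    Vec3 d{ nTan.x*T.x + nTan.y*B.x + nTan.z*N.x,
            nTan.x*T.y + nTan.y*B.y + nTan.z*N.y,
            nTan.x*T.z + nTan.y*B.z + nTan.z*N.z };
    return normalize(Vec3{ nObj.x + d.x - N.x,
                           nObj.y + d.y - N.y,
                           nObj.z + d.z - N.z });
}
```

An unperturbed detail normal (0,0,1) with an axis-aligned frame leaves the object-space normal unchanged, which is a quick sanity check for any implementation of this.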

[This message has been edited by dorbie (edited 11-11-2002).]

11-11-2002, 01:46 PM
Hmm, I like the idea of not having to transform the light vector per vertex, but is it really worth the cost of having to throw out tileable normal maps (at least for non-planar surfaces)? How much video memory do you guys have on your cards? Mine seems to be limited to 128 megs ;)

-- Zeno

11-11-2002, 02:20 PM
Again, this is mainly about detail preserving simplification as far as I'm concerned. I think there's room for both. I think there's less ambiguity with the object space rep, it seems like it would be much better behaved in general, but each to their own.

11-11-2002, 05:45 PM
Oooooooh! That's the kind of normal map generator I was going to write for LightWave3D[7]. I knew there was a difference between object space and tangent space... I just didn't know what it was yet.

I would definitely prefer object space bump mapping over tangent space when texturing characters (this is what I'm into :) ). It would be so much faster!

11-11-2002, 05:55 PM
Originally posted by dorbie:
OK, so it seems there is a general belief that this is better, and I agree. You say this is time consuming, but I'm really just considering it as a better approach for object simplification with detail preservation. That should be no more time consuming than the tangent space approach, since it's auto-generated.

Although when I think about it the Doom 3 dual normal & bump map would be a pain to implement. You couldn't add the vector as easily because one is in tangent space (and would always be) and the other is in object space. It could still be done I think by transforming the tangent space vector into object space and making some simple (and I think valid) assumptions about axis alignment of tangent and binormal, before adding the components and renormalizing. It's sure more complex though.

[This message has been edited by dorbie (edited 11-11-2002).]

The concept of 'adding' together two normals is easy to grasp, but personally I found actually doing it very difficult (it has to be done per pixel; each pixel has to be 'reoriented').
But then again I'm terrible at maths, and I'm sure most of you won't have difficulties :)

11-11-2002, 06:01 PM
I just read some of that article. I didn't think of the reverse effect it might have with skinning. From what I understand, and from what zed just said, I think it would be faster for skinned meshes to deal with tangent space, but for static objects object space would be ideal.

"Why ya have to go and make things so complicated"


11-11-2002, 07:52 PM
I think it depends which space you decide to do your fragment calculations in. You have a choice ;-)

11-12-2002, 12:33 AM
I think they are perfect for anything in an engine derived from some sort of RigidBody :D

at least, i'll use it for my spaceships.. and the other vehicles, etc..

11-12-2002, 03:57 AM
I like that approach; it looks much more stable to me than tangent space, where you have to teach your artists tons of constraints they can't really verify...

Uh, I don't know much about this so maybe I should stfu/rtfa, but wasn't tangent space invented to get rid of the object-space constraint that you need to be careful not to rotate, anisotropically scale, etc. the height/normal map?
Of course if it's auto-generated you can make sure that "V is up", but then you're not explaining anything to artists anyhow :)
Not trying to sound defensive; I just had the impression using tangent space was a must for flexibility, but I haven't used it yet :).
And what Zeno said :)

11-12-2002, 04:06 AM
Again, you're thinking of the texture as something applied to an object in a texturing process, rather than as detail generated from the object as part of simplification. With a vector map I don't think you have the aniso mapping and scaling issues you suggest, but it depends on your intent.

Tangent space makes sense when applying a texture as wallpaper (Blinn invented bump mapping WITH a tangent space formulation in one go; there was no iteration), but object space seems more attractive for detail preservation.

As for consistency, I'm talking about the interpolation of the coordinate frame between vertices in a tangent space implementation, used as the base for the normal map, which is not present with an object space representation. Of course when you deform the mesh this begins to kick in, but it seems like a MUCH better starting point to me and, as I said, better behaved.

11-12-2002, 05:15 AM
Sorry for my idiocy, but what do you mean by 'better behaved', dorbie?


11-12-2002, 06:03 AM
Less prone to artifacts and interpolation approximations, due to vector lerp.

[This message has been edited by dorbie (edited 11-12-2002).]

11-12-2002, 07:46 AM
The main reason to go with object space or texture space normals has to do with reuse. Texture space lets you decouple the normals from the object, so the texture is tileable/repeatable and can be applied to different objects. Object space is just that: tied to the object. When the object moves, the normal map must be "updated" to account for the change in the object. In the article/demo you guys are talking about, the observation is made (and it's a generally acceptable one) that instead of updating the normals to match the object, you update the light with the opposite of the object's change. If it wasn't clear, when I say "change" and "opposite" I am referring to transforms. For rigid objects this is a rather nifty idea. For skeletal objects it starts to become a pain, since you now have multiple transforms (one for each bone, or possibly more with animation blending). Now you must reverse the transform for each bone, update the light vector accordingly, and weight the light vector. Now the CPU is really starting to get involved, since vertex programs STILL DON'T TAKE ARBITRARY SETS OF DATA!!! They are still one vertex in and one vertex out. Sure, you can get more than one vertex over the bus by sticking it somewhere else, but it's still weird. The number of operations to skin a mesh is a lot (like 90 something) without lighting. And of course you have to stick all the damn matrices somewhere. In general, everything is a big mess.
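The "reverse the transform for each bone, update the light vector accordingly and weight it" step could be sketched as below. This is only an illustration of the idea under the assumption that the rotation part of each inverse bone transform is available; `Vec3`, `Mat3` and `skinLight` are hypothetical names, not from the article or any API.

```cpp
#include <cmath>

// Hypothetical minimal types for illustration.
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // rotation part of an inverse bone transform

static Vec3 mul(const Mat3& a, Vec3 v) {
    return Vec3{ a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
                 a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
                 a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
}

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

// Per-vertex light vector for a skinned mesh with an object-space normal
// map: bring the light into each influencing bone's rest space, then blend
// by the skinning weights.
Vec3 skinLight(const Mat3 invBone[], const float weight[], int numBones,
               Vec3 lightDir) {
    Vec3 l{0, 0, 0};
    for (int i = 0; i < numBones; ++i) {
        Vec3 li = mul(invBone[i], lightDir);   // light in bone i's rest space
        l.x += weight[i]*li.x;
        l.y += weight[i]*li.y;
        l.z += weight[i]*li.z;
    }
    return normalize(l);
}
```

This is the per-vertex CPU work the post is complaining about: it has to run for every lit vertex, every frame, whenever the bones move.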

My final thoughts about object space: take different objects and try to use the same texture. Sure you can, but the normal texture is different for each one. Sure, the combiners have less work, but who cares. The one thing I am glad to see is the increase in the number of combiners and the general increase in speed. I still wish you could use more than 4 textures on a GeForce 4, but heck, the 8 combiners are nice. For a single object repeated multiple times, object space still costs you: you can use the same texture, but you have to mangle the transformed light vectors per object. But then again you have to do the same with the light vector to get it into tangent space.



11-12-2002, 08:03 AM
The point about reuse has already been made in the first post.

Animating an object with an object space representation is largely similar to animating one with tangent space mapping. The normal map does not need to be recreated under deformation any more than the tangent space map does. Again, it depends on your assumptions about the space you'll be performing the lighting in. There is also a deformed space between object and tangent transformations that I described some weeks ago in a post on opengl.org.

Of course for rigid body stuff it's a complete no brainer, object space is very nice and eliminates the per vertex vector coordinate transform to tangent space.
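For comparison, the per-vertex transform that object space eliminates looks roughly like this. It is a sketch only; the `Vec3` type and helper name are hypothetical, and the (T, B, N) frame is assumed to be stored per vertex as in a typical tangent space setup.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // hypothetical minimal type

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-vertex cost of tangent space lighting: the light vector must be
// rotated into each vertex's (T, B, N) frame before interpolation. With an
// object-space map this per-vertex work disappears; the light is
// transformed once per object instead.
Vec3 lightToTangentSpace(Vec3 L, Vec3 T, Vec3 B, Vec3 N) {
    return Vec3{ dot(L, T), dot(L, B), dot(L, N) };
}
```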

You have to realize that this stuff is really nasty when you have significant changes in the coordinate frame between vertices. The more you simplify, the worse the interpolation errors get; maybe you could bake some error correction into the tangent space normal map, but my head hurts just thinking about it. Object space maps just avoid all of that. You're only left to worry about the local effect approximations of light and view vectors.

[This message has been edited by dorbie (edited 11-12-2002).]

11-12-2002, 11:25 AM
I wrote some plugins for 3dsmax a long time ago. They aren't completely finished, because I discontinued development, but they are functional and pretty easy to use. If there seems to be enough interest, I will probably finish them and release the code.

One of the plugins generates normal maps for walls, and the other one does something like the ATI plugin: it traces rays in the direction of the normals of a low resolution surface and captures the detail of the high resolution object. You can extract object and tangent space normals and also colors. I was planning to also extract displacements within an n-patch surface, but as I said, I've stopped development for now.

You can checkout the plugins and a simple viewer here:

11-12-2002, 11:56 AM
Wow, those last two images are pretty cool. They really show the detail a bump map can show :D.

If I had 3dstudio I'd download them in a flash... but I only have LightWave3D[7]. I'm gonna write myself an object-space normal map generator when I finish my mesh-to-tri-strips algorithm. Gah, I don't want to use my brain just yet though... I'm gonna go play with some register combiners and add simple stuff to Spider3D ;).

btw, castano, I know how people like to be complimented on their work, so I will contribute right now. Your Titan engine was pretty cool. I never really used it, BUT it taught me a few things about OpenGL that I needed to know back then.

11-12-2002, 12:55 PM
Anyway, thanks for the compliments; writing the engine was also a great learning exercise. :-)

The plugin was originally an external command line tool, I integrated it in max for ease of use, but will write a standalone application again if there is enough interest.

For now, you can at least test the BumpViewer to see the results for yourself. I'd appreciate any kind of feedback and bug reports.

Talking about the topic again: one of the things you have to take care of when using tangent space bump maps is that the tangent space you use for rendering must be identical to the tangent space used to generate the normal map. For this reason you cannot use any kind of LOD in conjunction with tangent space bump mapping if you want to avoid artifacts. On BumpCharm I used nvidia's MeshMender, so anybody can obtain the same tangent space basis with ease. You can also extract object space normals though.

11-12-2002, 06:45 PM
We had a very long thread about this, where we even sketched out what the various shaders would look like, a few months back.

I started out in the object space camp, because I like the shorter shaders, but I came to appreciate the memory savings of tangent space through the course of the investigation. Regular lighting calculation is so much cheaper in object space, though; saves a lot of fragment program ops.

Note that reflection mapping on bump maps is also much cheaper in object space, as you have eye and normal right there; no need to generate a reflection in tangent space and then un-skin that reflection to world texture space.
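The cheapness of reflection in object space comes down to the standard reflection formula applied directly to vectors that are already in the right space. A minimal sketch, with a hypothetical `Vec3` type; the point is only that no change of basis precedes the lookup:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // hypothetical minimal type

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflection of the (unit) eye vector E about the (unit) object-space bump
// normal N: R = 2(N.E)N - E. Both inputs are already in object space, so
// the result can index an object-aligned environment map directly, with no
// per-vertex un-skinning of the reflection vector.
Vec3 reflect(Vec3 E, Vec3 N) {
    float k = 2.0f * dot(N, E);
    return Vec3{ k*N.x - E.x, k*N.y - E.y, k*N.z - E.z };
}
```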

11-12-2002, 10:52 PM
dorbie: You can add bump maps and object-space normal maps by computing the tangent space normal for the bump map and translating it to object space. I took a similar approach in my plugin.

Object space normal maps are useful for objects and characters, but not for level geometry (unless you go for unique texturing), so you will have to code two different paths, one for each target. That's why I keep it simple and just use tangent space bump mapping.

Two different paths mean two different shaders, more state changes, etc. But maybe the benefits are worth the effort.

11-13-2002, 01:26 AM
Castano, yes, but I think it's equivalent to what I said. I'd hate to think you'd need to reintroduce the coordinate frame per vertex to combine the images. Ideally this would be an image operation only, without reference to the geometry, but whatever works. I think an image-only operation is possible where tangent space is defined by the normal in the vector map and the ordered rotation of the implied texture axes into the frame of that normal.

And it IS useful for some types of geometry. You're talking about a transform per vertex, compared to a transform per object for rigid body stuff. Look at all the stuff in Doom 3, for example: a lot of it is baked and non-repeating (doors and other items). I can do that in object space and still reuse the map by transforming the vectors to object space and keeping an object coordinate frame. I think a lot of general assumptions are being made where people aren't trying to solve the object space problems, so it doesn't get a fair shake. This transform-once-per-object to reuse object space vectors is a good example; you still save on per-vertex transform work.

[This message has been edited by dorbie (edited 11-13-2002).]

11-13-2002, 03:39 AM
Yeah, it's definitely not an image based operation. I combine bumpmaps during extraction for that reason.

There are still some issues when combining bump maps the way you want. That was discussed on the Algorithms ML some time ago: someone said that he had lighting discontinuities at the texture discontinuities, even when the bump maps matched correctly on the edges. That happens because the tangent space bases on both sides of the edge are different: both bases have the same normal, but the tangents are not the same.

If the normalmap on the edge is (0,0,1) there won't be any error, because the transformed vector will be the same on both sides of the edge, but if it points in another direction, you will have lighting discontinuities.

When combining a normal map with a bump map just by adding and renormalizing, you will suffer from the same artifacts. Whether those artifacts are noticeable enough depends on your case, but in mine they were.

And yes, object space is cheaper, I'm not discussing that, I just said that in some cases you need tangent space, and if you do, you will need two different code paths.

11-13-2002, 07:52 AM
Looks like there's a bug in the list, the thread has been forked.

Yep two code paths seems right.

As for combining the bump maps, you need to do a coordinate frame interpolation per texel to get it right if you're working in tangent space, but I still think it can be an imaging operation done post-extraction. The normal defines your coordinate frame in a DOT3 map, and I think you can compute 'local' tangent and binormal vectors using just that normal, given that you are in texture space.

[This message has been edited by dorbie (edited 11-13-2002).]

11-13-2002, 09:07 AM
OK, let me see if I can get this straight. So what you're saying is that the correct way to combine an object space bump map generated from some polybump tool with a tangent space detail normal map is: for each texel in the object space map, compute a tangent basis and transform the corresponding texel of the detail tangent space map by it, in order to have both maps in object space? Or am I just way the hell off? :D


11-13-2002, 10:36 AM
You're exactly right.

P.S. In the Doom 3 thread someone suggested that this is also better for tangent space normal maps, and I think that strictly speaking they're right. The accurate combination is to rotate the bump map into the perturbed tangent space per texel.

Thinking about this you then SUBSTITUTE the vectors instead of adding. Ahh.. it suddenly seems like absolutely the right thing to do.


Compute the bump vector using derivatives, transform it to the local tangent space of each normal, and substitute the new transformed vector.

[This message has been edited by dorbie (edited 11-13-2002).]
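The per-texel substitution dorbie is describing could be sketched as follows. This is a guess at his intent, not a tested recipe: the local frame is derived from the map normal alone by projecting the implied texture X axis into the plane perpendicular to it, and the `Vec3` type and function name are hypothetical.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };   // hypothetical minimal type

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return Vec3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return Vec3{ v.x/len, v.y/len, v.z/len };
}

// Derive a per-texel frame from the map normal N alone: T is the implied
// texture X axis made perpendicular to N, B = N x T. Express the bump
// vector in that frame and SUBSTITUTE it for the original normal.
// Degenerates when N is (nearly) parallel to the X axis.
Vec3 substituteBump(Vec3 N, Vec3 bump /* tangent-space, z-up, unit */) {
    Vec3 X{1, 0, 0};
    float d = dot(N, X);
    Vec3 T = normalize(Vec3{ X.x - d*N.x, X.y - d*N.y, X.z - d*N.z });
    Vec3 B = cross(N, T);
    return normalize(Vec3{ bump.x*T.x + bump.y*B.x + bump.z*N.x,
                           bump.x*T.y + bump.y*B.y + bump.z*N.y,
                           bump.x*T.z + bump.y*B.z + bump.z*N.z });
}
```

With an unperturbed map normal (0,0,1), the derived frame is the identity and the bump vector passes through unchanged, which matches the "image operation only" intuition above.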

11-13-2002, 12:53 PM
You're exactly right.

w00t! :D


11-13-2002, 01:04 PM
That way of combining bumpmaps is something like what I explained here:

And here is a more detailed explanation of the artifacts I was talking about:

11-13-2002, 06:18 PM
Perhaps I'm not understanding the 'edge artifacts'.
If it's what I think you mean, I have seen it before, but it was easily solved by 'expanding' the edges of the normal maps like so:

for ( x=0; x<width; x++ )
for ( y=0; y<height; y++ )
{
    // texel was never written during extraction: average the filled neighbours
    if ( is_vector_NULL( data[x*width+y] ) )
    {
        int num = 0;
        VECTOR tot(0,0,0);
        if ( !is_vector_NULL( data[((x+1+width)%width)*width + y] ) )
        {   tot += data[((x+1+width)%width)*width + y]; num++;   }
        // ...repeat for the (x-1,y), (x,y+1) and (x,y-1) neighbours...
        if ( num )
            edge_data[x*width+y] = tot / num;
    }
    else
        edge_data[x*width+y] = data[x*width+y];
}

This expands them by one pixel, which removes the worst of the artifacts (you will still get them with mipmaps, but the 'edges' can be enlarged further to prevent that).
Those are the only artifacts I've seen; then again, perhaps I've got these 'artifacts' and haven't noticed them yet (because I'm using nice models or something).
Do you have any screenshots of the problem?

11-14-2002, 12:24 AM
zed: no, that's not the artifact I'm talking about. What you are doing can be done much better with an EDT (Euclidean distance transform). The 4-pass linear approximation isn't very difficult to code, and gives really good results.

Have you never seen a bump map that fits perfectly on the edges, but that under some lighting conditions produces small discontinuities?
It's pretty common, because most people do it wrong, but I will look for an example to show you the difference.
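The EDT-style fill castano suggests amounts to giving every empty texel the value of its nearest filled texel. The sketch below uses a two-pass chamfer propagation rather than the 4-pass scheme he mentions, so it is an approximation of an approximation; the function name and data layout are made up for illustration.

```cpp
#include <vector>
#include <utility>

// Fill empty texels with the value of the nearest filled texel, using a
// forward and a backward chamfer pass that propagate nearest-seed
// coordinates (a rough approximation of a Euclidean distance transform).
void edtFill(std::vector<int>& val, std::vector<bool> filled, int w, int h) {
    const int INF = 1 << 28;
    std::vector<std::pair<int,int>> seed(w*h, {-1, -1});
    auto d2 = [&](int x, int y, std::pair<int,int> s) {
        if (s.first < 0) return INF;
        int dx = x - s.first, dy = y - s.second;
        return dx*dx + dy*dy;
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (filled[y*w+x]) seed[y*w+x] = {x, y};
    auto relax = [&](int x, int y, int nx, int ny) {
        if (nx < 0 || ny < 0 || nx >= w || ny >= h) return;
        if (d2(x, y, seed[ny*w+nx]) < d2(x, y, seed[y*w+x]))
            seed[y*w+x] = seed[ny*w+nx];
    };
    for (int y = 0; y < h; ++y)          // forward pass
        for (int x = 0; x < w; ++x) {
            relax(x, y, x-1, y);   relax(x, y, x, y-1);
            relax(x, y, x-1, y-1); relax(x, y, x+1, y-1);
        }
    for (int y = h-1; y >= 0; --y)       // backward pass
        for (int x = w-1; x >= 0; --x) {
            relax(x, y, x+1, y);   relax(x, y, x, y+1);
            relax(x, y, x+1, y+1); relax(x, y, x-1, y+1);
        }
    for (int y = 0; y < h; ++y)          // copy nearest seed's value
        for (int x = 0; x < w; ++x) {
            int i = y*w + x;
            if (!filled[i] && seed[i].first >= 0)
                val[i] = val[seed[i].second*w + seed[i].first];
        }
}
```

Unlike zed's one-pixel expansion, this fills the whole empty region in one shot, so mipmapping never pulls in unfilled texels.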

11-14-2002, 09:53 AM
>>Have you never seen a bump map that fits perfectly on the edges, but that under some lighting conditions produces small discontinuities?<<

No; admittedly, though, all the models I've done have been ones I made myself, and thus have been very simple (does this matter?).
I wouldn't mind having a go at a more complicated model; they do exist on the net, e.g. www.3dcafe.com (http://www.3dcafe.com), but unfortunately they are never uniquely texture mapped (+ bugger me if I'm going to unwrap them myself).

Then again, perhaps I am seeing the problem but didn't realise it was a problem (I'm more the kind of person who, if it looks OK, calls it finished and moves on to the next problem).

11-14-2002, 10:25 AM
A kind of rotating + accumulating + renormalizing of several normal maps from different tangent spaces on a single model: