Object space bump mapping

I’m glad to see someone has finally posted an example of object space bump mapping with skeletal deformation (opengl.org story). This seems like a much more attractive option to me for detail reconstruction of objects. Does anyone have any thoughts on the disadvantages of this approach vs. tangent space normal maps?

Tiling wouldn’t be possible over surfaces that change orientation (not that it’s needed for detail preservation), but I’d expect symmetry to still be exploitable.

[This message has been edited by dorbie (edited 11-11-2002).]

i like that approach, looks much more stable to me than tangent space, as there you have to teach your artist tons of constraints he can’t really verify… this approach lets every model get a map somehow… can’t wait to get home to try some stuff out…

i remember doing this years ago on my gf2mx, just without the bumpmap actually. and i thought hey, we could do that including bumpmapping. then i thought geez, i’d have to get the object space normals… how to get them? i didn’t know…

thanks ati for the normalmapper tool, it saves a lot of trouble. hope it works, we’ll see over the next few days

hm… the ati normal mapper is open source… does anyone know the specs for 3dsmax materials or the like, so we can trace the color, glossiness, specularity etc. of each point on the highres model and store that in the map of the lowres model as well? then we could let the artists do really highres models and just map them completely onto the lowres model…

hm… and then we could calculate the horizon for each texel of the lowres map, for horizon mapping… we would get selfshadowing essentially for free (just some math per pixel of the rendered model, no additional fillrate), and soft at that (too soft possibly, but who cares? it looks sweet)
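just to make the horizon idea concrete, here’s a minimal sketch of the per-pixel test it implies, assuming the horizon elevation has been baked per texel for a few azimuth sectors (the names and the sector count are made up for illustration):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of a horizon-map shadow test. horizon[i] holds the
// precomputed elevation angle (radians above the tangent plane) of the
// occluding horizon in azimuth sector i, baked per texel offline.
const int   kSectors = 8;
const float kTwoPi   = 6.28318530f;

float HorizonShadow(const float horizon[kSectors],
                    float lightAzimuth,    // radians, surface-local frame
                    float lightElevation)  // radians above tangent plane
{
    // Wrap the azimuth into [0,1) and pick the matching sector.
    float t = lightAzimuth / kTwoPi;
    t -= std::floor(t);
    int sector = static_cast<int>(t * kSectors) % kSectors;

    // Lit when the light sits above the stored horizon. The smooth band
    // below it is what gives the soft (maybe too soft) shadow edge.
    const float softness = 0.1f;  // width of the penumbra band, radians
    float d = (lightElevation - horizon[sector]) / softness;
    return std::min(1.0f, std::max(0.0f, d));  // 0 = shadowed, 1 = lit
}
```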

then we could move over to indexed shadowmapping or something like that, hehe. at least we could drop the fillrate-intensive shadow volumes and take soft shadowmaps instead, without caring about artifacts, as we don’t need the shadow comparison to start near the surface… we’d already have proper selfshadowing through the horizon maps…

some inspiration from a crazy brain

the major disadvantage is, like u say, u need unique mapping and it must be 100% correct. this is very time consuming to do (esp. if the person that does the texture mapping is yourself, like me)
check an IOTD on www.flipcode.com from about a month ago by charles bloom; there’s a link there to a paper on an automatic unwrapping method.
also, as each polygon must have its own unique texture area, the quality is usually much lower, eg for a person u will need to map him/her out onto the whole 512x512 texture and can’t mirror-map (which would give twice the image quality).
personally i’ve been using object space mapping for everything since 1999 (WRT diffuse/decal textures), even on my tnt1.
it is the future, we only share textures etc. because A/ the artists are lazy B/ the hardware can’t handle so many

well, mirroring is not much of a problem, just mirror the whole mesh (check the ati normal mapper demonstration model, it is only half the model… even saves space on the model side)
(i mean, if the texture is mirrorable, then the model has to be symmetric as well (in base position))
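one catch worth spelling out, as a minimal sketch assuming the mesh is mirrored across the X = 0 plane (the function name and flag are hypothetical): the object space normals sampled on the mirrored half need their x component flipped, or that side is lit wrong.

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical sketch: reusing one object-space normal map for a mesh
// mirrored across the X = 0 plane. Positions mirror trivially, but the
// sampled object-space normal must be reflected across the symmetry
// plane too, otherwise the mirrored half faces the wrong way.
Vec3 MirroredObjectSpaceNormal(Vec3 n, bool isMirroredHalf)
{
    if (isMirroredHalf)
        n.x = -n.x;  // reflect the normal across the X = 0 plane
    return n;
}
```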

OK, so it seems there is a general belief that this is better, and I agree. You say this is time consuming, and I’m really just considering it as a better approach for object simplification with detail preservation. That should be no more time consuming than the tangent space approach, since it’s auto generated.

Although when I think about it, the Doom 3 dual normal & bump map would be a pain to implement. You couldn’t add the vectors as easily, because one is in tangent space (and always would be) and the other is in object space. It could still be done, I think, by transforming the tangent space vector into object space, making some simple (and I think valid) assumptions about axis alignment of the tangent and binormal, then adding the components and renormalizing. It’s certainly more complex, though.
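Roughly, something like this, hypothetical names and all, assuming a per-fragment tangent frame (T, B, N) is still around for the detail layer:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Sketch: rotate the tangent-space detail normal into object space via
// the interpolated frame (T, B, N), then add it to the object-space
// base normal and renormalize, per the approximation described above.
static Vec3 CombineNormals(Vec3 baseObj,   // object space normal map sample
                           Vec3 detailTan, // tangent space detail sample
                           Vec3 T, Vec3 B, Vec3 N)
{
    Vec3 detailObj = {
        T.x * detailTan.x + B.x * detailTan.y + N.x * detailTan.z,
        T.y * detailTan.x + B.y * detailTan.y + N.y * detailTan.z,
        T.z * detailTan.x + B.z * detailTan.y + N.z * detailTan.z,
    };
    // Summing then renormalizing is an approximation, but a cheap one.
    return Normalize({ baseObj.x + detailObj.x,
                       baseObj.y + detailObj.y,
                       baseObj.z + detailObj.z });
}
```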

[This message has been edited by dorbie (edited 11-11-2002).]

Hmm, I like the idea of not having to transform the light vector per vertex, but is it really worth the cost of having to throw out tileable normal maps (at least for non-planar surfaces)? How much video memory do you guys have on your cards? Mine seems to be limited to 128 MB.

– Zeno

Again, this is mainly about detail preserving simplification as far as I’m concerned. I think there’s room for both. I think there’s less ambiguity with the object space representation; it seems like it would be much better behaved in general, but to each their own.

Oooooooh! That’s the kind of normal map generator I was going to write for LightWave3D [7]. I knew there was a difference between object space and tangent space… I just didn’t know what it was yet.

I would definitely prefer object space bump mapping over tangent space when texturing characters (this is what I’m into). It would be so much faster!

Originally posted by dorbie:
[b]OK, so it seems there is a general belief that this is better, and I agree. You say this is time consuming, and I’m really just considering it as a better approach for object simplification with detail preservation. That should be no more time consuming than the tangent space approach, since it’s auto generated.

Although when I think about it, the Doom 3 dual normal & bump map would be a pain to implement. You couldn’t add the vectors as easily, because one is in tangent space (and always would be) and the other is in object space. It could still be done, I think, by transforming the tangent space vector into object space, making some simple (and I think valid) assumptions about axis alignment of the tangent and binormal, then adding the components and renormalizing. It’s certainly more complex, though.

[This message has been edited by dorbie (edited 11-11-2002).][/b]

the concept of ‘adding’ together 2 normals is easy to grasp, but personally i found actually doing it very difficult (it has to be done per pixel; each pixel has to be ‘reorientated’)
but then again i’m terrible at maths and i’m sure most of you won’t have difficulties

I just read some of that article. I hadn’t thought of the reverse effect it might have with skinning. From what I understand, and what zed just said, I think it would be faster for skinned meshes to deal with tangent space, but for static objects object space would be ideal.

“Why ya have to go and make things so complicated”

I think it depends which space you decide to do your fragment calculations in. You have a choice :wink:

i think they are perfect for anything in an engine that derives from some sort of RigidBody…

at least, i’ll use it for my spaceships… and the other vehicles, etc…

i like that approach, looks much more stable to me than tangent space, as there you have to teach your artist tons of constraints he can’t really verify…

Uh, I dunno much about this so maybe I should stfu/rtfa, but wasn’t tangent space invented to get rid of the object-space constraint that you need to be careful not to rotate, aniso-scale etc. the height/normal map?
Of course if it’s autogenerated you can make sure that “V is up”, but then you’re not explaining anything to artists anyhow.
Not trying to sound defensive – I just had the impression tangent space was a must for flexibility; but I haven’t used it yet.
And what zeno said

Again, you’re thinking of texture as something applied to an object in a texturing process, rather than as detail generated from the object as part of simplification. With a vector map I don’t think you have the aniso mapping & scaling issues you suggest, but it depends on your intent.

Tangent space makes sense when applying a texture as wallpaper (Blinn invented bump mapping WITH a tangent space formulation in one go; there was no iteration), but object space seems more attractive for detail preservation.

As for consistency, I’m talking about the interpolation of the coordinate frame between vertices in a tangent space implementation, and using that as the basis for the normal map, which is not present with an object space representation. Of course when you deform the mesh this begins to kick in, but it seems like a MUCH better starting point to me and, as I said, better behaved.

Sorry for my idiocy, but what do you mean by ‘better behaved’ dorbie?

-Mezz

Less prone to artifacts and interpolation approximations due to vector lerp.
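To put a rough number on it (back-of-the-envelope, nothing measured): two unit normals an angle theta apart lerp to a midpoint of length |(n0 + n1)/2| = cos(theta/2), so normals 60 degrees apart shorten to about 0.87 and 90 degrees apart to about 0.71. Without per-pixel renormalization that reads as darkening, and a tangent space implementation interpolates the whole T/B/N frame this way, not just one vector.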

[This message has been edited by dorbie (edited 11-12-2002).]

The main reason to go with object space or texture space normals has to do with reuse. Texture space lets you decouple the normals from the object, so the texture is tileable/repeatable and can be applied to different objects. Object space is just that: tied to the object. When the object moves, the normal map must be “updated” to account for the change in the object.

In the article/demo you guys are talking about, the observation is made (and it is generally acceptable to do this) that instead of updating the normals to match the object, you update the light by the opposite of the object’s transform. For rigid objects this is a rather nifty idea. For skeletal objects it starts to become a pain, since you now have multiple transforms (one for each bone, or possibly more with animation blending). Now you must reverse the transform for each bone, update the light vector accordingly, and weight the light vectors.

So the CPU is really starting to get involved, since Vertex Programs STILL DON’T TAKE ARBITRARY SETS OF DATA!!! They are still one vertex in and one vertex out. Sure you can get more data over the bus by sticking it somewhere else, but it’s still weird. The number of operations to skin a mesh is a lot (ninety-something) without lighting. And of course you have to stick all the damn matrices somewhere. In general everything is a big mess.
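In code, that per-vertex CPU work looks roughly like this (a sketch with made-up names, assuming a directional light and pure-rotation bone matrices):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };  // rotation part of a bone transform

// Multiplying by the transpose inverts a pure rotation, i.e. this takes
// v from model space back into the bone's rest space.
static Vec3 MulTranspose(const Mat3& r, Vec3 v)
{
    return { r.m[0][0] * v.x + r.m[1][0] * v.y + r.m[2][0] * v.z,
             r.m[0][1] * v.x + r.m[1][1] * v.y + r.m[2][1] * v.z,
             r.m[0][2] * v.x + r.m[1][2] * v.y + r.m[2][2] * v.z };
}

// Per-vertex CPU work for a skinned, object-space-bumped mesh: instead
// of re-baking the normal map, carry the light direction backwards
// through each influencing bone and blend by the skinning weights.
static Vec3 SkinnedLightVector(const Mat3* boneRot,   // current bone rotations
                               const int* bones,      // indices, nWeights long
                               const float* weights,  // blend weights, sum to 1
                               int nWeights,
                               Vec3 lightDirModel)    // light dir, model space
{
    Vec3 l = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < nWeights; ++i) {
        Vec3 li = MulTranspose(boneRot[bones[i]], lightDirModel);
        l.x += weights[i] * li.x;
        l.y += weights[i] * li.y;
        l.z += weights[i] * li.z;
    }
    float len = std::sqrt(l.x * l.x + l.y * l.y + l.z * l.z);
    return { l.x / len, l.y / len, l.z / len };
}
```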

My final thought about object space: take different objects and try to use the same texture. Sure you can, but the normal texture is different for each one. Sure the combiners have less work, but who cares. The one thing I am glad to see is the increase in the number of combiners and the general increase in speed. I still wish you could use more than 4 textures on a GeForce 4, but heck, the 8 combiners are nice. For a single object repeated multiple times, object space doesn’t buy you much either: you can share the texture, but you still have to mangle the transformed light vectors per instance. But then again you have to do the same to get the light vector into tangent space.

Bah.

Devulon

The point about reuse has already been made in the first post.

Animating an object with an object space representation is largely similar to animating an object with tangent space mapping. The object space normal map doesn’t need to be recreated under deformation any more than the tangent space map does. Again, it depends on your assumptions about the space you’ll be performing the lighting in. There is also a deformed space between the object and tangent transformations, which I described some weeks ago in a post on opengl.org.

Of course for rigid body stuff it’s a complete no-brainer; object space is very nice and eliminates the per vertex transform of the light vector to tangent space.
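For contrast, a sketch of the rigid case, reusing the Vec3/MulTranspose helpers from the skinning sketch above (objectRotation and lightDirWorld are hypothetical names for the object’s current rotation and the world-space light direction):

```cpp
// Rigid body: one inverse-rotate of the light per object per frame,
// instead of a per vertex light transform into tangent space.
Vec3 lightObj = MulTranspose(objectRotation, lightDirWorld);
// Feed lightObj to the fragment stage (e.g. as a constant combiner
// color) and dot it straight against the sampled object space normals.
```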

You have to realize that this stuff gets really nasty when you have significant changes in the coordinate frame between vertices. The more you simplify, the worse the interpolation errors get; maybe you could bake some error correction into the tangent space normal map, but my head hurts just thinking about it. Object space maps just avoid all of that. You’re only left to worry about the local approximations of the light & view vectors.

[This message has been edited by dorbie (edited 11-12-2002).]

I wrote some plugins for 3dsmax a long time ago. They aren’t completely finished, because I discontinued development, but they are functional and pretty easy to use. If there seems to be enough interest, I will probably finish them and release the code.

One of the plugins generates normal maps for walls, and the other one does something like the ATI plugin: it traces rays in the direction of the normals of a low resolution surface and captures the detail of the high resolution object. You can extract object and tangent space normals and also colors. I was planning to extract displacements within an n-patch surface as well, but as I said, I’ve stopped development for now.
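The capture loop boils down to roughly this (a simplified sketch, not the actual plugin code; IntersectHighRes and the texel accessors are stand-ins):

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for the plugin's ray query against the high resolution mesh.
struct RayHit { bool hit; Vec3 normal; Vec3 color; };
RayHit IntersectHighRes(Vec3 origin, Vec3 dir);  // assumed to exist

// For every texel of the low-res model's unique UV layout, shoot a ray
// along the interpolated low-res normal and record what the high-res
// surface has there: an object space normal, and optionally a color.
void CaptureDetail(int w, int h,
                   Vec3 (*texelPosition)(int u, int v),  // low-res surface point
                   Vec3 (*texelNormal)(int u, int v),    // interpolated normal
                   std::vector<Vec3>& normalMap,
                   std::vector<Vec3>& colorMap)
{
    normalMap.resize(std::size_t(w) * h);
    colorMap.resize(std::size_t(w) * h);
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            RayHit r = IntersectHighRes(texelPosition(u, v), texelNormal(u, v));
            std::size_t i = std::size_t(v) * w + u;
            // Fall back to the low-res normal where the ray misses.
            normalMap[i] = r.hit ? r.normal : texelNormal(u, v);
            colorMap[i]  = r.hit ? r.color  : Vec3{ 0.0f, 0.0f, 0.0f };
        }
    }
}
```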

You can checkout the plugins and a simple viewer here:
http://talika.eii.us.es/~titan/magica/