Bump Mapping in world space

All the static geometry that has to be bump mapped could have its normal maps encoded in world space. This is valid as long as no two objects share the same normal map, and as long as the normal map does not tile.

How can you compute your world-space normal map? (The problem is that for each pixel of the tangent-space normal map you have to find which face references it, then transform the normal into world space …)

Has anyone ever experimented with this?

Thanks,
SeskaPeel.

That’s what you have to do. But why would you? To simplify the matrix work? You still have to transform the view and light vectors to texture space, so it’s pretty restrictive for what you gain (I’ve deliberately avoided saying tangent space here). I can see the value of object-space maps, and I think they can be better than the tangent-space equivalents, especially for poly-reduced models, and they can still be deformed. But I don’t see what extra a world-space map gives you. It’s an object-space representation with an identity model matrix, and that only really affects the vertex program, if you even bother with the difference: world-space view and light vectors passed straight through vs. a model matrix multiply.

I think you’re really looking for object-space bump mapping. It was discussed here after an article on the front page; if you search for the link you’ll find it. The simple answer is: do nothing, just pass your vectors right on through, and don’t even bother with tangent and binormal vectors (those are required only with deformation in an object-space or world-space scenario). If you add a model matrix multiply of the vectors on the way through the vertex program, you can reuse the bump map on any number of models under matrix transformation.
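
To make the “do nothing” part concrete, here is a minimal C sketch of what the per-fragment work reduces to with an object-space map. The vec3 type and function names are illustrative, not from any API:

typedef struct { float x, y, z; } vec3;

/* Decode a stored normal from the usual [0,255] scale-and-bias
   encoding back to [-1,1]. */
static vec3 decode_normal(const unsigned char texel[3])
{
    vec3 n = { texel[0] / 127.5f - 1.0f,
               texel[1] / 127.5f - 1.0f,
               texel[2] / 127.5f - 1.0f };
    return n;
}

/* Diffuse term with an object-space map: the decoded normal is dotted
   directly against the object-space light vector; no tangent frame. */
static float object_space_diffuse(const unsigned char texel[3], vec3 l_obj)
{
    vec3 n = decode_normal(texel);
    float d = n.x * l_obj.x + n.y * l_obj.y + n.z * l_obj.z;
    return d > 0.0f ? d : 0.0f; /* clamp, as usual for diffuse */
}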

Yes, yes, object space and world space are the same for static geometry. I was thinking of object space actually.

But that doesn’t answer my question: how can I build those object-space normal maps? Is the only way to do it what I first described (finding, for each pixel, which poly uses it)?

SeskaPeel.

For each texel, not for each pixel; and this is how tangent-space maps are built too, not just object-space maps.

You generate a skin texture, and for each texel you raycast from the simple model back to the complex model to figure out what the normal is. If you have a tangent-space map you want to reformulate as an object-space map, you generate a skin texture for the model and, for each skin texel, locate the bump map location, read the vector, then transform the original bump map vector from tangent space to world space. That requires binormal and tangent vectors, because you need to know the coordinate frame for the transformation.
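
As a rough per-texel sketch of that tangent-to-world transformation in C (the lookup of the owning face’s frame is assumed to be done already; all names are illustrative):

typedef struct { float x, y, z; } vec3;

/* Rotate a tangent-space normal n into the object/world frame:
   n_out = n.x * T + n.y * B + n.z * N, where T, B, N are the
   interpolated tangent, binormal and normal at this texel.
   Run once per texel in the offline bake. */
static vec3 tangent_to_world(vec3 n, vec3 T, vec3 B, vec3 N)
{
    vec3 r = {
        n.x * T.x + n.y * B.x + n.z * N.x,
        n.x * T.y + n.y * B.y + n.z * N.y,
        n.x * T.z + n.y * B.z + n.z * N.z
    };
    return r;
}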

Which you need depends on whether you’re doing bump preservation or more traditional bump mapping. Ironically, bump preservation would be easier to implement in this case.

Another plus for object-space mapping is that you don’t have to store a 3x3 matrix at each vertex.

I’ve given some thought to it (object-space bump mapping) recently, and if you can use it, I see no good reason not to. In theory it should be faster than tangent-space bump mapping.

I can see two problems however:

  • you can’t morph the geometry, since you cannot update the object-space normals in real time, so it wouldn’t suit things like skeletal deformation on characters very well. On the other hand, characters assembled from a set of objects (like arm, head, torso…), each with its own transform matrix, should still work well.
  • it somewhat implies that the textures encoding the per-pixel normals in object space have to be unique across all the triangles of the object. Things like repeating patterns, sharing a bump-map area, or mirroring are no longer possible.

On the other hand, there’s no need to send the tangent-space matrix, and no need to worry about the robustness of the tangent vectors; you just have to transform the light to object space and set it as a constant.
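
For a rigid transform that is only a handful of lines per object per frame. A minimal sketch, assuming ARB_vertex_program (with the entry point already loaded) and a column-major rigid model matrix; all names here are mine:

#include <GL/gl.h>
#include <GL/glext.h>

/* Bring the world-space light position into object space and hand it
   to the vertex program as local parameter 0. For a rigid model
   matrix m (OpenGL column-major): p_obj = R^T * (p_world - t). */
static void set_object_space_light(const float m[16],
                                   float lx, float ly, float lz)
{
    float dx = lx - m[12], dy = ly - m[13], dz = lz - m[14];
    float ox = dx * m[0] + dy * m[1] + dz * m[2];
    float oy = dx * m[4] + dy * m[5] + dz * m[6];
    float oz = dx * m[8] + dy * m[9] + dz * m[10];
    glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 0,
                                 ox, oy, oz, 1.0f);
}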

Did anybody implement it and has some thoughts to share?

Y.

I think it’s perfect for rigid bodies: bullets, powerups, guns, items and similar stuff, particle-system particles, etc.

Those objects normally don’t need repeating textures, don’t need any morphing, and all they need is to be fast.

I think it’s perfect for rigid bodies.

I think the article mentioned here before had some solution for animated meshes. Some other thoughts not mentioned here before:

-Object-space bump mapping gives much better quality if you don’t normalize the light vector. This is because the light vector remains relatively constant in object space (unless the light is inside the object), but in tangent space it changes a lot even over single polygons.

-Certain specular solutions rely on tangent space and don’t work in object-space. This is the main reason I’m sticking with tangent space at the moment.

-Ilkka

I remember that solution/demo; it was published here, I dunno if on the front page or just in a post. The thing is, the guy was deforming an object-space mesh, so it was more similar to tangent-space implementations than different, because he had to deform the coordinate frame along with the deformation anyway, and therefore transform the vectors through the frame in the vertex program. A rigid body would just require transforming the vector through the inverse modelview (starting with the light position in eye space) once per object.

Specular solutions relying on tangent space? I’d have thought your half vectors and view vectors would also be better behaved.

[This message has been edited by dorbie (edited 08-22-2003).]

Well, I mess up my normalization cubemap to imitate the specular exponent; this way I can get an approximation of any exponent. Pretty much a hack, but I like the results.

-Ilkka

You’re free to do per-fragment lighting in any legitimate space.

Tangent-space normalmaps are most efficient for many applications, but trying to get a smooth basis can be annoying. Even if you store your normals in tangent-space, you may need to transform them into another space to perform lighting calculations (e.g. cube map bumped reflection mapping).

Object-space normalmaps have the nice property of not being dependent on tessellation. They can also be deformed with skinning techniques, just as with skinning and per-vertex lighting.

I can’t think of any great advantage to world-space normalmaps though, unless the cost of transforming the light into object space whenever it moves w.r.t. the object is objectionable.

Thanks -
Cass

by JustHanging :
“…I mess up my normalization cubemap to imitate the specular exponent, this way I can get an approximation of any exponent…”

Could you explain how you do this?

Originally posted by DarkWIng:
by JustHanging :
“…I mess up my normalization cubemap to imitate the specular exponent, this way I can get an approximation of any exponent…”

Could you explain how you do this?

I think what JustHanging means is that you can encode any function of direction into a cube map.

One common thing to do is encode

f(x,y,z) == normalize(x,y,z),

but you could also encode

f(x,y,z) == pow(dot(normalize(x,y,z), H), shininess),

as long as H and shininess are constant in the space you’re using (usually world-space or light-space).

This is all true, even if it’s not how JustHanging uses it.
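
As an illustration (not necessarily how anyone in this thread actually does it), baking that second function into the +X face of a cube map could look like this in C; the texel-to-direction signs follow the GL spec for that face:

#include <math.h>

/* Fill one byte-luminance face (+X) of size*size texels with
   f(v) = pow(max(dot(normalize(v), H), 0), shininess),
   where H is fixed at bake time, so it must be constant in the
   space the cube map is looked up in. */
static void bake_pos_x_face(unsigned char *dst, int size,
                            float hx, float hy, float hz,
                            float shininess)
{
    float hl = sqrtf(hx * hx + hy * hy + hz * hz);
    hx /= hl; hy /= hl; hz /= hl; /* normalize H once */

    for (int t = 0; t < size; ++t) {
        for (int s = 0; s < size; ++s) {
            /* texel center to direction; for +X: x = 1, y = -tc, z = -sc */
            float sc = 2.0f * (s + 0.5f) / size - 1.0f;
            float tc = 2.0f * (t + 0.5f) / size - 1.0f;
            float x = 1.0f, y = -tc, z = -sc;
            float len = sqrtf(x * x + y * y + z * z);
            float d = (x * hx + y * hy + z * hz) / len;
            float f = d > 0.0f ? powf(d, shininess) : 0.0f;
            dst[t * size + s] = (unsigned char)(f * 255.0f + 0.5f);
        }
    }
}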

Cass

Please answer one question for me. I used glDrawElements for bump mapping and it works. But when I use glDrawArrays, my model looks as if I were using polygon normals rather than vertex normals. In a word, it looks as if I have FLAT shading rather than SMOOTH. But I need to use glDrawArrays here, so how do I solve this problem?

Sorry for my English.

P.S. If you want to see our work, you can visit www.bev-team.com

Cass has the right idea. However, I do it slightly differently to allow bump mapping on low-end HW. Instead of normalize(x, y, z) I store normalize(-x, -y, 1-z)*sharpness/k in the cubemap. Sharpness is a value representing the sharpness of the highlight. When I render specular, the specular intensity is calculated by something like

spec = 1 - ((cubemap at half vector) dot normalmap)*k

k is 1 for materials with a low exponent, but for sharper highlights I use 2 or 4 to avoid some cubemap clamping problems.
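
If I read that right, the bake for one texel would go something like this in C (a reconstruction of the formula above, not JustHanging’s actual code); the final clamp is where the clamping problems come from:

#include <math.h>

static float clamp01(float v)
{
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}

/* For a normalized cube map direction (x,y,z), store
   normalize(-x, -y, 1-z) * sharpness / k, range-compressed to [0,255]
   like a normal map. At render time:
   spec = 1 - dot(cubemap(H), normalmap) * k. */
static void bake_hack_texel(unsigned char out[3],
                            float x, float y, float z,
                            float sharpness, float k)
{
    float vx = -x, vy = -y, vz = 1.0f - z;
    float len = sqrtf(vx * vx + vy * vy + vz * vz);
    float scale = (len > 1e-6f) ? sharpness / (k * len) : 0.0f;
    vx *= scale; vy *= scale; vz *= scale;
    /* components outside [-1,1] cannot be represented: clamping artifacts */
    out[0] = (unsigned char)(clamp01(vx * 0.5f + 0.5f) * 255.0f);
    out[1] = (unsigned char)(clamp01(vy * 0.5f + 0.5f) * 255.0f);
    out[2] = (unsigned char)(clamp01(vz * 0.5f + 0.5f) * 255.0f);
}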

Now you should see why I call this a hack. Here’s how it looks: http://www.hut.fi/~ikuusela/images/goodspecular.jpg

-Ilkka

To follow up on the discussion, I found this page: http://www.geocrawler.com/archives/3/4856/2002/3/0/8224179/

It talks about one or two things that haven’t been mentioned here.

I’m still stuck on how to “transform a tangent-space normal back to object space in the fragment program” (for cube-map bumped reflection mapping, using tangent-space normal maps, as Cass guessed).

SeskaPeel.

I just replied to the same question in another thread.

If you have the tangent-space matrix in tm[0-2], then you do this to transform the light into tangent space:

# tlite = M * lite, where rows tm[0..2] are tangent, binormal, normal
DP3 tlite.x, lite, tm[0];
DP3 tlite.y, lite, tm[1];
DP3 tlite.z, lite, tm[2];

However, you can just as well transform the normal from tangent space back to object space using the same matrix, transposed:

# norm = tnorm.x * tm[0] + tnorm.y * tm[1] + tnorm.z * tm[2]
MUL norm.xyz, tnorm.x, tm[0];
MAD norm.xyz, tnorm.y, tm[1], norm;
MAD norm.xyz, tnorm.z, tm[2], norm;

Hope I got the syntax mostly right :)
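
In C terms the two snippets are just a matrix multiply and its transpose, with rows tm[0..2] holding tangent, binormal and normal. Because the frame is orthonormal (or nearly so), the transpose is the inverse, which is why the same three rows work in both directions. A sketch with illustrative types:

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* The DP3 version: tlite = M * lite, light into tangent space. */
static vec3 light_to_tangent(vec3 lite, const vec3 tm[3])
{
    vec3 r = { dot3(lite, tm[0]), dot3(lite, tm[1]), dot3(lite, tm[2]) };
    return r;
}

/* The MUL/MAD version: norm = M^T * tnorm, normal back out of
   tangent space. */
static vec3 normal_to_object(vec3 tnorm, const vec3 tm[3])
{
    vec3 r = {
        tnorm.x * tm[0].x + tnorm.y * tm[1].x + tnorm.z * tm[2].x,
        tnorm.x * tm[0].y + tnorm.y * tm[1].y + tnorm.z * tm[2].y,
        tnorm.x * tm[0].z + tnorm.y * tm[1].z + tnorm.z * tm[2].z
    };
    return r;
}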

Thanks for your two answers that make things quite clear.

SeskaPeel.