Bump mapping

Hi!

Is it true that you can’t repeat a bump map on a large polygon using dot3 bump mapping and register combiners? I think I’ve read that in some document, and it doesn’t work in my own app either. What do you do if you have a large wall in your map where the brick texture is repeated several times, and you want your bump map to match this brick texture? Is there a way to get this working?

Thanks
LaBasX2

should work nicely if the u and v directions of the map are perpendicular in worldspace and you use a normalization cube map for renormalizing the point-to-light vector per pixel…
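
in case it helps, building such a normalization cube map looks roughly like this ( a sketch, not tested code — check the face sign conventions against the ARB_texture_cube_map spec, and it assumes the cube map enums from that extension are available ):

#include <GL/gl.h>
#include <math.h>
#include <stdlib.h>

/* direction pointed at by texel (s,t) of a cube map face,
   with s,t already in [-1,1]; face order: +x,-x,+y,-y,+z,-z */
static void faceDir(int face, float s, float t, float d[3])
{
    switch (face) {
    case 0: d[0] =  1; d[1] = -t; d[2] = -s; break; /* +x */
    case 1: d[0] = -1; d[1] = -t; d[2] =  s; break; /* -x */
    case 2: d[0] =  s; d[1] =  1; d[2] =  t; break; /* +y */
    case 3: d[0] =  s; d[1] = -1; d[2] = -t; break; /* -y */
    case 4: d[0] =  s; d[1] = -t; d[2] =  1; break; /* +z */
    case 5: d[0] = -s; d[1] = -t; d[2] = -1; break; /* -z */
    }
}

/* fill the currently bound cube map: each texel stores the normalized
   direction to itself, range-compressed from [-1,1] to [0,255] */
void buildNormalizationCubeMap(int size)
{
    unsigned char *img = malloc(size * size * 3);
    int face, i, j, k;

    for (face = 0; face < 6; face++) {
        for (j = 0; j < size; j++) {
            for (i = 0; i < size; i++) {
                float s = 2.0f * (i + 0.5f) / size - 1.0f;
                float t = 2.0f * (j + 0.5f) / size - 1.0f;
                float d[3], len;
                faceDir(face, s, t, d);
                len = (float)sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                for (k = 0; k < 3; k++)  /* normalize + bias to [0,1] */
                    img[(j*size + i)*3 + k] =
                        (unsigned char)(255.0f * (0.5f * d[k]/len + 0.5f));
            }
        }
        /* the six face enums are consecutive, in the order above */
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, img);
    }
    free(img);
}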

Originally posted by davepermen:
should work nicely if the u and v directions of the map are perpendicular in worldspace and you use a normalization cube map for renormalizing the point-to-light vector per pixel…

If it is just a wall, it would be better to use object-space bump mapping.
No normalization cube map, no tangent-space light vectors.
And no problem.
Alexei.


Is it true that you can’t repeat a bump map on a large polygon using dot3 bump mapping and register combiners?

No.


If it is just a wall, it would be better to use object-space bump mapping.
No normalization cube map, no tangent-space light vectors.

…and have a unique normal map for each wall,
and recompute the normal map every time non-static geometry changes or moves. And you still need the cube map for large triangles with a close light: object-space light vectors become denormalized under interpolation just as tangent-space light vectors do.
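
To make that concrete (just a sketch, the names are made up): for a point light you get a per-vertex vector even in object space, and interpolation shortens it across a big triangle.

typedef struct { float x, y, z; } Vec3;

/* lightObj = inverse(model matrix) * light position, computed once
   per object; then, per vertex: */
Vec3 pointToLight(Vec3 lightObj, Vec3 vertex)
{
    Vec3 L;
    L.x = lightObj.x - vertex.x;
    L.y = lightObj.y - vertex.y;
    L.z = lightObj.z - vertex.z;
    /* deliberately NOT normalized here: the rasterizer interpolates it
       across the triangle, and a vector interpolated between two unit
       vectors is shorter than unit length -- hence the cube map */
    return L;
}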


And no problem.

For me this method is one big problem.
Am I wrong?


Michail, you are right.
However, I didn’t say that the method is better than tangent-space bump mapping. I said “if it is just a wall,…”. Ok, in other words, it depends on the particular requirements of your application. Yes, you need a unique normal map for each object. But a few different bricks… And I believe a wall is usually a static object, so there’s no need to recompute anything.
I also agree there are problems with close lights. And what if you don’t use them?
And what about the speed you can gain by using object-space bump mapping, with fewer passes than the more robust method needs?
So it depends…
Alexei.

Thanks for your help!

I’ve found a bug in my code. Repeating bump maps really does work, but of course you have to set the bump map’s texture wrap parameters to GL_REPEAT, while the distance attenuation texture needs GL_CLAMP. That was the thing I was missing.
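
For reference, the state I had wrong was roughly this (a sketch; bumpTex and attenTex stand in for my texture objects):

/* the tiled bump/normal map may repeat across the wall */
glBindTexture(GL_TEXTURE_2D, bumpTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

/* the distance attenuation map must clamp, or the falloff
   wraps around and repeats as well */
glBindTexture(GL_TEXTURE_2D, attenTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);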


And what about the speed you can gain by using object-space bump mapping, with fewer passes than the more robust method needs?

All that you can do in object space you can do in tangent space with the same number of passes. The number of passes doesn’t depend on the space. The only difference I can see is that in the tangent-space case you have to transform L into tangent space (3 dot products per vertex).

and yes, it all depends on the application’s needs.
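
Those 3 dot products, as a sketch (Vec3 and the per-vertex basis T, B, N are assumed to come from your mesh setup):

typedef struct { float x, y, z; } Vec3;

static float dot3(Vec3 a, Vec3 b)
{
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

/* rotate the object-space light vector L into tangent space,
   given the per-vertex tangent T, binormal B and normal N */
Vec3 toTangentSpace(Vec3 T, Vec3 B, Vec3 N, Vec3 L)
{
    Vec3 Lt;
    Lt.x = dot3(T, L);  /* component along u */
    Lt.y = dot3(B, L);  /* component along v */
    Lt.z = dot3(N, L);  /* component along the surface normal */
    return Lt;
}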

Originally posted by Michail Bespalov:
The number of passes doesn’t depend on the space. The only difference I can see is that in the tangent-space case you have to transform L into tangent space (3 dot products per vertex).

Yes, it doesn’t depend on the space; it depends on the particular technique we use for per-pixel lighting. If we calculate lighting in tangent space, we supply a per-vertex tangent-space light vector, which must be interpolated and renormalized between vertices; we use a normalization cube map for that. In object space the light vector is one for the whole object. We have to normalize it only once, on the CPU. So instead of supplying a normalization cube map to the register combiners, we can supply any other texture.
Can that decrease the number of passes? I think yes.
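
For an infinite light it could look roughly like this (a sketch assuming NV_register_combiners, with the entry points already fetched; Lx, Ly, Lz stand for the components of the CPU-normalized object-space light direction):

/* light direction, normalized once on the CPU, range-compressed to
   [0,1] and handed to the combiners as a constant color */
GLfloat c[4] = { 0.5f*Lx + 0.5f, 0.5f*Ly + 0.5f, 0.5f*Lz + 0.5f, 1.0f };
glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, c);

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

/* combiner 0, RGB: spare0 = expand(normal map) . expand(const color) */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

/* final combiner: pass spare0 through (A*B with B = 1) */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);

glEnable(GL_REGISTER_COMBINERS_NV);

The texture unit that would have held the normalization cube map is now free for something else, e.g. an attenuation map.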
Alexei.

this is not true if you have a point light source which can move around ( otherwise you can do a lightmap, and yes, then you save some passes… )

if you have a directional light you don’t need to renormalize it with a cube map either… why? because the possible error is smaller than the error of the 8-bit-per-component normals themselves

but the moment you use a point light, you need the normalization cube map when it gets near the surface…


In object space the light vector is one for the whole object.

Only for an infinite light source.

Object-space bump mapping is cheaper under certain conditions. Tangent-space bump mapping is far more general and not terribly more expensive on GeForce hardware.

Originally posted by davepermen:

if you have a directional light you don’t need to renormalize it with a cube map either… why? because the possible error is smaller than the error of the 8-bit-per-component normals themselves

I’m not so optimistic about that. The error would be significant in the case of non-smooth geometry.
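A quick back-of-the-envelope calculation: linearly interpolating halfway between two unit vectors that differ by an angle theta gives a vector of length cos(theta/2). A tiny program to see the numbers (a sketch):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* worst-case shortening of an interpolated unit vector for a
       few angles between adjacent per-vertex light vectors */
    double deg;
    for (deg = 5.0; deg <= 45.0; deg += 10.0) {
        double len = cos(deg * 0.5 * 3.14159265358979 / 180.0);
        printf("%4.0f deg: |L| = %.4f (%.2f%% short)\n",
               deg, len, 100.0 * (1.0 - len));
    }
    return 0;
}

At 5 degrees the error (about 0.1%) is indeed below the ~0.4% quantization step of an 8-bit normal, which is your smooth case; at 45 degrees it is about 7.6%, which is mine.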
Alexei.

i have nonsmooth geometry ( looking like a tie fighter from star wars with thicker wings… for example ), and it looks nice… if you have curves, you normally round them off with your vertices at design time so that they look good; if it is a “hard” curve you use many small faces… else fewer, bigger ones… and if the mesh looks good with the “emulated” curves to you as the designer, then it looks good to the coder, because the error is simply small enough to forget… and if the ****ing light gets too near, you can use a second pass with a cube map… but for most meshes in the scene, except possibly the one near the light, you don’t need it… you really have to move the light right up to the object to see the error… and the mesh has to be near the cam… i think for this case you can switch… far-away objects don’t even get a bump map because you can’t see it…

lod is not just good for quadtrees… it’s good for textures too ( mipmaps ) and for rendering precision ( ppl_with_bump, pvl_with_bump, pvl, pol, clippedagainstfarclipplane… )

that’s how it is… life is hard for coders today…

but it’s your job to get a nice thing fast… look at X-Isle… grass for about 10 meters… then it is flat… trees for about 20 meters… then they are billboards… etc… and just with these optimizations you get the stuff you want ( shadows are just for near objects, too… )

I still have another question.
Is it possible to disable self-shadowing so that the light source can be on either side of the face and you can always see the bump map?

If the dot product is smaller than zero, it must be multiplied by -1. Is this somehow possible with register combiners? Or is there another way?

Thanks
LaBasX2

Originally posted by LaBasX2:

Is it possible to disable self-shadowing so that the light source can be on either side of the face and you can always see the bump map?

It depends on the way you apply self-shadowing.

Originally posted by davepermen:
i have nonsmooth geometry ( looking like a tie fighter from star wars with thicker wings… for example ), and it looks nice…
if you have curves, you normally round them off with your vertices at design time so that they look good; if it is a “hard” curve you use many small faces… else fewer, bigger ones…

Well, you can optimize your mesh enough that it looks good without normalization. But you still have to send (unnecessary) per-vertex information down the pipeline. Imagine a common scene. Day (usually no point lights). Sun (infinite light source). Lawn. A wall (our static geometry). And every frame you supply a per-vertex tangent-space light vector. What for? Isn’t it simpler (and less expensive) to apply object-space bump mapping here?
Alexei.

P.S. Ok, I think we shouldn’t be confined to one method, especially if the situation doesn’t demand it.
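
P.P.S. For the wall case, baking the object-space normal map can be as simple as this (a sketch: it assumes the wall lies in the object’s x-y plane, so object space and tangent space coincide; the height array and scale factor are placeholders):

#include <math.h>

/* central differences over a height field -> range-compressed normals;
   wraps at the borders so the result tiles like the height map */
void bakeNormalMap(const float *height, int w, int h,
                   float scale, unsigned char *rgb)
{
    int x, y;
    for (y = 0; y < h; y++) {
        for (x = 0; x < w; x++) {
            int xp = (x + 1) % w, xm = (x + w - 1) % w;
            int yp = (y + 1) % h, ym = (y + h - 1) % h;
            float dx = (height[y*w + xp] - height[y*w + xm]) * scale;
            float dy = (height[yp*w + x] - height[ym*w + x]) * scale;
            float len = (float)sqrt(dx*dx + dy*dy + 1.0f);
            /* normal = normalize(-dx, -dy, 1), biased into [0,255] */
            rgb[(y*w + x)*3 + 0] = (unsigned char)(255.0f * (0.5f * -dx/len + 0.5f));
            rgb[(y*w + x)*3 + 1] = (unsigned char)(255.0f * (0.5f * -dy/len + 0.5f));
            rgb[(y*w + x)*3 + 2] = (unsigned char)(255.0f * (0.5f * 1.0f/len + 0.5f));
        }
    }
}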

by self-shadowing he means two-sided lighting, which doesn’t work out of the box… you can do it in different ways, for example checking per vertex which side you are on and flipping the light dir at the normal ( reflecting ), or doing it per pixel ( color0 = lightdir, color1 = reflected light dir ), or doing it with one vec in one pass with GL_EXPAND_NORMAL_NV and… hm… GL_EXPAND_NEGATE_NV, one of those input mappings… and then taking the larger one… whatever…
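
the per-vertex variant as a sketch ( Vec3 is whatever vector type you use ):

typedef struct { float x, y, z; } Vec3;

/* if the light is behind the face, mirror it across the face plane,
   so n.l flips sign and the bumps stay visible on both sides */
Vec3 twoSidedLightDir(Vec3 n, Vec3 l)   /* n must be unit length */
{
    float d = n.x*l.x + n.y*l.y + n.z*l.z;
    if (d < 0.0f) {
        l.x -= 2.0f * d * n.x;
        l.y -= 2.0f * d * n.y;
        l.z -= 2.0f * d * n.z;   /* new n.l = -(old n.l) > 0 */
    }
    return l;
}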

to Alexei_Z… do what you want; if you like your way, and if it is a nice way ( looks good and is fast ), choose it… but i like mine, simply because you can’t use yours on non-static objects ( and that on a gf2, even with double-precision HILO normal maps ), and i LIKE it on the non-static ones… it’s great to have a field of enemy spaceships ( about… 50, more or less, depends on how good you are ), and you, and a fully dynamic asteroid field… or flying over a planet where the landscape can be destroyed by anything… and THERE, WHERE STUFF MOVES, bump maps look really nice, not on static objects… ( my opinion… )

for example a simple landscape with directional sunlight… it never needs a bump map as long as the landscape is static… you can’t see the difference… you don’t have to do this work every frame on your gpu… ( poor thing )… but the moment the landscape can move, or moving lights are on it, you have to bump… and with tangent space it doesn’t matter which of those are moving… otherwise you have problems… ( and you can use one “detail” bump map… no need to create a texture for every tile of the field… which means saving a lot of memory… bump maps are much smaller AND more detailed if you use tangent space, because you can reuse them several times in several places… otherwise you have to organize them like ordinary lightmaps… and i don’t like lightmaps… )

my two cents… ( or more )

Well, your two cents…
However, what do you call “non-static objects”? Spaceships? Asteroids? A planet? I always thought they had inherently static geometry. They can be destroyed? I think the normal map, as well as the tangent space, will be destroyed too (ok, not always). Well, my example with a wall is not a very good one, because a wall can’t move… But spaceships can; they would be a better example. A spaceship lit by a star… It can move, and very fast… Ok, I prefer object-space bump mapping in this situation. It looks good and it is cheap. And no problem with relativity of movement…
Alexei.

P.S. Well, I like how you defend your favorite method (actually, I love it too! Just no time to implement it yet), and thanks for the interesting discussion!

Thanks for your help again!

i mean just movable objects, moving around in worldspace and even in objectspace ( animated meshes… ), or realtime-deformable meshes… etc… just everything… this can’t be done without recalculating the normals… and there it’s much, much cheaper to just recalc the tangent space… or how do you create your normal map for worldspace stuff? ( that’s interesting, you’re right… but after half a year i got my tangent space working… so i am happy with it )

the only problem is orienting yourself in the different spaces… especially if you write it as a vertex program… ( don’t have specular lighting yet… i don’t know where my eye is )… it’s terrible there because you just have r0, r1, etc… r15 i think is the last… and which one is now my vertex in object, in world, in screen and in tangent space, which one is the light vec in which space, etc…

terrible…

the best way for your stuff is: quake lightmaps… you simply extend them… leave the lightmap as it is, just put a normal map next to it… and then you can do the specular part of the lighting in realtime ( because that changes even under static lighting ) and leave the diffuse/shadow etc. as it was before… very fast and cheap…