Bump mapping



LaBasX2
04-18-2001, 10:14 AM
Hi!

Is it true that you can't repeat a bump map on a large polygon using dot3 bump mapping and register combiners? I think I've read that in some document, and it doesn't work in my own app either. What do you do if you have a large wall in your map where the brick texture is repeated several times and you want your bump map to fit that brick texture? Is there a way to get this working?

Thanks
LaBasX2

davepermen
04-18-2001, 11:25 AM
It should work nicely if the u and v directions of the map are perpendicular in world space and you use a normalization cube map to renormalize the point_to_light vector per pixel.
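For what it's worth, here's a rough sketch of how such a normalization cube map can be built on the CPU: each texel stores the normalized direction pointing towards it, packed into 0..255 RGB. It assumes GL_ARB_texture_cube_map and that the cube-map texture object is already bound; the face-direction table follows the cube map spec's major-axis convention.

#include <GL/gl.h>     /* plus glext.h for the ARB cube map enums */
#include <math.h>
#include <stdlib.h>

/* direction towards texel (s,t) in [-1,1]^2 on the given cube face */
static void face_direction(int face, float s, float t, float dir[3])
{
    switch (face) {
    case 0: dir[0] =  1; dir[1] = -t; dir[2] = -s; break;  /* +X */
    case 1: dir[0] = -1; dir[1] = -t; dir[2] =  s; break;  /* -X */
    case 2: dir[0] =  s; dir[1] =  1; dir[2] =  t; break;  /* +Y */
    case 3: dir[0] =  s; dir[1] = -1; dir[2] = -t; break;  /* -Y */
    case 4: dir[0] =  s; dir[1] = -t; dir[2] =  1; break;  /* +Z */
    case 5: dir[0] = -s; dir[1] = -t; dir[2] = -1; break;  /* -Z */
    }
}

void build_normalization_cube_map(int size)   /* e.g. size = 64 */
{
    unsigned char *pixels = malloc(size * size * 3);
    int face, x, y;
    for (face = 0; face < 6; face++) {
        for (y = 0; y < size; y++) {
            for (x = 0; x < size; x++) {
                float s = 2.0f * (x + 0.5f) / size - 1.0f;
                float t = 2.0f * (y + 0.5f) / size - 1.0f;
                float d[3], len;
                face_direction(face, s, t, d);
                len = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                /* pack [-1,1] into [0,255]; the EXPAND_NORMAL mapping unpacks it again */
                pixels[(y*size + x)*3 + 0] = (unsigned char)(127.5f * (d[0]/len + 1.0f));
                pixels[(y*size + x)*3 + 1] = (unsigned char)(127.5f * (d[1]/len + 1.0f));
                pixels[(y*size + x)*3 + 2] = (unsigned char)(127.5f * (d[2]/len + 1.0f));
            }
        }
        /* the six face enums are consecutive, so POSITIVE_X + face works */
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }
    free(pixels);
}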

Alexei_Z
04-18-2001, 10:57 PM
Originally posted by davepermen:
It should work nicely if the u and v directions of the map are perpendicular in world space and you use a normalization cube map to renormalize the point_to_light vector per pixel.

If it is just a wall, it would be better to use object-space bump mapping.
No normalization cube map, no tangent-space light vectors.
And no problem.
Alexei.

Michail Bespalov
04-18-2001, 11:47 PM
Is it true that you can't repeat a bump map on a large polygon using dot3 bump mapping and register combiners?

No.



If it is just a wall, it would be better to use object-space bump mapping.
No normalization cube map, no tangent-space light vectors.


...and have a unique normal map for each wall,
and recompute the normal map every time non-static geometry changes or moves. And you still need the cube map for large triangles and a close light: object-space light vectors become unnormalized just like tangent-space light vectors do.



And no problem.


For me this method is one big problem.
Am I wrong ?




Alexei_Z
04-19-2001, 12:45 AM
Michail, you are right.
However, I didn't say that the method is better than tangent-space bump mapping.
I said "if it is just a wall,...". Ok, in other words, it depends on the particular
requirements of your application. Yes, you should have a unique normal map for each
object. But a few different bricks... And I believe a wall is usually a static object,
so there's no need to recompute it.
I also agree there are problems with close lights. And what if you don't use them?
And what about the speed you can gain by using object-space bump mapping with fewer
passes than the more robust technique needs?
So it depends...
Alexei.

LaBasX2
04-19-2001, 12:54 AM
Thanks for your help!

I've found a bug in my code. Repeating bump maps really does work, but of course you have to set the bump map's texture wrap parameters to GL_REPEAT, while the distance-attenuation texture needs GL_CLAMP. That was the thing I was missing.
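In case it helps anyone else, a minimal sketch of that wrap-mode setup (the texture object names bump_tex and atten_tex are just placeholders):

glBindTexture(GL_TEXTURE_2D, bump_tex);                      /* repeating bump/normal map */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

glBindTexture(GL_TEXTURE_2D, atten_tex);                     /* distance-attenuation map */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);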

Michail Bespalov
04-19-2001, 03:06 AM
And what about the speed you can gain by using object-space bump mapping with fewer passes than the more robust technique needs?


Everything you can do in object space you can do in tangent space with the same number of passes. The number of passes doesn't depend on the space. The only difference I can see is that with tangent space you have to transform L into tangent space (3 dot products per vertex).
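For illustration, a small sketch of those three dot products per vertex (the names are just illustrative): project the object-space vertex-to-light vector onto the vertex's tangent, binormal and normal.

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* L_obj = light position minus vertex position, both in object space */
vec3 light_to_tangent_space(vec3 L_obj, vec3 tangent, vec3 binormal, vec3 normal)
{
    vec3 L_tan;
    L_tan.x = dot3(L_obj, tangent);    /* the three dot products mentioned above */
    L_tan.y = dot3(L_obj, binormal);
    L_tan.z = dot3(L_obj, normal);
    return L_tan;   /* sent per vertex (e.g. as a colour) and renormalized per pixel */
}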

And yes, it all depends on the application's needs.

Alexei_Z
04-19-2001, 05:55 AM
Originally posted by Michail Bespalov:
The number of passes doesn't depend on the space. The only difference I can see is that with tangent space you have to transform L into tangent space (3 dot products per vertex).

Yes, it doesn't depend on the space; it depends on the particular technique we use for per-pixel lighting. If we calculate lighting in tangent space, we supply a per-vertex tangent-space light vector, which must be interpolated and renormalized between vertices; we use a normalization cube map for that. In object space there is one light vector for the whole object, and we have to normalize it only once on the CPU. So instead of supplying a normalization cube map to the register combiners, we can supply any other texture. Can that decrease the number of passes? I think so.
Alexei.

davepermen
04-19-2001, 06:24 AM
That's not true if you have a point light source which can move around (otherwise you can use a light map, and yes, then you save some passes).

If you have a directional light you don't need to renormalize with a cube map either. Why? Because the possible error is smaller than the error of the 8-bit-per-component normal itself.

But the moment you use a point light, you need the normalization cube map when it gets close to the surface.

Michail Bespalov
04-19-2001, 06:57 AM
In object space there is one light vector for the whole object.

Only for an infinite light source.

cass
04-19-2001, 07:06 AM
Object-space bump mapping is cheaper under certain conditions. Tangent-space bump mapping is far more general and not terribly more expensive on GeForce hardware.

Alexei_Z
04-19-2001, 07:21 AM
Originally posted by davepermen:

If you have a directional light you don't need to renormalize with a cube map either. Why? Because the possible error is smaller than the error of the 8-bit-per-component normal itself.

I'm not so optimistic about that. The error would be significant in the case of non-smooth geometry.
Alexei.

davepermen
04-19-2001, 07:30 AM
I have non-smooth geometry (looking like a TIE fighter from Star Wars with thicker wings, for example), and it looks fine. If you have curves, you normally make them round enough with your vertices at design time so that they look good: if it is a "hard" curve you use many small faces, otherwise fewer, bigger ones. And if the mesh's "emulated" curves look good to you as the designer, they look good to the coder, because the error is simply small enough to ignore. And if the light gets too close, you can use a second pass with a cube map; but for most meshes in the scene, except maybe the one nearest the light, you don't need it. You really have to move the light right up to the object to see the error, and the mesh has to be near the camera. I think in that case you can switch; far-away objects don't even get a bump map because you can't see it anyway.

LOD isn't just good for quadtrees; it applies to textures (mipmaps) and to rendering precision too (ppl_with_bump, pvl_with_bump, pvl, pol, clipped against the far clip plane...).

That's how it is; life is hard for coders today.

But it's your job to get something nice running fast. Look at X-Isle: grass out to about 10 meters, then it's flat; trees out to about 20 meters, then they're billboards; and so on. Only with these optimizations do you get the results you want (shadows are only for near objects, too).

LaBasX2
04-19-2001, 09:46 AM
I still have another question.
Is it possible to disable self-shadowing so that the light source can be on either side of the face and you can always see the bump map?

If the dot product is smaller than zero it must be multiplied by -1. Is this somehow possible with the register combiners? Or is there another way?

Thanks
LaBasX2

Alexei_Z
04-20-2001, 12:31 AM
Originally posted by LaBasX2:

Is it possible to disable self-shadowing so that the light source can be on either side of the face and you can always see the bump map?


It depends on the way you apply self-shadowing.



Originally posted by davepermen:
I have non-smooth geometry (looking like a TIE fighter from Star Wars with thicker wings, for example), and it looks fine.
If you have curves, you normally make them round enough with your vertices at design time so that they look good: if it is a "hard" curve you use many small faces, otherwise fewer, bigger ones.

Well, you can optimize your mesh enough to look good without normalization. But you still have to send (unnecessary) per-vertex information down the pipeline. Imagine a common scene: day (usually no point lights), sun (an infinite light source), a lawn, a wall (our static geometry). And every frame you supply a per-vertex tangent-space light vector. What for? Isn't it simpler (and cheaper) to apply object-space bump mapping here?
Alexei.

P.S. Ok, I think we shouldn't be confined to one method, especially if the situation doesn't demand it.

davepermen
04-20-2001, 06:29 AM
By self-shadowing he means two-sided lighting, which doesn't work by default. You can do it in different ways: for example, check per vertex which side you are on and flip the light direction about the normal (reflecting it); or do it per pixel (color0 = light dir, color1 = reflected light dir); or feed the same vector in twice, once with the GL_EXPAND_NORMAL_NV input mapping and once with the negated mapping (GL_EXPAND_NEGATE_NV, or whichever it is called), and then take the larger of the two dot products. Whatever works.
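As a concrete sketch of that last variant (my own rough code, not a tested recipe): assume texture unit 0 holds the dot3 normal map and unit 1 the normalization cube map looked up with the interpolated light vector. Stage 0 computes both N.L and N.(-L); stage 1 adds the two after unsigned clamping, which gives |N.L|; the final combiner modulates that with the primary colour.

glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

/* stage 0: spare0 = N . L,  spare1 = N . (-L)  (both signed) */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV,
                  GL_TEXTURE1_ARB, GL_EXPAND_NEGATE_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_SPARE1_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_TRUE, GL_FALSE);

/* stage 1: spare0 = max(spare0,0)*1 + max(spare1,0)*1 = |N.L|
   (GL_UNSIGNED_IDENTITY_NV clamps the negative half of each dot to zero) */
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);    /* constant 1 */
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV,
                  GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV,
                  GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);    /* constant 1 */
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB,
                   GL_DISCARD_NV, GL_DISCARD_NV, GL_SPARE0_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* final combiner: |N.L| * primary colour */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);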

To Alexei_Z: do what you want; if you like your way and it is a nice way (looks good and is fast), choose it. But I like mine, simply because you can't use yours on non-static objects (and on a GF2 even with double-precision HILO normal maps), and I LIKE it on the non-static ones. It's great to have a field of enemy spaceships (about 50, more or less, depending on how good you are), you, and a fully dynamic asteroid field; or flying over a planet where the landscape can be destroyed by anything. And THERE, where stuff moves, bump maps look really nice, not on static objects (my opinion).

For example, a simple landscape with directional sunlight never needs a bump map as long as the landscape is static; you can't see the difference, so you don't have to do that work every frame on your (poor) GPU. But the moment the landscape can move, or moving lights are on it, you have to bump map, and with tangent space it doesn't matter which of those is moving; otherwise you have problems. (And you can use one "detail" bump map, so there's no need to create a texture for every tile of the field, which saves a lot of memory. Bump maps are much smaller AND more detailed if you use tangent space, because you can then reuse them several times in several places; otherwise you have to organize them like ordinary light maps, and I don't like light maps.)

my two cents.. ( or more )

Alexei_Z
04-20-2001, 08:14 AM
Well, your two cents...
However, what do you call "non-static objects"? Spaceships? Asteroids? A planet?
I always thought they had inherently static geometry. They can be destroyed? I think the normal map, as well as the tangent space, will be destroyed too (ok, not always). Well, my example with a wall is not a very good one, because a wall can't move... But spaceships can; that would be a better example. A spaceship lit by a star...
It can move, and very fast... Ok, I prefer object-space bump mapping in this situation. It looks good and it is cheap. And no problem with relativity of movement...
Alexei.

P.S. Well, I like how you defend your favorite method (actually, I love it too! I just haven't had time to implement it yet), and thanks for the interesting discussion!

LaBasX2
04-20-2001, 12:23 PM
Thanks for your help again!

davepermen
04-21-2001, 02:44 AM
I just mean movable objects: objects moving around in world space and even in object space (animated meshes), or real-time deformable meshes, etc. Just about everything. That can't be done without recalculating the normals, and there it's much, much cheaper to just recalculate the tangent space. Or how do you create your normal map for world-space stuff? (That's interesting, you're right... but after half a year I finally got my tangent space working, so I'm happy with it.)

davepermen
04-21-2001, 02:49 AM
The only problem is orienting yourself in the different spaces, especially if you write it as a vertex program (I don't have specular lighting working yet; I don't know where my eye is). It's terrible there because you just have r0, r1, etc. (r15 is the last one, I think), and which one is now my vertex in object, in world, in screen and in tangent space, which one is the light vector in which space, and so on...

Terrible...

The best approach for your stuff is Quake-style light maps, which you can extend simply: leave the light map as it is and just add a normal map to it. Then you can do the specular part of the lighting in real time (because that changes even with static lighting) and leave the diffuse/shadow part as it was before. Very fast and cheap.

Alexei_Z
04-21-2001, 06:26 AM
Originally posted by davepermen:
Or how do you create your normal map for world-space stuff?
I take a texture (or an array, it doesn't matter) representing a height field. Then I convert it into a normal map, just as if I were calculating vertex normals for a mesh, so I get a "tangent-space" normal map. Then I take my object's mesh and, for each normal, calculate a transformation matrix from tangent space to object space and multiply the normal by it. (Actually, for my modeler's meshes I just use the functions by which the mesh was created, so I don't need highly tessellated meshes.) Then I store the transformed normals in an RGB(A) texture. I've built all of this into my modeler program, so I only need a source bitmap from Photoshop (or anywhere else), or a height-field array constructed from highly tessellated geometry...
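Roughly, in code (a sketch of the idea with simplifying assumptions: central differences on a tiling height field, and the per-texel tangent frame supplied from outside, however your modeler computes it):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* tangent-space normal at texel (x,y) of a tiling height field (central differences) */
static vec3 height_to_normal(const float *height, int w, int h, int x, int y, float bump_scale)
{
    float dx = height[y*w + (x+1)%w]   - height[y*w + (x-1+w)%w];
    float dy = height[((y+1)%h)*w + x] - height[((y-1+h)%h)*w + x];
    vec3 n = { -dx*bump_scale, -dy*bump_scale, 1.0f };
    return normalize3(n);
}

/* rotate the normal into object space with the local tangent frame and pack it into RGB */
static void store_object_space_normal(unsigned char *texel, vec3 n,
                                      vec3 tangent, vec3 binormal, vec3 normal)
{
    vec3 o = {
        tangent.x*n.x + binormal.x*n.y + normal.x*n.z,
        tangent.y*n.x + binormal.y*n.y + normal.y*n.z,
        tangent.z*n.x + binormal.z*n.y + normal.z*n.z
    };
    o = normalize3(o);
    texel[0] = (unsigned char)(127.5f * (o.x + 1.0f));
    texel[1] = (unsigned char)(127.5f * (o.y + 1.0f));
    texel[2] = (unsigned char)(127.5f * (o.z + 1.0f));
}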

Object-space bump mapping is good for static geometry. It is fast, and it doesn't load the CPU and the AGP bus the way the tangent-space technique does. The main drawback is that a bump map cannot be repeated on a curved surface, as you pointed out above. However, that is not a problem if the bump map represents not just surface roughness but tiny geometric details, like parts of machines and mechanisms, sculpted images, etc. Those usually don't repeat exactly anyway...
As for orientation in the different spaces... I don't use vertex programs yet (that's because I'm still on the official 6.5 drivers). Maybe there are problems with it, I don't know.
Ok, enough is enough, this is getting very long!
Alexei.

davepermen
04-21-2001, 06:42 AM
So you create the tangent space in order to create the normals... ok, great, I'll think about it. But right now I want to get specular working.

It's just annoying: everything is projected to screen space (through the modelview and projection matrices: the position, the light, the tangent space). In that space the eye position is (0, 0, 0), or isn't it? So I have to calculate the half-angle vector (normalize(normalize(point_to_eye) + normalize(point_to_light))). I thought I had done this... damn.
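For what it's worth, a tiny sketch of that half-angle computation in eye space, where the eye really is at the origin after the modelview transform (names are just illustrative):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 normalize3(vec3 v)
{
    float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
    vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* P = vertex position in eye space, Lpos = light position in eye space */
vec3 half_angle(vec3 P, vec3 Lpos)
{
    vec3 to_eye   = { -P.x, -P.y, -P.z };                        /* eye sits at the origin */
    vec3 to_light = { Lpos.x - P.x, Lpos.y - P.y, Lpos.z - P.z };
    vec3 h;
    to_eye   = normalize3(to_eye);
    to_light = normalize3(to_light);
    h.x = to_eye.x + to_light.x;
    h.y = to_eye.y + to_light.y;
    h.z = to_eye.z + to_light.z;
    return normalize3(h);    /* then move it into object/tangent space like the light vector */
}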

Alexei_Z
04-21-2001, 10:09 AM
Yes, you should transform your vectors from eye space to object space (and to tangent space if you need it). But what is the difference between a light vector and a half-angle vector? You calculate your H vector in world (eye) space and then convert it into object space just as you do with the L vector... However, there is another problem with specular lighting. Good results need a specular exponent of about 128 or so, which cannot be achieved with the GF(2) register combiners. The most accurate approximation I got is
Intensity = 4096*((N' dot H) - 0.7828)^6 + 256*((N' dot H) - 0.7828)^4,
which is better than the commonly used (N' dot H)^8 or (4*((N' dot H) - 0.75))^2
(all written without the Max(0, ...) for simplicity), but still much worse than ^128...
With the approximation above it is possible to do diffuse + specular lighting
in two passes (using one general combiner stage for the first pass).
Unfortunately, self-shadowing is correct only for diffuse lighting; for specular lighting it needs software emulation or additional passes...
So I think it's better to use a cube map and encode the specular lighting in it.
But objects that aren't very shiny can get along without a cube map.
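If anyone wants to compare the curves, here is a throwaway C snippet (my own, with the Max(0, ...) clamp put back in) that prints the approximation above next to (N' dot H)^128 and the two simpler variants:

#include <math.h>
#include <stdio.h>

static float clamp0(float x) { return x > 0.0f ? x : 0.0f; }

int main(void)
{
    float ndoth;
    for (ndoth = 0.80f; ndoth <= 1.001f; ndoth += 0.05f) {
        float t      = clamp0(ndoth - 0.7828f);
        float approx = 4096.0f*powf(t, 6.0f) + 256.0f*powf(t, 4.0f);
        float pow8   = powf(ndoth, 8.0f);
        float quad   = powf(clamp0(4.0f*(ndoth - 0.75f)), 2.0f);
        printf("N.H=%.2f  ^128=%.4f  approx=%.4f  ^8=%.4f  (4(x-0.75))^2=%.4f\n",
               ndoth, powf(ndoth, 128.0f), approx, pow8, quad);
    }
    return 0;
}
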
Well, I must go. Bye to everyone who's interested in bump mapping!
Alexei.

