dot3 bump mapping quality



jorge1774
03-13-2003, 11:18 AM
Hi all! This is my first post!

I've been working on the dot3 bump mapping with a geforce4 ti4200, and I have some questions about the quality of the effect.

So far I've only tested dot3 in the combiners with combiner-based normalization, which I think looks VERY poor on large polygons, and I doubt this is the method used in games like MotoGP or Doom 3.

Which of these gives the best-quality bump mapping?

1) a vertex program transforming the light vector and half-angle vector to tangent space, then normalization and dot3 in the combiners

2) a vertex program transforming the light vector and half-angle vector to tangent space, normalization with two cubemaps, and dot3 in the combiners

3) a vertex program transforming the light vector and half-angle vector to tangent space, then dot3 via a texture shader (is normalization required?)

4) supplying the light vector and half-angle vector per vertex (in tangent space), calculated on the CPU, then combiners + normalization cubemaps (sketched in the example below)
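
As an illustration of option 4, here is a minimal CPU-side sketch (an editorial example, not code from the thread; function and variable names are mine, and each vertex is assumed to carry a tangent/bitangent/normal basis):

    #include <math.h>

    static float dot3(const float a[3], const float b[3])
    {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    static void normalize3(float v[3])
    {
        float s = 1.0f / sqrtf(dot3(v, v));
        v[0] *= s; v[1] *= s; v[2] *= s;
    }

    /* Per vertex: build the light and half-angle vectors and project them
       into the tangent-space basis (T, B, N), ready to be fed to the
       combiners as per-vertex attributes. All inputs are in object space. */
    void tangent_space_L_H(const float P[3], const float T[3], const float B[3],
                           const float N[3], const float lightPos[3],
                           const float eyePos[3], float outL[3], float outH[3])
    {
        float L[3], E[3], H[3];
        int i;
        for (i = 0; i < 3; ++i) {
            L[i] = lightPos[i] - P[i];
            E[i] = eyePos[i]   - P[i];
        }
        normalize3(L);
        normalize3(E);
        for (i = 0; i < 3; ++i)
            H[i] = L[i] + E[i];
        normalize3(H);
        /* project into the tangent-space basis */
        outL[0] = dot3(T, L); outL[1] = dot3(B, L); outL[2] = dot3(N, L);
        outH[0] = dot3(T, H); outH[1] = dot3(B, H); outH[2] = dot3(N, H);
    }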

I also have a question about texture shaders: I have seen some NVIDIA demos that skip normalization by using texture shaders, but I suppose those are NVIDIA-specific demos. Can anyone tell me something about that?

I want to test all the possibilities, and I will post my results here as soon as I finish them.

See you.

jorge1774
03-13-2003, 11:33 AM
One more thing...

When I say quality I mean the quality of the specular highlight, not the diffuse, which is really good.

I think the loss of quality comes from the half-angle vector being calculated in a vertex program, normalized twice there, then normalized again in the combiners, and squared four times (((((N.H)^2)^2)^2)^2 = (N.H)^16). This leaves me with a highlight that looks like 8-bit colour banding, and nearly linear on big polygons close to the screen...

Thanks all.

vincoof
03-13-2003, 11:55 AM
For high-quality specular highlights, you can use HILO textures, available on NVIDIA hardware (GF3 and up, I think).

l33t
03-13-2003, 06:11 PM
There should be no difference between vectors computed on the CPU and in a vertex program.

I would transform the normal vector (read out of the normal map) into light (object) space instead; this also allows you to do per-pixel envmapping. It takes 3 DP3 ops, so it's marginally doable on GF3, doable on a Radeon 8500, and expected on anything with ARB_fragment_program :-).

If you get an interpolated value out of your normal map, and it's not of super-high resolution, you may need to normalize after you read it; something like read->normalize->transform-into-object-space. That level of dependency may need ARB_fragment_program hardware, unless you're OK with storing intermediates in a render target.
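
Spelled out in plain C (an editorial sketch of the math described above; row0..row2 stand for the interpolated rows of the tangent-to-object matrix):

    /* Rotate the tangent-space normal fetched from the normal map into
       object space: one dot product per output component, i.e. the
       "3 dp ops" mentioned above. */
    void tangent_to_object(const float n_ts[3],
                           const float row0[3],   /* (Tx, Bx, Nx) */
                           const float row1[3],   /* (Ty, By, Ny) */
                           const float row2[3],   /* (Tz, Bz, Nz) */
                           float n_os[3])
    {
        n_os[0] = row0[0]*n_ts[0] + row0[1]*n_ts[1] + row0[2]*n_ts[2];
        n_os[1] = row1[0]*n_ts[0] + row1[1]*n_ts[1] + row1[2]*n_ts[2];
        n_os[2] = row2[0]*n_ts[0] + row2[1]*n_ts[1] + row2[2]*n_ts[2];
    }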

vincoof
03-13-2003, 11:31 PM
I agree, but then in object space HILO textures are not available. HILO should be used in tangent space only, afaik.

jorge1774
03-14-2003, 02:59 AM
Hi!

Why are HILO textures not available in object space? Can you explain that?

I have verified that there is no difference between transforming the light vector to tangent space in a vertex program or on the CPU, but I can see a big difference between normalization in the register combiners and with cubemaps.

The cubemap-normalized version gives me more precision in the contour of the specular highlight, but less precision in the colour gradation. It looks even more like 8-bit colour interpolation than before.

See you.

raverbach
03-14-2003, 04:43 AM
The dot3 bump mapping used in the Doom III engine, Far Cry and Tenebrae (a Quake modification) is based on transforming the light and half-angle vectors to tangent space on the CPU, then combining with vendor-specific (i.e. NV_register_combiners) or generic (GL_DOT3_RGBA_EXT) hardware combiners.
The specular highlight is mostly done multipass (Carmack said that the R300 and NV30 chipsets can support single-pass rendering and achieve great results).
Then they use a pixel shader and the stencil buffer to enhance shadows even further...
But I think your main disappointment with dot3 is due to the textures you're using: you need a normal map and a high-quality base texture to make your specular highlights look good and shine.
One question: do you live in Brazil?

raverbach
03-14-2003, 04:46 AM
To be more specific, I guess that Doom III only uses the ARB2 path and vendor-specific combiner extensions, while Tenebrae uses the generic ARB dot3 extension. :)

vincoof
03-14-2003, 10:51 AM
HILO may be used solely in tangent space because HILO, as the name suggests, has only 2 components: X and Y. The Z component is then computed automatically, under the assumptions that (X,Y,Z) is a unit vector and that Z is positive (if Z could be negative, there would be up to 2 solutions to the equation).

In tangent space, the Z component is always positive, which fits HILO's positive-Z assumption perfectly. But in object space the Z component may be negative as well as positive, so HILO cannot work (unless you have a very special mesh that guarantees all perturbed normals have a positive Z in object space).
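
In C the reconstruction amounts to this (a minimal sketch; the function name is mine):

    #include <math.h>

    /* Recover Z from a 2-component HILO normal, assuming (x, y, z) is a
       unit vector with z >= 0 -- the tangent-space case described above. */
    float hilo_z(float x, float y)
    {
        float d = 1.0f - x * x - y * y;
        return d > 0.0f ? sqrtf(d) : 0.0f;   /* clamp guards rounding error */
    }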

jwatte
03-14-2003, 12:25 PM
Vincoof,

I don't see the problem. The normal map still stores a tangent-space normal; that's what you fetch out of the texture. Then you use the fragment program to transform that normal into object space -- that is, after the texel has been fetched.

jorge1774
03-15-2003, 12:05 AM
Hi!

Thanks for the HILO explanation, but I still don't know why cubemap normalization gives me a more perfect highlight shape while losing quality in the colour scale... any ideas?

Thanks!

jorge1774
03-15-2003, 12:10 AM
Hi raverbach.

About GL_DOT3_RGBA_EXT, I still don't know how to use it, because there seems to be no way of normalizing the light vector or half-angle vector per pixel before the DOT3... or is there?

I live in Spain.

See you!

jorge1774
03-15-2003, 12:21 AM
And still more stupid questions...

Is there any way the GeForce4 can interpolate vectors in angular coordinates, rather than as separate r,g,b or s,t,q components?

Thanks all.

Korval
03-15-2003, 01:38 AM
About GL_DOT3_RGBA_EXT, I still don't know how to use it, because there seems to be no way of normalizing the light vector or half-angle vector per pixel before the DOT3... or is there?

nVidia has a good number of demos and papers on tangent-space bump mapping. They explore all the problems of the method and the solutions to those problems.

In this case, you should use a "renormalization" cube map. You pass your 3-vector as a texture coordinate; when it is looked up in this cube map, it produces a colour-vector pointing in the same direction but normalized. This colour-vector can then be used for your dot product.
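
For reference, here is a minimal sketch of how such a cube map can be built (an editorial example; the face/axis mapping follows the OpenGL cube map specification, and the [-1,1]-to-[0,1] packing is undone at combiner time with the expand mapping):

    #include <GL/gl.h>
    #include <math.h>

    /* Fill one face of a normalization cube map: each texel's RGB encodes
       the unit vector pointing at that texel, packed from [-1,1] to [0,1]. */
    static void build_norm_face(GLenum face, int size, unsigned char *buf)
    {
        int s, t;
        for (t = 0; t < size; ++t)
        for (s = 0; s < size; ++s) {
            float sc = 2.0f * (s + 0.5f) / size - 1.0f;
            float tc = 2.0f * (t + 0.5f) / size - 1.0f;
            float x, y, z, inv;
            unsigned char *p = buf + 3 * (t * size + s);
            switch (face) {
            case GL_TEXTURE_CUBE_MAP_POSITIVE_X: x =  1.0f; y = -tc;   z = -sc;   break;
            case GL_TEXTURE_CUBE_MAP_NEGATIVE_X: x = -1.0f; y = -tc;   z =  sc;   break;
            case GL_TEXTURE_CUBE_MAP_POSITIVE_Y: x =  sc;   y =  1.0f; z =  tc;   break;
            case GL_TEXTURE_CUBE_MAP_NEGATIVE_Y: x =  sc;   y = -1.0f; z = -tc;   break;
            case GL_TEXTURE_CUBE_MAP_POSITIVE_Z: x =  sc;   y = -tc;   z =  1.0f; break;
            default:                             x = -sc;   y = -tc;   z = -1.0f; break;
            }
            inv = 1.0f / sqrtf(x * x + y * y + z * z);
            p[0] = (unsigned char)(255.0f * (0.5f * x * inv + 0.5f));
            p[1] = (unsigned char)(255.0f * (0.5f * y * inv + 0.5f));
            p[2] = (unsigned char)(255.0f * (0.5f * z * inv + 0.5f));
        }
        glTexImage2D(face, 0, GL_RGB8, size, size, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, buf);
    }

Call it once per face; at lookup time the interpolated (unnormalized) vector indexes the map, and the fetched colour is the normalized vector.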

jorge1774
03-15-2003, 01:50 AM
Hi!

I have read somewhere on the forum that the only thing you can do with the half vector is to calculate per vertex the POINT TO LIGHT vector and the POINT TO EYE vector, then calculate the half-angle ON THE COMBINERS and normalize it through an approximation. This seems to be my problem.

I will probably test the reflected light vector in tangent space instead. Reflecting L about the unperturbed normal (0,0,1) gives:

Rx = -Lx;
Ry = -Ly;
Rz = Lz;

And then computing dot(normalize(R), normalize(ToEye))^n in the combiners.
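
(An editorial sketch of that computation in plain C, before mapping it onto the combiners:)

    #include <math.h>

    /* Specular term via the reflection vector, in tangent space with the
       unperturbed normal (0,0,1): R = 2(N.L)N - L = (-Lx, -Ly, Lz).
       Inputs need not be pre-normalized. */
    float phong_specular(const float L[3], const float toEye[3], float exponent)
    {
        float R[3] = { -L[0], -L[1], L[2] };
        float rlen = sqrtf(R[0]*R[0] + R[1]*R[1] + R[2]*R[2]);
        float elen = sqrtf(toEye[0]*toEye[0] + toEye[1]*toEye[1]
                           + toEye[2]*toEye[2]);
        float c = (R[0]*toEye[0] + R[1]*toEye[1] + R[2]*toEye[2])
                  / (rlen * elen);
        return c > 0.0f ? powf(c, exponent) : 0.0f;
    }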

I hope this gives better results, and helps someone else.

Thanks.

jorge1774
03-15-2003, 01:56 AM
One more thing:

I still don't know why NVIDIA's Cg demos normalize the half vector per vertex, because that's incorrect.

See you.

MZ
03-15-2003, 08:11 AM
jorge1774, I've read your "half vector in pixel shading" thread too, and I'm afraid you are trying to achieve an effect which is impossible on GF3/4.

Carmack explained it very nicely in his .plan (2003-01-29):
(...) Per-pixel reflection vector calculations for specular, instead of an interpolated half-angle. The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight. Ideally, you want the same final image for a surface regardless of if it is two giant triangles, or a mesh of 1024 triangles. This will not be true if any calculation done at a vertex involves anything other than linear math operations (my highlight). The specular half-angle calculation involves normalizations, so the interpolation across triangles on a surface will be dependent on exactly where the vertexes are located. The most visible end result of this is that on large, flat, shiny surfaces where you expect a clean highlight circle moving across it, you wind up with a highlight that distorts into an L shape around the triangulation line.


About Nvidia demos and texture shaders: texture shaders let you achieve *very* good quality (smooth, high-exponent) specular highlights, but *only* in the following situations:

1. infinite light and infinite eye (DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP)
2. infinite light and local eye (DOT_PRODUCT_REFLECT_CUBE_MAP)
3. local light and infinite eye (DOT_PRODUCT_REFLECT_CUBE_MAP, if you swap E and L in Phong formula)

But what we want is "local light and local eye", and this is unfortunately impossible with TS.
Nvidia demos look nice because either:

1. most of them use infinite lights only (so lighting can be done with full precision per-pixel in TS)
2. they use very highly tessellated models (so lighting can be *partially* done per-vertex in the VP, because high tessellation can hide the distortions caused by interpolation). See the Cg demo "bump_reflect_local_light" and try to magnify the model as much as possible; you'll see something is fake (interpolated) in the specular highlight.

Of course, neither of the 2 methods above can be used in a typical FPS scene. You are screwed; all you can do is low-precision, low-exponent specular as in Doom3.

jwatte
03-15-2003, 11:27 AM
The graphics cards just interpolate values. They don't care (much) whether they're vectors, colors, or bank account balances per vertex.

If you want to interpolate in spherical coordinates, just stuff in spherical coordinates, and they will get interpolated. However, it's then up to you to use those spherical coordinates appropriately in your fragment program or register combiners, which may be tricky.
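
For instance, if you stuffed in a (theta, phi) pair per vertex, the per-fragment decode would be the standard conversion below (plain C for illustration; this trig is exactly the non-linear step that combiner-era hardware won't do for you):

    #include <math.h>

    /* Decode interpolated spherical angles back into a unit vector. */
    void sph_to_unit(float theta, float phi, float v[3])
    {
        v[0] = sinf(theta) * cosf(phi);
        v[1] = sinf(theta) * sinf(phi);
        v[2] = cosf(theta);
    }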

dorbie
03-15-2003, 01:13 PM
The real quality problem is interpolating normalized vectors vs. the precision & range of unnormalized vectors (positions), NOT lerp vs. slerp. The fragment-based data types on a lot of hardware lack the precision & range to store local positions accurately. Typically they are used to store normalized vectors, and the interpolation of normalized vectors is simply wrong: not because it's linear, but because it plainly points in the wrong direction during the interpolation, where a linear interpolation over unnormalized position vectors would get it right. A spherical interpolation would NOT get this right either. You can compromise by trying to squeeze unnormalized data into the piddly fragment types and scaling or renormalizing after interpolation, but that leads to more quantized-vector artifacts etc. Subdivision works because there are more correct normalized vertex vectors, which better approximate the correct local direction toward the position.

What I've described is still fundamentally different from a spherical vector interpolation, but that's OK, because a spherical interpolation is not needed and wouldn't fix this problem.

I think I see why this has been called spherical data, but I think that's slightly misleading, although I kind of agree it's spherical in nature since it radiates spherically from the position in 3D. The real issue is pre-interpolation normalization vs. post-interpolation normalization, and there are some intermediate options too. The underlying problem is/was the inherent hardware precision and range limits for interpolating fragment triplets. This problem has largely gone away with better hardware, but if you rip off an older vertex program that feeds normalized color vectors to the fragment program, you'll still be wrong even on the best hardware.

I'm puzzled by anyone calling this "spherical coordinates", though. It's a 3D position, nothing more; ultimately its linear interpolation doesn't give you a slerp, it's a hyperbolic function akin to perspective correction (I think). What ends up on the polygon is a section through a spherical field. It could be done with 3D texture coordinates and a 3D texture holding a sphere of vector data, but this has been discussed before.
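
To put numbers on this, a small editorial sketch (hypothetical positions): halfway along an edge, lerping the two pre-normalized per-vertex light vectors points in a very different direction than forming the vector from the interpolated position, which is what the fragment actually needs.

    #include <math.h>
    #include <stdio.h>

    static void normalize3(float v[3])
    {
        float s = 1.0f / sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        v[0] *= s; v[1] *= s; v[2] *= s;
    }

    int main(void)
    {
        float light[3] = { 0.0f, 0.0f, 1.0f };   /* local light position */
        float p0[3] = { -1.0f, 0.0f, 0.0f };     /* edge endpoints       */
        float p1[3] = { 10.0f, 0.0f, 0.0f };
        float good[3], bad[3], n0[3], n1[3];
        int i;

        for (i = 0; i < 3; ++i) {
            good[i] = light[i] - 0.5f * (p0[i] + p1[i]); /* from lerped position */
            n0[i] = light[i] - p0[i];
            n1[i] = light[i] - p1[i];
        }
        normalize3(n0);
        normalize3(n1);
        for (i = 0; i < 3; ++i)
            bad[i] = 0.5f * (n0[i] + n1[i]);             /* lerp of normalized   */
        normalize3(good);
        normalize3(bad);
        printf("from position : %+.3f %+.3f %+.3f\n", good[0], good[1], good[2]);
        printf("lerped normals: %+.3f %+.3f %+.3f\n", bad[0], bad[1], bad[2]);
        return 0;
    }

The two directions differ by nearly 60 degrees here; this is the kind of error that subdivision hides.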

[This message has been edited by dorbie (edited 03-15-2003).]

jorge1774
03-16-2003, 01:22 AM
Hi again!

First... where can I find Carmack's plan?

Can someone explain that again :

1. infinite light and infinite eye (DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP)
2. infinite light and local eye (DOT_PRODUCT_REFLECT_CUBE_MAP)
3. local light and infinite eye (DOT_PRODUCT_REFLECT_CUBE_MAP, if you swap E and L in Phong formula)

Especially the first and last ones, because the concept of an infinite eye doesn't point me to anything I can imagine... except plain 2D (which is not the case here).

I want to achieve a correct highlight with an infinite light and a local eye. You said this can be done perfectly in texture shaders, but I don't think so, because I still haven't seen an NVIDIA demo that achieves good precision on large polygons... can you point me to one?

And about spherical coordinates, can someone point me to a previous thread about that? I tried to find it, but I didn't succeed.

I also think that much of the wrong highlight comes from the interpolation's lack of precision, and from interpolating separately the position and a vector which itself involves the position (I still have to think about that). Maybe on future hardware it will be possible to compute the real half-angle or reflected vector per pixel, from the 3D position in the raster...

I still think that spherical coordinates could give more precision, or at least more than normalization in the combiners (I still have to think about normalization cubemaps), but I don't really know if it is possible. Why? Only because I have a BIG lack of information about the card and the way it interpolates coordinates... and exactly how the perspective correction is done (not in software, but on a GeForce3/4), because it would be impossible to interpolate spherically without that information (not only a vector, but a vector + angle or something similar, in a way I still don't know).

And one more thing... is linear interpolation of scalar values across the raster always done with perspective correction? I'm assuming this is true for a GeForce3 and up...

Thanks all!

vincoof
03-17-2003, 10:58 AM
jwatte, I mean it's not possible to use HILO textures in object space because I assume the GeForce4 (the card the original poster is using) doesn't support ARB_fragment_program, and because I don't know of any 'reasonable' operation in a texture shader or register combiners that could transform a coordinate per pixel from object space to tangent space or vice versa.

jorge, the famous Carmack "dot plan" can be found in many places on the web, for instance here: http://www.webdog.org/plans/1/

An infinite eye involves the assumption that all vectors from the eye to any point in the scene are collinear, i.e. collinear with the main camera direction. It's a good approximation except for points that are very close to the camera. By default, OpenGL has GL_LIGHT_MODEL_LOCAL_VIEWER set to FALSE, which means that it computes the lighting equations with a single eye-to-point vector: the Z axis in eye space.
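
(If you want the local-viewer behaviour in fixed-function OpenGL, the switch is a one-liner; minimal sketch:)

    /* Ask OpenGL for true per-vertex eye vectors instead of the default
       infinite-viewer (0,0,1) approximation. */
    glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);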

[This message has been edited by vincoof (edited 03-17-2003).]

jorge1774
03-17-2003, 10:21 PM
Hi!

Regarding local viewer or not in OpenGL: is the difference very noticeable on big (near-the-screen) polygons?

Some people say that with an infinite viewer you can get perfect per-pixel lighting on a GeForce3, but I still don't know why or how... and most importantly, how you can do that and still expect a highlight that moves across a polygon...

Thanks.

vincoof
03-17-2003, 11:45 PM
Local viewer or not, the difference is not really *noticeable* as long as your whole scene is rendered using the same model. If half of your objects use a local viewer and the other half an infinite viewer, then yes, it will be noticeable.

It is possible on GeForce3 just as described in the posts above: if your light OR your eye is infinite, then GF3/4 hardware can do it.

[This message has been edited by vincoof (edited 03-18-2003).]

jorge1774
03-18-2003, 12:37 AM
Hi.

If my light is infinite and my eye is local, there is a BIG distortion of the specular highlight on big polygons near the screen.

I can't talk about an infinite eye, because I haven't tested it yet.

See you.

jorge1774
03-18-2003, 12:57 AM
Hi!

With an infinite eye and an infinite light, what I get is a flat surface, at least when the normals are nearly flat on my polygon.

Nvidia uses very specific objects, nowhere near what real-world games use (I think).

See you.

vincoof
03-18-2003, 03:35 AM
Infinite light + local eye should give almost the same result as infinite eye + local light.

The distortion cannot be corrected on GF3/4-class hardware, according to Carmack's .plan quoted above. At best you can tessellate your model. At worst you can make sure that your camera never displays the low-tessellation geometry over a big part of the screen.

MZ
03-18-2003, 09:50 AM
Infinite light + local eye should give almost the same result as infinite eye + local light.
Aren't we confusing something? For me, "infinite light" == "directional light", and "local light" == "point light". So "local vs. infinite" with respect to the eye is only a quality vs. performance tradeoff, but with respect to the light it is a completely different scene object and effect. I don't think the configurations you mentioned can produce comparable results.

However, local vs. infinite has the same mathematical consequences for both the light and the eye in the Phong formula, because they are symmetrical. So there might be some chance you are right, though. Have you tested it?

When I was testing local lights in an indoor environment, I found infinite eye + local light not acceptable. When the camera was static, the rendered scene looked good (the cheat was hard to detect). But when the camera rotated, the highlights on the walls moved in a very unnatural way. I tried another cheat: instead of always using the (0,0,1) vector as the eye direction, I used the vector pointing from the eye to the light source. The result was surprisingly good: the highlights stopped moving while the camera rotated, and everything looked like a true local eye + local light. Unfortunately, it all went bad when I moved closer to the light or turned my back to it.


The distortion cannot be corrected on GF3/4-class hardware, according to Carmack's .plan quoted above.
Actually, I managed to get local light + local eye with the texture shaders' reflection ops and acceptable precision, but it was too slow to be practical (2 passes with *slow* dependent reads plus a CopyTexImage in between, and all this for the specular term only).

Here is example: http://mz_000.tripod.com/images/specularGF3.jpg

The specular exponent here is set to 32; some precision is lost, but it is still much higher than when the exponent is calculated in the RCs. Maybe at up to 640x480, with up to 3 lights and an efficient RTT implementation (CopyTexImage eats 35% of the framerate now), it would have practical value, but for now I have just abandoned it. Or maybe in a special rendering path for Doom3 running at 320x200, to get more of the look and feel of the original Doom ;)

MZ
03-18-2003, 09:53 AM
jorge, I was under the impression you needed local eye + local light (you were the one who mentioned Doom3 first in this thread), but if you say you can accept infinite lights, then you are lucky: you *can* have 100% geometrically correct, polygon-size-independent specular highlights, free of L-shaped artifacts. Just abandon the half-angle and do Phong instead, either in RC or in TS.

If you choose TS, you can also have smooth, high-exponent specular, impossible to get with RC only. See the "Issues" section of the NV_texture_shader spec; it gives hints about usage, especially the one titled "Does this extension support so-called 'bump environment mapping'?".

About NV demos: you didn't see any, because they use the half-angle for specular lighting. They can afford it because of their highly tessellated models, which hide the incorrectness you want to avoid. See any NV demo showing bump-mapped reflections; they use DOT_PRODUCT_REFLECT:
- dot_product_reflect
- dot_product_reflect_torus
- bumpy_shiny_patch
All you have to change is to replace the environment cube map with a cube map containing the specular term, and in the VP apply a rotation of that cube map from world space towards the light source.
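
(A hedged sketch of the per-texel value of such a "specular term" cube map, with the lobe baked around a fixed +Z axis that the vertex program then rotates toward the light; the function name is mine, and the per-texel direction is normalized while baking, as in the cube map example earlier in the thread:)

    #include <math.h>

    /* Value stored for a cube map texel whose (unit) direction is d:
       a Phong lobe around the +Z axis. */
    float specular_lobe(const float d[3], float exponent)
    {
        float c = d[2];                  /* dot(d, (0,0,1)) */
        return c > 0.0f ? powf(c, exponent) : 0.0f;
    }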

vincoof
03-18-2003, 10:08 AM
Originally posted by MZ:
Aren't we confusing something? For me, "infinite light" == "directional light", and "local light" == "point light". So "local vs. infinite" with respect to the eye is only a quality vs. performance tradeoff, but with respect to the light it is a completely different scene object and effect. I don't think the configurations you mentioned can produce comparable results.
If you compare local light + infinite eye with infinite light + local eye, you get very similar results, and probably "exact" results if the vertex-to-eye and vertex-to-light vectors have the same length.


Originally posted by MZ:
However, local vs. infinite has the same mathematical consequences for both the light and the eye in the Phong formula, because they are symmetrical. So there might be some chance you are right, though. Have you tested it?
Yes, this is due to the symmetry of the vectors. Unfortunately, when I tested it I didn't look at the "big low-tessellated polygon" case, so I can't say whether the specular highlight distortion is the same, but according to the equations I think the distortion should be similar too.


Originally posted by MZ:
I tried another cheat: instead of always using the (0,0,1) vector as the eye direction, I used the vector pointing from the eye to the light source. The result was surprisingly good: the highlights stopped moving while the camera rotated, and everything looked like a true local eye + local light. Unfortunately, it all went bad when I moved closer to the light or turned my back to it.
I guess the result "felt" natural for the special case of your scene, but if you had tried another scene it would probably have gone wrong.
Such tricks are very useful when you have great control over the camera location: you can fake the physical equations and "give the feeling" of a natural scene pretty easily, but in a more general case it never succeeds.


Originally posted by MZ:
Here is example: http://mz_000.tripod.com/images/specularGF3.jpg
Can't load the image. "Hosted by Tripod"


Originally posted by MZ:
Or maybe in a special rendering path for Doom3 running at 320x200, to get more of the look and feel of the original Doom ;)
ROFL

MZ
03-18-2003, 10:51 AM
Can't load the image. "Hosted by Tripod"

:( I uploaded it to davepermen's site.
davepermen, I hope you don't mind?
http://www.itstudents.ch/users/dave/free/files/specularGF3.jpg



[This message has been edited by MZ (edited 03-18-2003).]

vincoof
03-18-2003, 12:18 PM
Thanks.
It looks pretty good.
Do the walls, floor and ceiling have a single quad each? If so, the specular highlight is really impressive.
The framerate seems a bit low, but I guess it can be improved as you described.

jorge1774
03-19-2003, 01:53 AM
Hi!

The dot_product_reflect and dot_product_reflect_bump shaders can't support a decal texture in 1 pass... so you need 2 passes, and I still need the flexibility to change my light's ambient, specular and diffuse per light, so I will discard the cubemap option.

About the reflection vector: I have computed it, and I still can't see a really big difference from the half vector on large polygons... (And I'm talking about an INFINITE LIGHT and a LOCAL EYE.)

See you, and thanks. (Your image looks quite good... ;)

jorge1774
03-19-2003, 03:19 AM
Hi!

I've tested the reflection vector WITH AN INFINITE LIGHT and a LOCAL EYE... and... IT LOOKS REALLY BAD on large polygons near the screen...

I'm sorry... but I still don't know how you can make it perfect on large polygons...

See you.

MZ
03-19-2003, 08:00 AM
Do the walls, floor and ceiling have a single quad each?
Yes, they are all single quads.

vincoof
03-19-2003, 11:36 AM
Originally posted by jorge1774:
I'm sorry... but I still don't know how you can make it perfect on large polygons...

I'm afraid you need fragment programs to get a perfect result. With a GeForce4 you either have to live with the low-quality result, or tessellate your geometry, or (maybe) do it with more passes, but I haven't figured out how to do that at a reasonable framerate.
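
(For completeness, an editorial sketch of the fragment-program route on later hardware; the exponent and texture bindings are arbitrary, error checking is omitted, and the ARB entry points are assumed to have been obtained via wglGetProcAddress/glXGetProcAddress:)

    #include <GL/gl.h>
    #include <string.h>

    /* An ARB_fragment_program that renormalizes the interpolated half-angle
       per pixel before the specular dot product and exponentiation. */
    static const char *fp =
        "!!ARBfp1.0\n"
        "TEMP n, h;\n"
        "TEX n, fragment.texcoord[0], texture[0], 2D;\n"  /* normal map    */
        "MAD n, n, 2.0, -1.0;\n"                          /* expand [0,1]  */
        "DP3 h.w, fragment.texcoord[1], fragment.texcoord[1];\n"
        "RSQ h.w, h.w;\n"
        "MUL h.xyz, fragment.texcoord[1], h.w;\n"         /* normalize H   */
        "DP3_SAT h.w, n, h;\n"                            /* N.H, clamped  */
        "POW h.w, h.w, 32.0;\n"                           /* exponent 32   */
        "MOV result.color, h.w;\n"
        "END\n";

    void load_specular_fp(GLuint prog)
    {
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(fp), fp);
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
    }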

raverbach
03-20-2003, 04:28 AM
Hey MZ
Niiice pic! The whole scene is very beautiful, the bumps are really nice, and the specular... well... it shines :)

Congrats! :)

raverbach
03-20-2003, 04:31 AM
Simple question: where did you get the base texture and maps for the scene? Did you use a plugin like NVIDIA's Photoshop one to create the normal map?
Thanks!

jorge1774
03-20-2003, 04:38 AM
Hi.

I tried to do it with texture shaders, and it looks really better, but it takes 2 render passes... so I'd better forget it...

See you.

MZ
03-20-2003, 08:10 AM
I "borrowed" the textures from Humus demo "Shadows That Rocks".