dot3 bump mapping quality

Hi all! This is my first post!

I’ve been working on dot3 bump mapping with a GeForce4 Ti4200, and I have some questions about the quality of the effect.

So far I’ve only tested dot3 in the combiners with combiner normalization, which I think looks VERY poor on large polygons, and I doubt that this is the method used in games like MotoGP or Doom 3.

Which of these gives the best quality bump mapping?

  1. a vertex program transforming the light vector and half-angle vector to tangent space, then normalization and dot3 in the combiners

  2. a vertex program transforming the light vector and half-angle vector to tangent space, normalization with two cube maps, and dot3 in the combiners

  3. a vertex program transforming the light vector and half-angle vector to tangent space, then dot3 via a texture shader (is normalization required?)

  4. supplying the light vector and half-angle vector per vertex (in tangent space), calculated on the CPU, then combiners + normalization cube maps

I have some questions about texture shaders, because I have seen some NVIDIA demos that use texture shaders without normalization, but I suppose those are NVIDIA-specific demos. Can anyone tell me something about that?

I want to test all the possibilities and I will post my results here as soon as I finish them.

See you.

One more thing…

When I said quality, I meant the quality of the specular highlight, not the diffuse, which is really good.

I think the loss of quality happens because the half-angle vector is calculated in a vertex program, normalized twice in the vertex program, then normalized again in the combiners and squared 4 times… this leaves me with a highlight that looks like an 8-bit color highlight, and nearly linear on big polygons close to the screen…
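To illustrate what I mean, here is a tiny plain-C sketch (just an illustration I put together, not GL code): if you linearly interpolate two already-normalized half-angle vectors across a large triangle, the vector in the middle is far from unit length, and the combiners then have to fix that with very limited precision before the repeated squaring:

#include <math.h>
#include <stdio.h>

static void normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

int main(void)
{
    /* two unit-length half-angle vectors at two far-apart vertices */
    float h0[3] = {  0.9f, 0.0f, 0.4359f };
    float h1[3] = { -0.9f, 0.0f, 0.4359f };
    float mid[3];
    normalize3(h0);
    normalize3(h1);
    /* linear interpolation at the middle of the edge */
    mid[0] = 0.5f*(h0[0]+h1[0]);
    mid[1] = 0.5f*(h0[1]+h1[1]);
    mid[2] = 0.5f*(h0[2]+h1[2]);
    printf("length at midpoint = %f\n",
           sqrtf(mid[0]*mid[0] + mid[1]*mid[1] + mid[2]*mid[2]));
    /* prints roughly 0.44: far from unit length, so the per-pixel
       renormalization and the repeated squaring magnify the 8-bit
       quantization error */
    return 0;
}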

Thanks all.

For high-quality specular highlights, you may use HILO textures, available on NVIDIA hardware (GF3 and up, I think).

There should be no difference between CPU and vertex program vectors.

I would transform the normal vector (read out of the normal map) into light (object) space, instead; this allows you to do per-pixel envmap. This takes 3 dp ops so it’s marginally doable on GF3; doable on Radeon 8500; and expected on anything with ARB_fragment_program :-).

If you get an interpolated value out of your normal map, and it’s not of super-high resolution, you may need to normalize after you read it; something like read->normalize->transform-into-object-space. That level of dependency may need ARB_fragment_program hardware, unless you’re OK with storing intermediates in a render target.
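If it helps, here is a rough sketch of that read->normalize->transform idea as an ARB_fragment_program string (my own sketch, so it only runs on ARB_fragment_program hardware as noted above; it assumes the three rows of the tangent-to-object matrix are passed in texcoord sets 1-3, and it just visualizes the resulting object-space normal as a color):

static const char *normal_to_object_space_fp =
    "!!ARBfp1.0\n"
    "TEMP n, len, nobj;\n"
    /* fetch the tangent-space normal and expand it from [0,1] to [-1,1] */
    "TEX n, fragment.texcoord[0], texture[0], 2D;\n"
    "MAD n, n, 2.0, -1.0;\n"
    /* renormalize, since filtering/interpolation denormalizes it */
    "DP3 len.w, n, n;\n"
    "RSQ len.w, len.w;\n"
    "MUL n.xyz, n, len.w;\n"
    /* rotate into object space with the per-vertex basis rows */
    "DP3 nobj.x, fragment.texcoord[1], n;\n"
    "DP3 nobj.y, fragment.texcoord[2], n;\n"
    "DP3 nobj.z, fragment.texcoord[3], n;\n"
    /* pack back to [0,1] just to look at it */
    "MAD result.color.xyz, nobj, 0.5, 0.5;\n"
    "MOV result.color.w, 1.0;\n"
    "END\n";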

I agree, but then HILO textures are not available in object space. HILO should be used in tangent space only, AFAIK.

Hi!

Why are HILO textures not available in object space? Can you explain that?

I have verified that there is no difference between transforming the light vector to tangent space in a vertex program or on the CPU, but I can see a big difference between normalization in the register combiners and normalization with cube maps.

The cube-map-normalized version gives me more precision in the contour of the specular highlight, but less precision in the color gradient. It looks even more like 8-bit color interpolation than before.

See you.

The dot3 bump mapping used in the Doom III engine, Far Cry and Tenebrae (a Quake modification) is in all cases based on a CPU transformation of the light and half-angle vectors into tangent space, plus either vendor-specific (i.e. NV_register_combiners) or generic hardware (GL_DOT3_RGBA_EXT) combiners.
The specular highlight is mostly done multipass (Carmack said that the R300 and RV35 chipsets can support single-pass ones and achieve great results).
Then they use a pixel shader and the stencil buffer to enhance shadows even further…
But I think your main complaint about dot3 is due to the textures you’re using: you need a normal map and a high-quality texture to make your specular highlights look good and shine…
One question: do you live in Brazil?

To be more specific, I guess that Doom III uses only the ARB2 path and vendor-specific combiner extensions, but Tenebrae uses the generic ARB dot3 extension.

HILO may only be used in tangent space because HILO, as it sounds, has only two components: X and Y. The Z component is then computed automatically, assuming that (X,Y,Z) is a unit vector and that Z is positive (if Z could be negative, there would be up to 2 solutions to the equation).

In tangent space, the Z component is always positive. That’s perfect for HILO’s positive-Z assumption. But in object space, the Z component may be negative as well as positive, so HILO cannot work (unless you have a very special mesh that ensures all perturbed normals have a positive Z in object space).
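In other words (a small plain-C sketch of the reconstruction math, just my illustration of the assumption involved):

#include <math.h>

/* Given the two stored HILO components (x, y), reconstruct z assuming
   (x, y, z) is a unit vector with z >= 0 -- an assumption that only
   holds for tangent-space normals. */
static float hilo_reconstruct_z(float x, float y)
{
    float zz = 1.0f - x*x - y*y;
    return (zz > 0.0f) ? sqrtf(zz) : 0.0f;   /* z = +sqrt(1 - x^2 - y^2) */
}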

Vincoof,

I don’t see the problem. The normal map still stores a tangent-space relative normal. That’s what you fetch out of the texture. Then you use the fragment program to transform that normal into object space – that’s after the texel has been fetched.

Hi!

Thanks for the HILO explanation, but I still don’t know why normalization with a cube map gives me a more accurate highlight yet loses quality in the color scale… any idea?

Thanks!

Hi raverbach.

About GL_DOT3_RGBA_EXT, I still don’t know how to use that, simply because there is no way of normalizing the light vector or half-angle vector per pixel in texture shaders before the DOT3… or is there?

I live in Spain.

See you!

And still more stupid questions…

Is there any way the GeForce4 can interpolate vectors in angular coordinates, rather than as separate r,g,b or s,t,q components?

Thanks all.

About GL_DOT3_RGBA_EXT, I still don’t know how to use that, simply because there is no way of normalizing the light vector or half-angle vector per pixel in texture shaders before the DOT3… or is there?

nVidia has a really good number of demos and papers on tangent-space bump mapping. They explore all the problems of the method and solutions to those problems.

In this case, you should use a “renormalization” cube map. You pass your 3-vector as a texture coordinate; when looked up in this cube map, it produces a normalized color-vector pointing in that direction, which can then be used for your dot product.
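In case it helps, here is a minimal sketch of how such a renormalization cube map can be built (my own sketch, assuming ARB_texture_cube_map enums from glext.h and a face size of 128; texture object binding and filter setup are omitted). Each texel stores the normalized direction pointing towards it, packed into RGB as 0.5*n + 0.5:

#include <math.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_TEXTURE_CUBE_MAP_* enums (ARB_texture_cube_map) */

#define N 128

static void face_direction(int face, float s, float t, float d[3])
{
    /* s, t in [-1, 1]; signs follow the GL cube-map face convention */
    switch (face) {
    case 0: d[0] =  1; d[1] = -t; d[2] = -s; break;  /* +X */
    case 1: d[0] = -1; d[1] = -t; d[2] =  s; break;  /* -X */
    case 2: d[0] =  s; d[1] =  1; d[2] =  t; break;  /* +Y */
    case 3: d[0] =  s; d[1] = -1; d[2] = -t; break;  /* -Y */
    case 4: d[0] =  s; d[1] = -t; d[2] =  1; break;  /* +Z */
    case 5: d[0] = -s; d[1] = -t; d[2] = -1; break;  /* -Z */
    }
}

static void build_normalization_cubemap(void)
{
    static GLubyte texels[N][N][3];
    int face, i, j;

    for (face = 0; face < 6; ++face) {
        for (j = 0; j < N; ++j) {
            for (i = 0; i < N; ++i) {
                float s = 2.0f * (i + 0.5f) / N - 1.0f;
                float t = 2.0f * (j + 0.5f) / N - 1.0f;
                float d[3], len;
                face_direction(face, s, t, d);
                len = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                /* pack the normalized direction into [0,255] RGB */
                texels[j][i][0] = (GLubyte)(255.0f * (0.5f * d[0]/len + 0.5f));
                texels[j][i][1] = (GLubyte)(255.0f * (0.5f * d[1]/len + 0.5f));
                texels[j][i][2] = (GLubyte)(255.0f * (0.5f * d[2]/len + 0.5f));
            }
        }
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face, 0, GL_RGB8,
                     N, N, 0, GL_RGB, GL_UNSIGNED_BYTE, texels);
    }
}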

Hi!

I have read somewhere on the forum that the only way to handle the half vector is to calculate the POINT-TO-LIGHT vector and the POINT-TO-EYE vector per vertex, then calculate the half-angle IN THE COMBINERS and normalize it through an approximation. This seems to be my problem.
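For reference, this is roughly how I compute the per-vertex part on the CPU (just a plain-C sketch with made-up names; the half-angle would then be formed per pixel from the two interpolated vectors):

/* build the tangent-space point-to-light and point-to-eye vectors for one vertex */
static void tangent_space_L_and_E(const float P[3],      /* vertex position */
                                  const float T[3],      /* tangent         */
                                  const float B[3],      /* binormal        */
                                  const float Nrm[3],    /* normal          */
                                  const float lightPos[3],
                                  const float eyePos[3],
                                  float Lout[3], float Eout[3])
{
    float L[3] = { lightPos[0]-P[0], lightPos[1]-P[1], lightPos[2]-P[2] };
    float E[3] = { eyePos[0]-P[0],   eyePos[1]-P[1],   eyePos[2]-P[2]   };

    /* rotate into tangent space: components along T, B, N */
    Lout[0] = L[0]*T[0]   + L[1]*T[1]   + L[2]*T[2];
    Lout[1] = L[0]*B[0]   + L[1]*B[1]   + L[2]*B[2];
    Lout[2] = L[0]*Nrm[0] + L[1]*Nrm[1] + L[2]*Nrm[2];
    Eout[0] = E[0]*T[0]   + E[1]*T[1]   + E[2]*T[2];
    Eout[1] = E[0]*B[0]   + E[1]*B[1]   + E[2]*B[2];
    Eout[2] = E[0]*Nrm[0] + E[1]*Nrm[1] + E[2]*Nrm[2];
}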

I will probably test the reflected light vector in tangent space, which is:

Rx = -Lx;
Ry = -Ly;
Rz = Lz;

And then computing (dot(normalize(R), normalize(ToEye)))^n in the combiners.
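That is just the general reflection formula R = 2(N·L)N - L specialized to the unperturbed tangent-space normal N = (0,0,1); as a tiny plain-C sketch:

/* reflect the tangent-space light vector about N = (0,0,1):
   R = 2(N.L)N - L  =>  (-Lx, -Ly, Lz) */
static void reflect_about_z(const float L[3], float R[3])
{
    R[0] = -L[0];
    R[1] = -L[1];
    R[2] =  L[2];
}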

I hope this gives better results, and helps someone else.

Thanks.

One more thing:

I still don’t know why NVIDIA, in their Cg demos, normalizes the half vector per vertex, because that’s incorrect.

See you.

jorge1774, I’ve read your “half vector in pixel shading” thread too, and I’m afraid you are trying to achieve an effect which is impossible on GF3/4.

Carmack in his plan (2003-01-29) explained it very nicely:

(…)Per-pixel reflection vector calculations for specular, instead of an interpolated half-angle. The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight. Ideally, you want the same final image for a surface regardless of if it is two giant triangles, or a mesh of 1024 triangles. This will not be true if any calculation done at a vertex involves anything other than linear math operations(my highlight). The specular half-angle calculation involves normalizations, so the interpolation across triangles on a surface will be dependent on exactly where the vertexes are located. The most visible end result of this is that on large, flat, shiny surfaces where you expect a clean highlight circle moving across it, you wind up with a highlight that distorts into an L shape around the triangulation line.

About NVIDIA demos and texture shaders: texture shaders let you achieve very good quality (smooth, high-exponent) specular highlights, but only in the following situations:

  1. infinite light and infinite eye (DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP)
  2. infinite light and local eye (DOT_PRODUCT_REFLECT_CUBE_MAP)
  3. local light and infinite eye (DOT_PRODUCT_REFLECT_CUBE_MAP, if you swap E and L in Phong formula)

But what we want is “local light and local eye”, and this is unfortunately impossible with TS.
Nvidia demos look nice because either:

  1. most of them use infinite lights only (so lighting can be done with full precision per-pixel in TS)
  2. they use very highly tessellated models (so lighting can be partially done per-vertex in the VP, because high tessellation can hide distortions caused by interpolation). See the Cg demo “bump_reflect_local_light”, and try to magnify the model as much as possible; then you’ll see something is fake (interpolated) in the specular highlight.

Of course, neither of the 2 above methods can be used in a typical FPS scene. You are screwed; all you can do is low-precision, low-exponent specular as in Doom3.

The graphics cards just interpolate values. They don’t care (much) if they’re vectors, or colors, or bank account balances, per vertex.

If you want to interpolate in spherical coordinates, just stuff in spherical coordinates, and they will get interpolated. However, then it’s up to you to use those spherical coordinates appropriately in your fragment program or register combiners, which may be tricky.

The real quality problem is interpolating normalized vectors vs the precision & range of unnormalized vectors (positions), NOT lerp vs slerp. The fragment-level data types on a lot of hardware lack the precision & range to store local positions accurately. Typically they are used to store normalized vectors, and the interpolation of normalized vectors is simply wrong: not because it’s linear, but because it plain points in the wrong direction during the interpolation, which a linear interpolation of unnormalized position vectors would get right. A spherical interpolation would NOT get this right either. You can compromise by trying to squeeze unnormalized data into the piddly fragment types and scaling or renormalizing after interpolation, but that leads to more quantized vector artifacts etc. Subdivision works because there are more correct normalized vertex vectors to better approximate the correct local point-to-position direction.

What I’ve described is still fundamentally different from a vector spherical interpolation, but that’s OK because a spherical interpolation is not needed and wouldn’t fix this problem.

I think I see why this has been called spherical data, but I think that’s slightly misleading, although I kind of agree it’s spherical in nature since it radiates spherically from the position in 3D. It’s pre-interpolation normalization vs post-interpolation normalization, and there are some intermediate options for this too. The underlying problem is/was the inherent hardware precision and range limits for interpolating fragment triplets. This problem has largely gone away with better hardware, but if you rip off an older vertex program that feeds normalized color vectors to the fragment program, you’ll still be wrong even on the best hardware.

I’m puzzled by anyone calling this “spherical coordinates” though. It’s a 3D position, nothing more; ultimately its linear interpolation doesn’t give you a slerp, it’s a hyperbolic function akin to perspective correction (I think). What ends up on the polygon is a section through a spherical field. It could be done with 3D texture coords and a 3D texture holding a sphere of vector data, but this has been discussed before.


Hi again!

First… where is Carmack’s plan?

Can someone explain these again:

  1. infinite light and infinite eye (DOT_PRODUCT_CONST_EYE_REFLECT_CUBE_MAP)
  2. infinite light and local eye (DOT_PRODUCT_REFLECT_CUBE_MAP)
  3. local light and infinite eye (DOT_PRODUCT_REFLECT_CUBE_MAP, if you swap E and L in Phong formula)

Especially the first and last one, because the only thing the concept of an infinite eye suggests to me is plain 2D (which is not the case here).

I want to achieve a correct highlight with infinite light and local eye, which you said can be done perfectly in texture shaders, but I don’t think so. That’s because I still haven’t seen an NVIDIA demo that achieves good precision on large polygons… can you point me to one?

And about spherical coordinates, can someone point me to a previous thread about that? I tried to find it, but didn’t succeed.

I also think that much of the wrong highlight comes from the lack of precision in interpolation, and from separately interpolating the position and a vector which involves the position itself (I still have to think about that). Maybe on future hardware it will be possible to compute the real half-angle or reflected vector per pixel, through the 3D position in the rasterizer…

I still think that spherical coordinates can give you more precision, or at least more than normalization in the combiners (I have to think about normalization cube maps), but I don’t really know if this is possible. Why? Only because I have a BIG lack of information about the card and the way it interpolates coordinates… and exactly how the perspective correction is done (not in software, but on a GeForce3/4), because it would be impossible to interpolate spherically without that information (not only a vector, but a vector + angle or something similar, in a way I still don’t know).

And one more thing… is the linear interpolation of scalar values in the rasterizer always done with perspective correction? I’m assuming this is true for GeForce3 and up…

Thanks all!