Per-pixel lighting and bumpmapping

Hello!

I’m trying to understand (and implement, but that’s for later) how per-pixel
lighting works, and I’m having some problems because I’ve found no tutorial
so far that explains it from the basics.

What I’ve found out so far:

The lighting equation OpenGL uses (in simplified form, considering only
diffuse lighting, no specular or ambient):

color = attenuation * max(n dot l, 0) * lightcolor * materialcolor

where n is the surface normal and l is the light vector.
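For reference, here’s the same equation written out as a tiny C function (vec3 and dot3 are just made-up scaffolding, not from any library):

[code]
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* color = attenuation * max(n dot l, 0) * lightcolor * materialcolor,
   evaluated per color channel */
vec3 diffuse(float attenuation, vec3 n, vec3 l, vec3 lightcolor, vec3 materialcolor)
{
    float ndotl = dot3(n, l);
    if (ndotl < 0.0f) ndotl = 0.0f;   /* the max(..., 0) self-shadowing clamp */
    vec3 c;
    c.x = attenuation * ndotl * lightcolor.x * materialcolor.x;
    c.y = attenuation * ndotl * lightcolor.y * materialcolor.y;
    c.z = attenuation * ndotl * lightcolor.z * materialcolor.z;
    return c;
}
[/code]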

Now, to do this stuff per-texel, we must have all of it for each
texel:

Attenuation should be no problem, it could be stored in a texture, as could
the normal map (there are some problems here), but how the heck do you
get a per-texel light vector? You can calculate it per-vertex, but I
really have no idea how to do it per-texel.

Then there is the problem that graphics hardware cannot do this max/min
thing, so how can I solve it using only multiplies and adds?
As far as I understand, it’s important for self-shadowing: if (n dot l) < 0,
the surface is facing away from the light and should therefore not be lit.

Now, the problem with the normal map is that the normals must be transformed
to do lighting in eye space, and currently only the GeForce3 features a
texel matrix to do that.

Most papers then write that lighting must therefore be done in
surface-local space, which means every vertex must have a unique matrix (which I do not understand) that transforms the vertex to (0 0 0) with its normal pointing along (0 0 1). But then I do not understand how they can say the normal points along (0 0 1) if it’s fetched from a texture. Anyway, the above is what I’ve understood so far, and hopefully someone can explain the rest of the concept.

Thanks in advance,
-Lev

Originally posted by Lev:
Hello!
I’m trying to understand (and implement, but that’s for later) how per-pixel
lighting works, and I’m having some problems because I’ve found no tutorial
so far that explains it from the basics.

I’d recommend “A Practical and Robust Bump-mapping Technique for Today’s GPUs” by Mark Kilgard. It has everything you need to understand per-pixel dot3 bumpmapping in theory (and in practice with GF cards).
[b]

but how the heck do you
get a per-texel light vector? You can calculate it per-vertex, but I
really have no idea how to do it per-texel.
[/b]

You can get it per-vertex and then interpolate it between vertices as color values.
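Roughly like this, a sketch in fixed-function GL (tangentSpaceLight() is a hypothetical helper that returns the normalized light vector at a vertex, already rotated into that vertex’s tangent space):

[code]
#include <GL/gl.h>

typedef struct { float x, y, z; } vec3;

extern vec3 tangentSpaceLight(vec3 pos, vec3 lightpos);  /* hypothetical */

void emitVertex(vec3 pos, vec3 lightpos)
{
    vec3 l = tangentSpaceLight(pos, lightpos);
    /* range-compress [-1,1] -> [0,1] so the vector survives the trip
       through the color interpolator; it gets expanded back per-pixel */
    glColor3f(0.5f + 0.5f * l.x, 0.5f + 0.5f * l.y, 0.5f + 0.5f * l.z);
    glVertex3f(pos.x, pos.y, pos.z);
}
[/code]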
[b]

Then there is the problem that graphics hardware cannot do this max/min
thing, so how can I solve it using only multiplies and adds?
[/b]

Yes, GF hardware can do it all right. And I don’t think you can do realtime dot3 bumpmapping with older NV cards (I don’t know about ATI cards and others). However, there are some 2D approaches, like texture embossing, which can be used with older cards.
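As for the max itself: the final combiner’s unsigned input mapping clamps negative values to zero, which is exactly max(n dot l, 0). A minimal sketch with NV_register_combiners (texture 0 is the normal map, primary color carries the interpolated light vector, constant color 0 holds the light color; the extension’s function pointers are assumed to be resolved already):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* NV_register_combiners tokens */

void setupDot3Combiners(const GLfloat lightcolor[4])
{
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
    glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, lightcolor);

    /* combiner 0: spare0 = expand(tex0) . expand(primary color)
       = (normal map) dot (interpolated light vector), in [-1,1] */
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV,
                       GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

    /* final combiner: out = unsigned(spare0) * lightcolor;
       the unsigned mapping clamps negatives to 0 -- that is max(n.l, 0) */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_CONSTANT_COLOR0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);

    glEnable(GL_REGISTER_COMBINERS_NV);
}
[/code]

The material color multiply is left out to keep it short; Kilgard’s paper shows the full setup.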
[b]

Now the problem with the normal map is that the normals must be transformed
to do lighting in eye space, and currently only the GeForce3 features a texel matrix to do that.
[/b]

Why “must”? You can do lighting in any space you want. If you calculate lighting in object space, you supply the normal, light, half-angle, and other vectors in object space. The same goes for surface-local space, eye space, or any other space.
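For example, with a rigid model transform (rotation r plus translation t), the inverse is just the transposed rotation, so moving the light into object space is cheap (a sketch with my own helper types):

[code]
typedef struct { float x, y, z; } vec3;

/* object-space light position = R^T * (world-space light - translation),
   valid when the model transform is rotation + translation only */
vec3 lightToObjectSpace(vec3 worldLight, const float r[3][3], vec3 t)
{
    vec3 d = { worldLight.x - t.x, worldLight.y - t.y, worldLight.z - t.z };
    vec3 obj;
    obj.x = r[0][0]*d.x + r[1][0]*d.y + r[2][0]*d.z;
    obj.y = r[0][1]*d.x + r[1][1]*d.y + r[2][1]*d.z;
    obj.z = r[0][2]*d.x + r[1][2]*d.y + r[2][2]*d.z;
    return obj;
}
[/code]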
[b]

…anyway, the above is what I’ve understood so far, and hopefully someone can explain the rest of the concept.
[/b]

There are a lot of people here who can give you a detailed explanation of the topic. I think you’ll get some great replies. However, to understand the matter completely you’ll have to learn more than you can be told here, so search nVidia’s site for the paper mentioned above.
Alexei.

P.S. Don’t be afraid of the mathematics there. It’s not as complex as it seems at first.

Ok… first, I’m no expert; actually, I’ve never even programmed per-pixel operations.

My understanding of your problems is:

  1. It is not possible to specify the light vector per-texel. You must do it per-vertex and let the rasterizer interpolate it. This causes a problem: if you specify normalized light vectors for each vertex and the light is close to the surface, then for points near the light the interpolated vector is no longer normalized. So you must use a normalization cube map, which renormalizes the interpolated light vector per-texel (see the sketch after this list).

  2. For the max thing, I think it is possible with register combiners. Check their spec: some operations are clamped to the [0, 1] range and other operations are clamped to the [-1, 1] range.

  3. I don’t think you need a texture matrix. However, you must supply the light vector in tangent space (that’s probably what you call surface-local space). Tangent space is the space formed by the vertex normal and the directions of the texture U/V axes.
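For point 1, here’s a rough sketch of how one face of such a normalization cube map can be filled (the face orientation table follows the cube map spec; ARB_texture_cube_map tokens assumed):

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_texture_cube_map tokens */
#include <math.h>
#include <stdlib.h>

/* fill one face of a normalization cube map: each texel stores the
   normalized direction toward it, range-compressed into RGB */
void buildNormalizationFace(GLenum face, int size)
{
    unsigned char *texels = malloc(size * size * 3);
    for (int t = 0; t < size; t++) {
        for (int s = 0; s < size; s++) {
            float sc = 2.0f * (s + 0.5f) / size - 1.0f;   /* texel -> [-1,1] */
            float tc = 2.0f * (t + 0.5f) / size - 1.0f;
            float x, y, z;
            switch (face) {   /* face orientations as in the cube map spec */
            case GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB: x =  1;  y = -tc; z = -sc; break;
            case GL_TEXTURE_CUBE_MAP_NEGATIVE_X_ARB: x = -1;  y = -tc; z =  sc; break;
            case GL_TEXTURE_CUBE_MAP_POSITIVE_Y_ARB: x =  sc; y =  1;  z =  tc; break;
            case GL_TEXTURE_CUBE_MAP_NEGATIVE_Y_ARB: x =  sc; y = -1;  z = -tc; break;
            case GL_TEXTURE_CUBE_MAP_POSITIVE_Z_ARB: x =  sc; y = -tc; z =  1;  break;
            default:                                 x = -sc; y = -tc; z = -1;  break;
            }
            float len = (float)sqrt(x*x + y*y + z*z);
            unsigned char *p = texels + (t * size + s) * 3;
            p[0] = (unsigned char)(255.0f * (0.5f + 0.5f * x / len));
            p[1] = (unsigned char)(255.0f * (0.5f + 0.5f * y / len));
            p[2] = (unsigned char)(255.0f * (0.5f + 0.5f * z / len));
        }
    }
    glTexImage2D(face, 0, GL_RGB8, size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, texels);
    free(texels);
}
[/code]

Call it once per face with the six GL_TEXTURE_CUBE_MAP_*_ARB targets; at runtime you simply look the interpolated vector up in this texture and get it back normalized.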

Check out the advanced per-pixel lighting demo on Nvidia’s site. It does more than what you want (shadowing, specular lighting and all), but it comes with very well-done documentation.

Y.

Originally posted by Ysaneya:
It is not possible to specify the light vector per-texel.

Yes, it’s possible. You can store it in a texture.

Also, search this forum for Michail Bespalov’s question about register combiners.
Alexei.

Interesting, but does it still allow dynamic lights (ones you can move and all)?

Y.

Aah, can someone please explain to me what a binormal vector is? (the one used in the surface-local matrix)

greetings,
-Lev

The binormal is a vector which is:

  1. normal to the normal at the point/face (whatever)
  2. normal to the tangent at the point/face (whatever)

In other words, it’s the normalized cross product of the two vectors, normal and tangent; in code it looks roughly like the sketch below.
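A sketch (vec3 is just a little helper struct, not from any library):

[code]
#include <math.h>

typedef struct { float x, y, z; } vec3;

/* binormal = normalize(cross(normal, tangent)), as described above */
vec3 binormal(vec3 n, vec3 t)
{
    vec3 b = { n.y*t.z - n.z*t.y, n.z*t.x - n.x*t.z, n.x*t.y - n.y*t.x };
    float len = (float)sqrt(b.x*b.x + b.y*b.y + b.z*b.z);
    b.x /= len; b.y /= len; b.z /= len;
    return b;
}

/* tangent, binormal and normal as rows form the surface-local matrix;
   multiplying by it takes an object-space vector (e.g. the light vector)
   into tangent space */
vec3 toTangentSpace(vec3 v, vec3 t, vec3 b, vec3 n)
{
    vec3 r;
    r.x = t.x*v.x + t.y*v.y + t.z*v.z;
    r.y = b.x*v.x + b.y*v.y + b.z*v.z;
    r.z = n.x*v.x + n.y*v.y + n.z*v.z;
    return r;
}
[/code]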

Wait a little, I will upload a nice bumpmapping demo which should explain everything next week, OK? (a tutorial…)

Ah, so the tangent, the normal and the binormal vectors form a Cartesian coordinate system, right? OK, that explains something, thanks for the hint!

-Lev

Originally posted by Ysaneya:
[b]Interesting, but does it still allow dynamic lights (ones you can move and all)?

Y.[/b]

I think that possibility doesn’t mean necessity.

Michail is trying to implement generalized per-pixel BRDF lighting (the original BRDF-based lighting was per-vertex), therefore he has to use a per-pixel-accurate tangent space. He also chose a per-pixel light vector for his computations. However, this vector doesn’t necessarily need to be per-pixel-accurate: pixel accuracy matters when a light source is very close to a large polygon, at a distance comparable to the bump height! I think you can safely use the interpolated light vector for most tasks, especially for games.

As for a per-pixel-specified light vector, yes, it is tricky. You could think of changing it by changing texture coordinates, with a texture matrix or vertex programs, for example.
Alexei.

You can set the light vector via the primary color, but it is more accurate to use cube maps, as in “A Practical and Robust Bump-mapping Technique for Today’s GPUs” (by Mark Kilgard).

Ok, but that is specified per-vertex.

Davepermen, “next week” has started; I’m impatient for the new bumpmapping demo!

Alexei.


Yeah, next week has started… and I’m finally at home… so we will see… Right now I first want to chat with my girl, but I think I’ll have enough time this week to clean up my code so that everyone can understand it… let’s see, let’s see…

more for my girl: http://tyrannen.starcraft3d.net/formygirl
more for opengl: http://tyrannen.starcraft3d.net/

So far, so good…