Radeon dynamic per-pixel lighting demo

Hi guys!

I’ve written a small demo of volumetric textures in action. It renders dynamic per-pixel lighting with multiple lights using a volumetric texture and dot3 bumpmapping. Cool, huh?
Only Radeon cards will be able to run it at this time.

It’s available here for anyone who wants to try it.

neat!

This looks great, Humus!
Way to go

hm… the demo looks cool ( the one of your game… i am a non-radeon user so i can’t take a look at the other one… would be cool if you could support both… perhaps you can contact me and i’ll do the gf2 version ( and the gf3 one automatically then… ) )

is it in one pass or two? should be possible in one, no?
tex0 = attenuation3d
tex1 = normalmap
tex2 = normalizedlightdircube

right?
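
in code, what i have in mind is roughly this ( an untested sketch, with the units reordered so plain ARB_texture_env_combine + ARB_texture_env_dot3 is enough and no crossbar is needed… atten3d / normalmap / normcube are placeholder texture names ):

// unit 0: the normal map just passes through
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, normalmap);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

// unit 1: dot3 against the normalization cube map ( dot3 expands both operands itself )
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normcube);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);

// unit 2: modulate the dot with the 3d attenuation volume
glActiveTextureARB(GL_TEXTURE2_ARB);
glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, atten3d);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);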

But doesn’t the GF2 lack support for volumetric textures?
(Ok you could do it without them, but not that efficient)

Lars

it’s not that bad… you can do it with 2 textures, one 1d and one 2d…

you need 2 passes, one for the attenuation and one for the dot product, because you have too few textures in one pass ( on the gf3 you can do it in one… with the same settings, just tex0 = 1d and tex1 = 2d and the rest shifted 1 tex )

gf2:

pass0:
tex0 = 1d z-attenuation
tex1 = 2d xy-attenuation

pass1:
tex0 = normalizedpointtolight
tex1 = normalmap

but depending on my other question about the clamping, it might be possible to do it without any attenuation texture… meaning in one pass, combined with pass1…
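
as a sketch, pass1 could multiply its dot product onto the attenuation that pass0 left in the framebuffer via blending ( untested, using the register combiners the gf2 has anyway… pass0 itself just modulates the 1d and 2d maps with standard GL_MODULATE ):

glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

// spare0 = expand(tex0) dot expand(tex1)  ( L dot N )
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// final output = spare0 ( A*B with B forced to 1 )
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO, GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

// multiply onto pass0's attenuation
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);   // dst = src * dst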

Just read your question.
But it should be impossible to do real per-pixel lighting (I think that’s what you want to do) by passing the vertex position and light position to the combiners and then computing the correct lighting values there (which is what you currently do at the vertex level), because the combiner computations are done with only 9-bit precision, which is not enough for such complex calculations.

But there is a way of doing the lighting in one pass with dot3, using only one texture.
I do this in my engine; it works the following way:

1. Per vertex, compute the light direction vector in local polygon coordinates (for the dot3 bumpmapping).
2. Use the x and z coords of the transformed light to generate texture coordinates for the attenuation map (just a 2D texture).
3. Compute the z-distance from the light to the vertex, using the y component of the transformed light.
4. Encode the dot3 light vector into the primary color's RGB part.
5. Put the distance into the alpha part.

The code for this looks like this:

// precompute some values
float rangeMult = 1.0f / light->Range;
float rangeMultHalf = rangeMult * 0.5f;

// go through all vertices
for(int iV = 0; iV < Mesh->VertexCount; iV++)
{
    VertexFormatDot3 *vertex = &vfd[iV];
    D3DXVECTOR3 lightDir = lightCoord - vertex->point;
    D3DXVECTOR3 Diffuse;

    // transform the light into the local coordinate system of the current vertex (polygon)
    Diffuse.x = -D3DXVec3Dot(&lightDir, &vertex->s);
    Diffuse.z = -D3DXVec3Dot(&lightDir, &vertex->t);
    Diffuse.y = D3DXVec3Dot(&lightDir, &vertex->sXt);

    // compute texture coordinates for the 2D attenuation map
    uvTexs[iV].x = 0.5f + (Diffuse.x * rangeMultHalf);
    uvTexs[iV].y = 0.5f + (Diffuse.z * rangeMultHalf);

    // compute the remaining distance attenuation along z
    float det = fMax(0.0f, 1.0f - (Diffuse.y * rangeMult));

    D3DXVec3Normalize(&Diffuse, &Diffuse);

    // bias into [0,1] so it can easily be expanded again in the combiners
    Diffuse = (Diffuse + D3DXVECTOR3(1.0f, 1.0f, 1.0f)) * 0.5f;

    // put the light vector and the distance into the color value
    LightVector2Dword(&Mesh->colorVals[0][iV], &Diffuse, det);
}

Now you have all the per-vertex information. In the combiners you do your normal bumpmapping and modulate it with the second texture (the attenuation map) and with the alpha value from the primary color. My combiner setup looks like this:

// the constant color holds the color of the light source
glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, (float*)&c1);

// combiner 0: spare0 = expand(tex0) dot expand(primary)  ( N dot L )
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// combiner 1: spare1 = distance (primary alpha) * attenuation map (tex1),
//             spare0 = (N dot L) * light color
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_C_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_D_NV, GL_CONSTANT_COLOR0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_ALPHA);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_SPARE1_NV, GL_SPARE0_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

// final combiner: output = spare1 * spare0  ( attenuation * lit color )
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE1_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);

Of course the bumpmapping is not of the highest quality, since you don’t use a normal map, but I haven’t had problems with artifacts yet.

Lars

[This message has been edited by Lars (edited 03-25-2001).]

just a stupid comment while reading your code snippets… looks like you copied them directly out… you use d3dx in your opengl code… funny

to make your code work i need the s and t vectors of the vertex, right? ( and sXt, which is the normal… )

now how do you calculate them? i wrote a code snippet in the post Binormals And Tangents ( down somewhere ), but i haven’t had the time to try it yet, so i’m interested in how you calculate them…

for the ati freaks out there… i love your texture3d, do you see the stress we have without it? but anyways, i like the register combiners on the other hand, great stuff too… why oh why doesn’t the geforce3 have texture3d… it would be perfect ( with 4 texture stages and all… wow… )

tex0 = attenuation3d
tex1 = normalmap
tex2 = normalizedpointtolightCUBE
tex3 = diffusemap

registercombiners: tex0 * tex3 * ( tex1 dot tex2 )

simple it is, simple and precious ( when you use HILO for tex1, even more precious… )
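
in the registercombiners that whole thing would be just ( a sketch, obviously untested since no card has 4 texture stages AND texture3d… ):

glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

// combiner 0: spare0 = tex1 dot tex2 ( normal dot light dir )
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV, GL_TEXTURE1_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE2_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

// combiner 1: spare0 = spare0 * tex0 ( attenuation )
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV, GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB, GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV, GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

// final combiner: out = spare0 * tex3 ( diffuse map )
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_TEXTURE3_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO, GL_UNSIGNED_IDENTITY_NV, GL_RGB);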

Originally posted by davepermen:
it’s not that bad… you can do it with 2 textures, one 1d and one 2d…

you need 2 passes, one for the attenuation and one for the dot product, because you have too few textures in one pass ( on the gf3 you can do it in one… with the same settings, just tex0 = 1d and tex1 = 2d and the rest shifted 1 tex )

gf2:

pass0:
tex0 = 1d z-attenuation
tex1 = 2d xy-attenuation

pass1:
tex0 = normalizedpointtolight
tex1 = normalmap

but depending on my other question about the clamping, it might be possible to do it without any attenuation texture… meaning in one pass, combined with pass1…

I do it with one pass per light; I don’t think you can do it with multiple lights if you need more than one pass per light.

Tex0 = attenuation adjusted normalmap.
Tex1 = DOT3 bumpmap

For each polygon I have a matrix that defines a transform for the texture coords, so texture coordinate generation does all the work for me; I only need to load the predefined texture matrix for the polygon. Out of the transform I get a 3d coord. If I did this without 3d textures (but kept my approach), I’d need to do all that work myself and then finally choose one 2d slice and do a texture bind for each light on each polygon, which is not efficient.
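
Roughly what that setup looks like (a sketch; building the actual per-polygon matrix is the real work, and polyTexMatrix is just a placeholder name):

// object-linear texgen feeds the raw vertex position into the texture matrix
static const float sPlane[] = {1, 0, 0, 0};
static const float tPlane[] = {0, 1, 0, 0};
static const float rPlane[] = {0, 0, 1, 0};
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_R, GL_OBJECT_PLANE, rPlane);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

// per polygon: the precomputed matrix maps the position into the
// [0,1]^3 attenuation volume centered on the light
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(polyTexMatrix);
glMatrixMode(GL_MODELVIEW);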

Originally posted by davepermen:

for the ati freaks out there… i love your texture3d, do you see the stress we have without it? but anyways, i like the register combiners on the other hand, great stuff too… why oh why doesn’t the geforce3 have texture3d… it would be perfect ( with 4 texture stages and all… wow… )

Yeah, I was really disappointed; I’d love to have a GF3 with 3d textures … I hope ATi can bring out some damn good hardware for the Radeon II, and some good drivers and some great developer support DAMNIT!

yeah, the first time a REAL programmable pixel pipeline, like the vertex_program, with the same instructions ( and no clamping to [0,1] or [-1,1] while processing… )

would be so nice

(oh, btw, several loops in the program, or a txl instruction, whatever… some resampling method…)

dreaming

>now how do you calculate them? i wrote a
>code snippet in the post Binormals And
>how you calculate them…
http://www.angelfire.com/ab3/nobody/calc_tangent.c

compiles all by itself; just make sure to define the vertex structure in your code base, build that structure from your poly info, then plug that in…
the binormal is just the cross between the normal and tangent vector.
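
the math in there boils down to roughly this ( a from-memory sketch, not the exact code in the file ):

typedef struct { float x, y, z; } vec3;

/* tangent of a triangle (p0,p1,p2) with texcoords (u0,v0),(u1,v1),(u2,v2) */
vec3 calc_tangent(vec3 p0, vec3 p1, vec3 p2,
                  float u0, float v0, float u1, float v1,
                  float u2, float v2)
{
    vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = u1 - u0, dv1 = v1 - v0;
    float du2 = u2 - u0, dv2 = v2 - v0;
    /* guard against degenerate uv mappings in real code */
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    vec3 t = { (e1.x * dv2 - e2.x * dv1) * r,
               (e1.y * dv2 - e2.y * dv1) * r,
               (e1.z * dv2 - e2.z * dv1) * r };
    return t; /* normalize it, then binormal = cross(normal, tangent) */
}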

btw, you need to use the alpha channel on the diffuse map or your surfaces come out looking kind of funky.

here is an example with no gloss map (alpha on the diffuse) http://www.angelfire.com/ab3/nobody/temp/glossneeded.jpg

here is a scene with the channel changes in… http://www.angelfire.com/ab3/nobody/lugi1.jpg

like, the alpha channel is the gloss mask on that diffuse texture…

more info about it at http://www.angelfire.com/ab3/nobody/pplRendering.html

nothing special…
almost done with new edge tracking code though.

jeeesh, i don’t know why i can’t stop focusing on 3D
the game i’m doing is 2D

-akbar A.

[This message has been edited by kaber0111 (edited 03-25-2001).]

>you need 2 passes, one for the attenuation
>and one for the dot product, because you have
>too few textures in one pass ( on the gf3
>you can do it in one… with the same
>settings, just tex0 = 1d and tex1 = 2d and

I think someone also mentioned using dependent texture reads …
if you use these, you fragment your consumer base one more time.

and from what i heard, dependent texture reads are really not all that fast on the geforce3s.

like Doom is not using them, at least as of a few weeks ago…
not sure if it is now…

but honestly, i would urge caution in this area…
move to dot3 lighting, and use multiple passes to get the effect.
and you’re going to want to code support for the extra texture units…

but the dependent texture reads…
i’d say no, at least not yet.

okay, back to working on my 2D game

-akbar A.

>geforce3 a texture3d… it would be perfect
>( with 4 texture stages and all… wow… )

use 2 textures to do attenuation.
3D textures are an expensive resource.

about the ati way:

tex0 = 3d
tex0.rgb = normalizeddirectionvector
tex0.alpha = distanceattenuation

tex1 = 2d
tex1.rgb = normal
tex1.alpha = somethingelse

tex2 = 2d
tex2.rgb = ambienttexture
tex2.alpha = transparency

but i think you need register combiners to calculate this

Originally posted by kaber0111:
>geforce3 a texture3d… it would be perfect
>( with 4 texture stages and all… wow… )

use 2 textures to do attenuation.
3D textures are an expensive resource.

3D textures don’t need to be expensive. In my demo I use a 24-bit 64x64x64 texture, which is 768 KB (64³ texels at 3 bytes each) … or 1 MB if it’s expanded to 32-bit. I could use a 32x32x32 too, slightly more banding but not that much … and it comes at the huge price of 96 KB.

neat
> In my demo I use a 24-bit 64x64x64 texture,
>which is 768 KB … or 1 MB if it’s expanded
>to 32-bit. I could use a 32x32x32 too,
>slightly more banding but not that much …
>comes at the huge price of 96 KB.

maybe you could give us screenshots with different 3d texture sizes…
so we can see the differences…

To davepermen :
Look back in the thread; my code is right above yours.

Lars

[This message has been edited by Lars (edited 03-25-2001).]

Originally posted by kaber0111:
use 2 textures to do attenuation.
3D textures are an expensive resource.

I say use a single 2D texture to do distance attenuation and thus free up a texture unit. I know, calculating 3D distance attenuation from only a 2D texture seems impossible, but I assure you it is quite possible. I described the technique on my site:
http://www.ronfrazier.net/apparition/research/advanced_per_pixel_lighting.html

A quick summary of the technique:
When doing bumpmapping, you need to move the light into the tangent space of the poly. Since you are already performing this calculation, you might as well get everything you can out of it. So, you have a tangent-space light vector (tx, ty, tz). Next, scale this by the light's radius to get (stx, sty, stz). You can use (stx, sty) to generate your (u, v) for the 2D radial map. Then you can use stz as a constant distance from the light to the polygon (since this is in tangent space for the poly, the stz distance will be the same across the entire poly). Just throw stz into your primary and secondary color, square it in the combiners, and you essentially have the value you would have pulled from a 1D map.