Dynamic lightmapping demo

Can someone explain to me why this works the way it works? It’s from
a dynamic lightmapping demo by http://www.slug-production.be.tf/.

Why does it need two glMultiTexCoord2fARB lines, and how are they
being used together?

void ProcessVertex_DL(float x, float y, float z)
{

    // Texture coordinate on X, scaled by the light radius
    DL_s = (x - Light_X) / (Light_Radius) + 0.5f;

    // Texture coordinate on Y, scaled by the light radius
    DL_ = (y - Light_Y) / (Light_Radius) + 0.5f;

    // Texture coordinate on Z, scaled by the light radius
    DL_t = (z - Light_Z) / (Light_Radius) + 0.5f;

    // First texture unit
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, DL_s, DL_t);

    // Second texture unit
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, DL_, 0.5f);

    // The polygon (the light)
    glVertex3f(x, y, z);

}

Hey,

It’s my website!

To do this we can calculate the distance from the light as:

// Texture coordinate on X, scaled by the light radius
DL_s = (x - Light_X) / (Light_Radius) + 0.5f;

// Texture coordinate on Y, scaled by the light radius
DL_ = (y - Light_Y) / (Light_Radius) + 0.5f;

// Texture coordinate on Z, scaled by the light radius
DL_t = (z - Light_Z) / (Light_Radius) + 0.5f;

For texture unit GL_TEXTURE0_ARB, we need a 2D texture that maps coordinates (s, t) to an intensity value. For texture unit GL_TEXTURE1_ARB, we need a 1D texture that maps the coordinate derived from z.
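The demo loads its attenuation maps from TGA files, but the same falloff can be generated procedurally. The sketch below is not from the demo; it's a hypothetical builder (names are my own) that fills luminance maps with a squared-distance falloff that reaches exactly zero at the border, which avoids the edge-clamping artifact discussed later in this thread:

```c
/* Hypothetical helper (not in the demo): fills a size*size luminance
 * attenuation map with a radial falloff that reaches exactly 0 at the
 * border, so clamped texture edges stay black. */
static void build_attenuation_2d(unsigned char *map, int size)
{
    for (int y = 0; y < size; y++) {
        for (int x = 0; x < size; x++) {
            /* remap texel centre into the [-1, 1] range */
            float s = ((x + 0.5f) / size) * 2.0f - 1.0f;
            float t = ((y + 0.5f) / size) * 2.0f - 1.0f;
            float d2 = s * s + t * t;   /* squared distance from centre */
            float i = 1.0f - d2;        /* quadratic falloff            */
            if (i < 0.0f) i = 0.0f;     /* clamp: fully black outside   */
            map[y * size + x] = (unsigned char)(i * 255.0f + 0.5f);
        }
    }
}

/* Same idea in one dimension, for the r (z) coordinate. */
static void build_attenuation_1d(unsigned char *map, int size)
{
    for (int x = 0; x < size; x++) {
        float s = ((x + 0.5f) / size) * 2.0f - 1.0f;
        float i = 1.0f - s * s;
        if (i < 0.0f) i = 0.0f;
        map[x] = (unsigned char)(i * 255.0f + 0.5f);
    }
}
```

The resulting arrays would then be uploaded with glTexImage2D / glTexImage1D as usual.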

For more information, please take a look here:
http://www.ronfrazier.net/apparition/

My dynamic lightmapping demo is based on this.

Yes, it’s like PPL, but done without PPL; it’s dynamic lightmapping.

Thanks for the link, your demo makes more sense.

Ok, I see now. So you didn’t exactly use a 1D texture; you just took a
2D texture and made it effectively 1D by setting its t coordinate to 0.5.

Why does the lightmap completely disappear if I comment out the second
glMultiTexCoord2fARB line? Just wanting to see how these two lines
contribute separately to the final lightmap.

[This message has been edited by gator (edited 09-26-2003).]

Originally posted by Leyder Dylan:
[b]For more information, please take a look here:
http://www.ronfrazier.net/apparition/

The correct link is :

http://www.ronfrazier.net/apparition/research/per_pixel_lighting.html

My dynamic lightmapping demo is based on this.

Yes, it’s like PPL, but done without PPL; it’s dynamic lightmapping.[/b]

First, I’ve updated the demo because I forgot a letter in a variable name:

// Texture coordinate on Y, scaled by the light radius
DL_ = (y - Light_Y) / (Light_Radius) + 0.5f;

becomes (I’ve just added an “r”):

// Texture coordinate on Y, scaled by the light radius
DL_r = (y - Light_Y) / (Light_Radius) + 0.5f;

Read this:

Note that we scaled each distance by dividing it by the light radius R. This ensures that distances from -R to R lie in the -1 to 1 range. The next step is to map these (x0, y0, z0) distances, which lie in the -1 to 1 range, into (s, t, r) texture coordinates, which lie in the 0 to 1 range. To do so we calculate:

                s = x0/2 + 0.5
                t = y0/2 + 0.5
                r = z0/2 + 0.5

Then we can use the coordinates (s, t) as the texture coordinates for the 2D texture 0, and use the r coordinate as the texture coordinate for the 1D texture 1. Finally, now that we have the intensity of the light, we can multiply it by the color of the light to get the distance-attenuated light value for the current pixel.
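As a sanity check, the quoted mapping can be written as a small standalone function. This is a sketch of the tutorial's formulas, not code from the demo (the struct and function names are my own). Note that the demo divides by Light_Radius without the tutorial's extra factor of 2, so its coordinates reach the texture border at half the stated radius:

```c
/* (s, t, r) texture coordinates for one vertex, per the tutorial:
 * x0 = (x - Light_X) / Light_Radius maps [-R, R] to [-1, 1],
 * then s = x0/2 + 0.5 maps that into [0, 1]. */
typedef struct { float s, t, r; } TexCoordSTR;

static TexCoordSTR light_texcoords(float x, float y, float z,
                                   float lx, float ly, float lz,
                                   float radius)
{
    TexCoordSTR tc;
    tc.s = (x - lx) / radius * 0.5f + 0.5f;
    tc.t = (y - ly) / radius * 0.5f + 0.5f;
    tc.r = (z - lz) / radius * 0.5f + 0.5f;
    return tc;
}
```

A vertex at the light position maps to the centre of the attenuation map, (0.5, 0.5, 0.5), and a vertex exactly one radius away along x maps to s = 1.0, the texture border.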

So, to answer your question simply: the first texture unit (0) is used for the X and Y coordinates, and the second texture unit (1) is used for the Z coordinate.

Hope this helps.

And finally, one last question:

What do you think of my programs?

Hi Leyder Dylan

Nice programs, but I’m unhappy about the HL model loading, because you’re using the HL SDK and I was hoping to learn more about bone animation (the HL code is very complex).

I saw the dynamic lightmapping demo, but I think there’s something wrong. Compare it with your PPL demo: the light on the floor and ceiling is OK, but what happens on the walls? (quadratic cycle lightmapping ;-) )

A simple idea -> what about static lightmapping with this technique?
1st: calculate the vertex light (the color for the lightmap) for every vertex and store it.
2nd: calculate the s/t/r texture coordinates and store them for every vertex.
Can this work?

The PPL demo is nice, but what happens if I don’t have an NV card?

LB

>>What do you think of my programs?

What I first liked about it is that it doesn’t use Nvidia extensions, so it runs
on my ATI card.

I understand ProcessVertex_DL a bit more now. The lightmap looks like a spot on
the ceiling and floor, but like a column on the walls. So it is
not exactly perfect, but good enough for fast lighting, I guess.

Do you know what I’m talking about here?

Why does the lightmap completely disappear if I comment out the second
glMultiTexCoord2fARB line? Just wanting to see how these two lines
contribute separately to the final lightmap.

Still there?

Did you understand my question?

I’m still scratching my head, wondering why your lightmap disappears if I comment out the
second set of texture coordinates.

Yes, I told you that the second texture unit is used for the z coordinate.

The first one for x and y.

I don’t know if it’s possible to do it with a 3D texture. I haven’t tried yet.

Ah, thanks anyway, I don’t think you understand what I’m trying to do.
I’m going to log everything and try to figure out why your lightmap is
disappearing.

That demo has a few issues, in my humble opinion…

Originally posted by gator:
The lightmap looks like a spot on
the ceiling and floor, but it only looks like a column on the walls. So it is
not exactly perfect, but good enough for fast lighting I guess.

The “lightmap” he’s using is bad (the technically correct term is “attenuation map”, BTW). It doesn’t fully fade to black at the border, so texture edge clamping causes the “column” to appear.

Also, it’s not very smart to load the exact same 2D texture twice and to just call one “2D” and the other “1D”. If you want 1D, build a real 1D texture. Or at least only load the image into memory once – it’s perfectly possible to bind the same texture to two different units at the same time.

Also note that the demo deviates from the theory in Ron Frazier’s tutorial linked above. The demo uses attenuation maps that contain a bright spot on a dark background, and draws everything using multiplicative blending (Att1D * Att2D * Scene). Ron Frazier’s tutorial has inverted attenuation maps (dark spots on a white backdrop) and renders (1 - (Att1D + Att2D)) * Scene. Both approaches work, but keep in mind that there’s a big difference if you want to read the tutorial while referring to this demo’s source code.

Finally, doing the texcoord calculations manually is not very scalable. It can be done with glTexGen() or with a vertex program, both of which are much more efficient than doing it yourself.
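To illustrate the glTexGen() route: with GL_OBJECT_LINEAR texture generation, the GL computes each coordinate as a dot product of the vertex position with a user-supplied plane, so the demo's per-vertex formula can be folded into plane coefficients computed once per light. The helper names below are hypothetical; the plane would be enabled with glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR) and passed via glTexGenfv(GL_S, GL_OBJECT_PLANE, p):

```c
/* Sketch (assumption, not demo code): plane coefficients p such that
 * p[0]*x + p[1]*y + p[2]*z + p[3]*w (what GL_OBJECT_LINEAR evaluates)
 * reproduces the demo's s = (x - Light_X) / Light_Radius + 0.5. */
static void light_plane_x(float light_x, float radius, float p[4])
{
    p[0] = 1.0f / radius;              /* scale x by 1/R           */
    p[1] = 0.0f;
    p[2] = 0.0f;
    p[3] = 0.5f - light_x / radius;    /* -Light_X/R + 0.5 offset  */
}

/* What the fixed-function texgen evaluates per vertex (w = 1). */
static float texgen_eval(const float p[4], float x, float y, float z)
{
    return p[0] * x + p[1] * y + p[2] * z + p[3] * 1.0f;
}
```

The analogous planes for t and r use the y and z components, so all three coordinates come for free without touching the vertex loop.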

Gator, the reason the lighting “disappears” when you set the second texture unit’s texcoords to zero is that setting them to zero will move the texture samples to the bottom left corner of the 2D texture, which is black. Hence, multiplying the two maps will result in… black.

For another overview of the technique, look at http://developer.nvidia.com/attach/1756 and read the section on point light attenuation (page 32). For additional source code, I have two demos that use the technique on my site: the tangent-space bumpmapping one and the stencil shadow volumes one. The former uses a vertex program to generate the texcoords and register combiners to combine the maps, so it’s GeForce-only. It also contains an alternative implementation based on a 3D texture. The latter uses standard glTexGen() and glTexEnv() to get the job done, so it works on all cards.

– Tom

Tom,

Do you work with VR Context?

Last year I received a mail from them about working with them, because they saw my T3D Converter.

Not important, but thanks for your advice.

Originally posted by Tom:
Also note that the demo deviates from the theory in Ron Frazier’s tutorial linked above. The demo uses attenuation maps that contain a bright spot on a dark background, and draws everything using multiplicative blending (Att1D * Att2D * Scene). Ron Frazier’s tutorial has inverted attenuation maps (dark spots on a white backdrop) and renders (1 - (Att1D + Att2D)) * Scene. Both approaches work, but keep in mind that there’s a big difference if you want to read the tutorial while referring to this demo’s source code.

About the dark spots on a white backdrop: I know, because his demo is for PPL and mine isn’t.

But it’s simply a demo; feel free to modify it as you wish.

Originally posted by Tom:
Gator, the reason the lighting “disappears” when you set the second texture unit’s texcoords to zero is that setting them to zero will move the texture samples to the bottom left corner of the 2D texture, which is black. Hence, multiplying the two maps will result in… black.

I’m not setting the value to zero, I’m commenting it out. And I figured out why it was disappearing.
I had to completely disable the second texture unit in the main program. I guess enabling the
second texture unit, but not sending it anything, will still affect the first texture unit.

Now, I can see how each texture unit contributes to the final lightmap. It makes more sense.

Leyder Dylan:
I didn’t like the fact that you blend the background color into everything else. I don’t know why
you are doing this.

Here, this is much better:

//
//
//
int
DrawGLScene(GLvoid) {

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear the color and depth buffers
glLoadIdentity(); // Reset the modelview matrix

if(pass1_flag) {

glDisable(GL_BLEND);

    // FLOOR TEXTURE

glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, Texture_TGA[0].Texture_ID );

glBegin(GL_QUADS);
  // The Bottom Quad **********************
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,0);
  glVertex3f(-5.0f, -4, -10.0f);

  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,0);
  glVertex3f(5.0f, -4, -10.0f);

  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,1);
  glVertex3f(5.0f, -4, -20.0f);

  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,1);
  glVertex3f(-5.0f, -4, -20.0f);
  // The Top Quad *************************
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,1);
  glVertex3f(-5.0f, 4, -10.0f);

  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,1);
  glVertex3f(5.0f, 4, -10.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,0);
  glVertex3f(5.0f, 4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,0);
  glVertex3f(-5.0f, 4, -20.0f);

  
  // The Left Quad ************************
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,0);
  glVertex3f(-5.0f, -4, -10.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,0);
  glVertex3f(-5.0f, -4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,1);
  glVertex3f(-5.0f, 4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,1);
  glVertex3f(-5.0f, 4, -10.0f);

  
  // The Right Quad ***********************
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,0);
  glVertex3f(5.0f, -4, -10.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,0);
  glVertex3f(5.0f, -4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,1);
  glVertex3f(5.0f, 4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,1);
  glVertex3f(5.0f, 4, -10.0f);

  
  // The Back Quad ************************
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,0);
  glVertex3f(-5.0f, -4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,0);
  glVertex3f(5.0f, -4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1,1);
  glVertex3f(5.0f, 4, -20.0f);
  
  glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0,1);
  glVertex3f(-5.0f, 4, -20.0f);

glEnd();

glActiveTextureARB(GL_TEXTURE0_ARB);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0 );

glDisable(GL_BLEND);
}

if(pass2_flag) {

// this directly adds the texture to the frame buffer
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, Texture_TGA[1].Texture_ID);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, Texture_TGA[2].Texture_ID);

// The Per Pixel Lighting
glBegin(GL_QUADS);
  // The Bottom Quad **********************
  ProcessVertex_DL(-5.0f, -4, -10.0f);
  ProcessVertex_DL(5.0f, -4, -10.0f);
  ProcessVertex_DL(5.0f, -4, -20.0f);
  ProcessVertex_DL(-5.0f, -4, -20.0f);

  // The Top Quad *************************
  ProcessVertex_DL(-5.0f, 4, -10.0f);
  ProcessVertex_DL(5.0f, 4, -10.0f);
  ProcessVertex_DL(5.0f, 4, -20.0f);
  ProcessVertex_DL(-5.0f, 4, -20.0f);

  // The Left Quad ************************
  ProcessVertex_DL(-5.0f, -4, -10.0f);
  ProcessVertex_DL(-5.0f, -4, -20.0f);
  ProcessVertex_DL(-5.0f, 4, -20.0f);
  ProcessVertex_DL(-5.0f, 4, -10.0f);

  // The Right Quad ***********************
  ProcessVertex_DL(5.0f, -4, -10.0f);
  ProcessVertex_DL(5.0f, -4, -20.0f);
  ProcessVertex_DL(5.0f, 4, -20.0f);
  ProcessVertex_DL(5.0f, 4, -10.0f);

  // The Back Quad ************************
  ProcessVertex_DL(-5.0f, -4, -20.0f);
  ProcessVertex_DL(5.0f, -4, -20.0f);
  ProcessVertex_DL(5.0f, 4, -20.0f);
  ProcessVertex_DL(-5.0f, 4, -20.0f);

glEnd();

glActiveTextureARB(GL_TEXTURE1_ARB);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0 );

glActiveTextureARB(GL_TEXTURE0_ARB);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0 );

glDisable(GL_BLEND);
}

//////////////////////////////////////////////////////////////////

DrawLightSphere();

return 1;
}

//
//
//
void
DrawLightSphere() {

// Disable texture and blending
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);

// Draw a small sphere at the light position
glPushMatrix();
glTranslatef(Light_X, Light_Y, Light_Z);
glutSolidSphere(0.1f,10,10);
glPopMatrix();

glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
}

Originally posted by gator:
I’m not setting the value to zero, I’m commenting it out. And I figured out why it was disappearing.
I had to completely disable the second texture unit in the main program. I guess enabling the
second texture unit, but not sending it anything, will still affect the first texture unit.

OpenGL is a state machine. Any given bit of state keeps its default value until you explicitly change it. By commenting out the glMultiTexCoord2fARB() call, you made it so that the app never changes the texture coordinates for the second unit. Hence, OpenGL always uses the default values for all vertices. The default texture coordinate values are (0, 0, 0, 1). Commenting out the texcoord calls is therefore the same as replacing them with (0, 0).

Now when you commented out those calls, the texture unit itself still remained active! Hence, the GL was still reading the second texture using the default texcoords of (0, 0), and applying the result (black) to the result of the previous texture unit. By effectively disabling the second unit as you did later on, the GL doesn’t attempt to sample the texture anymore, and so the second unit doesn’t affect the output of the first.

– Tom

Hi all
I tried a lot to make my lighting work with two textures like that… and nothing. I did my lighting with register combiners and a 3D texture (depending on what’s supported), but I would like to have it done with two textures (2D and 1D). If anybody got it working, please post a little example with source! Thanks (and sorry for my English…)

Can’t my demo help you get started?

After reading the post again and again, I’ve checked the source code and found an error.

The 2D texture is fine, but for the 1D texture, I forgot to generate a 1D texture in the TGA loader.

It will be fixed today or in a few days.