shadow mapping - pbuffers

hey there,
does anybody happen to know of any simple examples of using pbuffers to render shadow maps in hardware? i already have pbuffers set up and working properly, so i’m really only concerned with the shadow mapping code at this point. i downloaded the shadow mapping src code & white paper from nvidia’s site, but neither talks about pbuffers, and the src code is hard to sift through since it supports a huge number of options. i have all the necessary extensions, so i’m not concerned with handling things in software. anybody know where i could find a nice stripped-down example?

EDIT: to be even more specific, i already render the scene from the point of view of the light into the pbuffer. and then i later render the scene from the viewer’s point of view into the window’s depth buffer. i’m just looking for some code that compares the two and generates the shadows.


Do you know how to do projective texturing?

The first step is to project the light-space depth map onto your scene using a compatible frustum; then you have to enable the per-fragment depth compare func.

Uhm, I don’t remember what this is called in the fixed-function pipe, but I guess it’s specified in ARB_shadow.

I haven’t tried shadow mapping yet, but this tutorial seems to be quite good: http://www.paulsprojects.net/tutorials/tutorials.html

Jan.

Is this what you are looking for?

glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL );

As texture coordinates, you toss in your vertex transformed into the space of your light.

Jonas

hey guys,
yeah, jan2000, that’s actually the tutorial i’m working off now. however, i just can’t get my code to work correctly. for example, here is my scene:
http://www.graphics.cornell.edu/~letteh/ogl/Unshadowed.jpg
here’s the scene w/ shadow volumes (just so you see roughly what it should look like):
http://www.graphics.cornell.edu/~letteh/ogl/ShadowVolume.jpg
here’s the depth map for shadow mapping:
http://www.graphics.cornell.edu/~letteh/ogl/DepthMap.jpg
and here’s my final image when i try using shadow mapping:
http://www.graphics.cornell.edu/~letteh/ogl/ShadowMapped.jpg

so clearly something is wrong.

here’s some of my code:

glPushAttrib(GL_ENABLE_BIT);

double textureMatrix[16];
double tempMatrix[16];
double biasMatrix[16] = {0.5, 0.0, 0.0, 0.0,
                         0.0, 0.5, 0.0, 0.0,
                         0.0, 0.0, 0.5, 0.0,
                         0.5, 0.5, 0.5, 1.0};

matrixMultiply(biasMatrix, lightProjMatrix, tempMatrix);
matrixMultiply(tempMatrix, lightViewMatrix, textureMatrix);

double vec1[4] = {textureMatrix[0], textureMatrix[4], textureMatrix[8],  textureMatrix[12]};
double vec2[4] = {textureMatrix[1], textureMatrix[5], textureMatrix[9],  textureMatrix[13]};
double vec3[4] = {textureMatrix[2], textureMatrix[6], textureMatrix[10], textureMatrix[14]};
double vec4[4] = {textureMatrix[3], textureMatrix[7], textureMatrix[11], textureMatrix[15]};

//--------------------------------------------------------------
// CONFIGURE THE EYE PLANES
//--------------------------------------------------------------
glTexGeni (GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni (GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni (GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni (GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);

glTexGendv (GL_S, GL_EYE_PLANE, vec1);
glTexGendv (GL_T, GL_EYE_PLANE, vec2);
glTexGendv (GL_R, GL_EYE_PLANE, vec3);
glTexGendv (GL_Q, GL_EYE_PLANE, vec4);

glEnable (GL_TEXTURE_GEN_S);
glEnable (GL_TEXTURE_GEN_T);
glEnable (GL_TEXTURE_GEN_R);
glEnable (GL_TEXTURE_GEN_Q);
//--------------------------------------------------------------

//--------------------------------------------------------------
// BIND, ENABLE, AND CONFIG SHADOW MAP TEXTURE
//--------------------------------------------------------------
glEnable (GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, shadowMapTexID);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE); // Enable shadow comparison
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL); // Shadow comparison should be true (ie not in shadow) if r<=texture
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_INTENSITY); // Shadow comparison should generate an INTENSITY result
//--------------------------------------------------------------

//--------------------------------------------------------------
// ENABLE THE ALPHA TEST TO DISCARD FALSE COMPARISONS
//--------------------------------------------------------------
glAlphaFunc (GL_GEQUAL, 0.50f);
glEnable (GL_ALPHA_TEST);
//--------------------------------------------------------------

//---------------------------------------
// Render captured geometry
//---------------------------------------
renderCapturedGeometry(0, true, LIGHTS_ENABLED, true);
//---------------------------------------

//---------------------------------------
// Render MDL file meshes
//---------------------------------------
for (int i = 0; i < (int)ssMeshes.size(); ++i)
    ssMeshes[i].renderMesh(true, LIGHTS_ENABLED);
//---------------------------------------

glPopAttrib();

there are a few things i’m unsure of (and they may be the cause of my troubles).
A) the vectors i’m passing in to the eye planes might not be correct
B) i don’t really understand what exactly GL_INTENSITY sets (and the registry page doesn’t explain it very well at all).
C) i don’t understand how in shadow mapping you need to re-render your scene w/ the alpha test enabled and using GL_TEXTURE_COMPARE_FUNC_ARB with the depth texture – what if your scene has textures in it? when i call the render functions for my objects, they would traditionally (though not in this case) need to bind their own textures for rendering. but then the depth texture is no longer bound. so how can this be? how can this depth texture comparison be done if you need to change to a different texture to render the object? maybe i’m missing something vital here.

thanks.

alright, so i was able to fix the color issue (so now they actually look like shadows) – i just added:

glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi (GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
glTexEnvi (GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);

Then, by setting a polygon offset, I was able to remove most of the strange patterns.
Here’s what it looks like now:
http://www.graphics.cornell.edu/~letteh/ogl/Shadows.jpg
I get relatively correct shadowing in regions that are visible to the light; however, in any area that isn’t visible to the light when rendering the shadowmap, I get strange results. What happens in this case, and how can it be avoided? It’s not really feasible for me to separate out the geometry that the light can see, and only render that in the final stage. Is there any way for me to specify: “if the projected point doesn’t fall in the shadowmap, render it as completely lit?”

Also, despite the polygon offset I’m using, I still get some errors on the upper portion of the teapot. I’ve tried numerous offset values, but the errors don’t seem to go away. Is there a good method for adjusting the offset values so that this sort of thing won’t happen?
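for reference, the offset i’m applying around the depth-map pass is basically just the following (the factor/units values here are placeholders – i’ve only been experimenting with different numbers):

// enable polygon offset only while rendering the depth map from the light
// (the 1.1 / 4.0 values below are placeholders, not tuned numbers)
glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (1.1f, 4.0f);

// ... render the scene from the light's point of view into the pbuffer ...

glDisable (GL_POLYGON_OFFSET_FILL);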

i don’t understand how in shadow mapping you need to re-render your scene w/ the alpha test enabled and using GL_TEXTURE_COMPARE_FUNC_ARB with the depth texture – what if your scene has textures in it? when i call the render functions for my objects, they would traditionally (though not in this case) need to bind their own textures for rendering. but then the depth texture is no longer bound. so how can this be? how can this depth texture comparison be done if you need to change to a different texture to render the object? maybe i’m missing something vital here

I think this is more of a fill-rate optimization, which is particularly useful because the tutorial you were looking at used multiple passes rather than multitexturing.

As for using your own textures, you can use multitexturing (i.e. just use an additional texture unit), or you can use an additional pass (I think – can’t say I’ve ever done it). I believe you would do a final pass, after all previous passes, to add in the texturing (I could be wrong though – jwatte would certainly know, as would a few others that frequent this forum).
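A minimal sketch of the multitexturing route, assuming an object color texture ID called objectColorTexID (made-up name) alongside the shadowMapTexID from your code:

// object's own color texture on unit 0
glActiveTextureARB (GL_TEXTURE0_ARB);
glEnable (GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, objectColorTexID);   // hypothetical per-object texture

// depth/shadow map on unit 1 – the compare mode and the texgen eye planes
// (built from the light's matrices) are set while this unit is active
glActiveTextureARB (GL_TEXTURE1_ARB);
glEnable (GL_TEXTURE_2D);
glBindTexture (GL_TEXTURE_2D, shadowMapTexID);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);

// back to unit 0 before issuing the geometry
glActiveTextureARB (GL_TEXTURE0_ARB);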

One thing that has been mentioned relatively recently on this forum is a way to minimize your artifacts. The technique involves rendering the back faces, rather than the front faces, when you create your depth map. Personally I haven’t had much success with this technique but it has been recommended many times - and I can’t play around with it currently as shadow buffers aren’t working with my current drivers (det52.16/74).
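If you do want to try it, the back-face variant is just a cull-state flip around the depth-map pass, something like this (untested on my side, given the driver issue):

// depth-map pass only: cull front faces so the back faces' depths are stored
glEnable (GL_CULL_FACE);
glCullFace (GL_FRONT);

// ... render the scene from the light into the depth map ...

glCullFace (GL_BACK);   // restore normal culling for the eye-view passes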

hey, thanks for the ideas – multitexturing would certainly make sense (having never used it, i didn’t even think of that). as for the shadowing errors, i actually tried messing around w/ rendering the back faces instead of the front – but i ended up getting the same noise… only this time in the shadow region! in the tutorial, he says something about it not mattering when it happens in shadowed regions… but it sure seems to matter, as false positives give speckles of light in the dark areas?


but i ended up getting the same noise… only this time in the shadow region! in the tutorial, he says something about it not mattering when it happens in shadowed regions… but it sure seems to matter, as false positives give speckles of light in the dark areas?

It shouldn’t matter. The reason it shouldn’t matter is that backfacing polygons (relative to the light) have a negative dot(N,L). As such, your lighting computation should result in the same darkness as if the surface were in shadow.

the “lighting computation” is just to re-render the scene, this time with the alpha test enabled. when a pixel fails the alpha test, the pixel is in shadow, as its distance from the light was greater than that of the point seen by the light. by rendering w/ the alpha test like this, i don’t see where the dot product of the face’s normal with the light vector comes into play. i know what you’re saying (in that the dot prod is negative when the face points away from the light), but i don’t see how i can use that in my situation. am i missing something?

First of all, regarding Korval’s suggestion. You have a light source, and a shadow map. The lighting is calculated as dot(N,L), which is negative when L is backfacing to N. In this case, it will get clamped to zero and the surface will be black. The shadows are calculated through shadow mapping, produce a 0 or 1 result and modulate the lighting. So the final color is dot(N,L)*shadow. Since you know that backfaces are always in shadow, the noise you’re seeing on the backfaces should be eliminated by the multiplication with dot(N,L), because noise * 0 = 0.
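A quick numeric illustration (made-up values) of why the backface noise cancels out:

// made-up numbers, just to illustrate the point above
float NdotL  = -0.3f;                          // fragment faces away from the light
float lit    = (NdotL > 0.0f) ? NdotL : 0.0f;  // lighting clamps this to 0
float shadow = 1.0f;                           // even a bogus "not in shadow" compare result...
float result = lit * shadow;                   // ...gives 0 * 1 = 0, so no visible speckle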

Second, to kill the artefacts outside of the spotlight cone as shown in your last screenshot, you can set the wrap mode of your shadow map to CLAMP_TO_BORDER. Set the border color to black, so that your depth comparisons always return false.
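Something along these lines (from memory, so double-check the exact enums against the extension specs):

// clamp the shadow map to a black border so lookups outside the
// light's frustum compare against depth 0 and always fail
glBindTexture (GL_TEXTURE_2D, shadowMapTexID);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER_ARB);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER_ARB);
GLfloat border[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexParameterfv (GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);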

– Tom

hey tom, thanks for clearing that bit up about the light – i see what you and korval were talking about now! i appreciate the help from both of you. also, clamping to the border with GL_CLAMP_TO_BORDER_ARB seems to work for the most part; however, unless i initially clear the depth texture to white, i still get a very thin dark line running through the scene where the border of the texture would be. i guess it doesn’t matter, since clearing to white fixed it, but i’m not sure why that would happen. so the only thing wrong now is that i still get errors directly behind the light. i assume this is because those fragments still project into the shadowmap – only from behind it – and thus still get compare results. here’s a picture of the problem:
http://www.graphics.cornell.edu/~letteh/ogl/BehindLight.jpg
is there any way to prevent this from happening?

This is usually solved by creating a 1D texture that’s half black and half white (two texels is enough, but switch off linear filtering). Use TexGen to make everything in front of the camera map to the white part, and everything behind it to the black part. Multiply this texture with the rest of your lighting and shadowing, and the back projection errors are gone.

– Tom

ack! so i implemented the edge clamping in the lab today… and all was fine. and then i came home and ran the code on my pc. at home i have a geforce 4 ti4600 with the latest drivers from nvidia. with GL_CLAMP, the shadowmapping code takes ~0.0005 seconds, and with GL_CLAMP_TO_BORDER_ARB it takes roughly 1.3 seconds! what’s going on here?! craziness =/


alright, so it gets stranger. when i don’t use the WGL_ARB_render_texture extension with my pbuffer, and instead copy the image as seen by the light from the pbuffer into my shadowmap, i get the huge hit i described previously when using GL_CLAMP_TO_BORDER_ARB. however, if i use the WGL_ARB_render_texture extension, then using GL_CLAMP_TO_BORDER_ARB appears to have no time penalty at all. is there a good explanation for all this madness? heh.


anyways, getting back to the errors behind the light – i’m not sure exactly how to implement what you described. so i’ll have to use multitexturing, with another pass to check if the fragment is behind the light’s frustum. but how would i set up glTexGen to make that type of comparison? also, since i initially render the ambient lighting in the scene, and then on this second pass i’m rendering in the rest of the light (diffuse, specular), i want all the fragments behind the light to automatically pass this test (be fully lit). the way you described it, tom, the fragments behind the light would fail (have alpha set to 0), and thus not be colored (if you look in the image, they are too dark). however, if you reverse what you described, then everything in front would fail. i want everything behind the light to pass, and everything in front to pass if it has a depth less than the shadow map. is this possible?

though i imagine most of you don’t care, i was able to get it working! in the event it’s of use to anyone, i thought i’d post some code that fixed the problem.

so in order to determine if the point was behind the light, i created a 1D texture as follows:

// Texture pixel buffer (only 2 pixels w/ RGBA each)
unsigned char* buffer1D = new unsigned char[8];
// Pixel behind the light      -> add 1.0 to alpha
buffer1D[0] = 0x00; buffer1D[1] = 0x00; buffer1D[2] = 0x00; buffer1D[3] = 0xFF;
// Pixel in front of the light -> add 0.0 to alpha
buffer1D[4] = 0x00; buffer1D[5] = 0x00; buffer1D[6] = 0x00; buffer1D[7] = 0x00;

glGenTextures   (1, &shadowMap1DTexID);
glBindTexture   (GL_TEXTURE_1D, shadowMap1DTexID);
glTexImage1D    (GL_TEXTURE_1D, 0, GL_RGBA, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer1D);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_WRAP_S,     GL_CLAMP_TO_EDGE);

delete [] buffer1D;

then i used multitexturing, and in a second pass (after the standard shadow map pass), i would check to see if the point was in the back projected region of the shadowmap:

//--------------------------------------------------------------
// SET UP THE SECOND TEXTURE FOR MULTITEXTURING
//--------------------------------------------------------------
glActiveTextureARB	(GL_TEXTURE1_ARB);
glEnable			(GL_TEXTURE_1D);
glBindTexture		(GL_TEXTURE_1D, shadowMap1DTexID);

glTexGeni			(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGendv			(GL_S, GL_EYE_PLANE, vec3);
glEnable			(GL_TEXTURE_GEN_S);

glTexEnvi			(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,	GL_COMBINE_ARB);
glTexEnvi			(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB,	GL_ADD);
glTexEnvi			(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,	GL_ADD);
//--------------------------------------------------------------

if the point is behind the light, this adds 1.0 to the alpha (to guarantee it’s completely visible for the final rendering pass). if the point is in front of the light, then nothing’s added to it (the black pixel in the 1D texture), so the results of the shadowmap comparison are unchanged. btw, “vec3” is the z-vector that defines the light’s orthonormal basis (what would be the “r plane” when performing the shadow mapping). so i guess that’s it.