deferred spotlight shadow map value issues

hello, i would like some fresh ideas to help solve my issue.

so i have an engine with deferred rendering. i've already implemented point lights with dual paraboloid shadows/PCF, and they're working fine.
but i've stumbled upon some unexpected issues trying to implement cone spotlights in a similar way, but with a single projective shadow map.

my depth texture setup is basic:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_BORDER);

float borderColor[4] = {1.0f, 1.0f, 1.0f, 0.0f};
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, shadowResolution, shadowResolution, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureObject, 0);

and the same setup works quite nicely for point lights.

here's how i render objects to the shadow map:

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glBindFramebuffer(GL_FRAMEBUFFER, lightSource[f].frameBufferObject);
glClear(GL_DEPTH_BUFFER_BIT);

glViewport(0, 0, lightSource[f].shadowResolution, lightSource[f].shadowResolution);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(97.5f, 1.0f, 1.0, lightSource[f].radius * 2.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glPushMatrix();

glRotatef(-lightSource[f].rotation.x, 1, 0, 0);
glRotatef(-lightSource[f].rotation.y, 0, 1, 0);
glRotatef(-lightSource[f].rotation.z, 0, 0, 1);

glTranslatef(-lightSource[f].position.x, -lightSource[f].position.y, -lightSource[f].position.z);

lightSource[f].getMatrix();

for(unsigned i = 0; i < lightSource[f].affectedObjects.size(); i++)
    Objects[lightSource[f].affectedObjects[i]].renderSpotlight(lightSource[f].radius * 2.0);

glPopMatrix();

and the shader is basically ftransform();
all the geometry is in the right place. the issue is in the depth value i get from the generated shadow map.

so i try to use it like this while rendering the light cone:

vec3 position = modelMatrix * vec4(texture2D(positionSampler, gl_FragCoord.xy).xyz, 1.0);
float attenuation = distance(lightPos, position)/lightRadius;

vec3 coord = (lightMatrix * backPosition).xyz;
float len = length(coord.xyz);
coord /= len;
coord.xy = coord.xy * 0.5 + 0.5;

shadow = shadow2DProj(shadowMap, vec4(coord.x, coord.y, attenuation, 1.0)).x;

and see no shadow map effect until i set attenuation to something around 0.99-1.0; obviously, that won't produce a correct depth comparison. but the coord.xy values are correct. i guess it has something to do with perspective… i tried different manipulations with the attenuation value/W component - no acceptable result. if i manually output linearized depth values from the shadow map - they are correct.

mmkay… as i suspected, the problem is that the depth shadow2DProj gathers for comparison is non-linear.

so i try it like this:
float scaled_attenuation = (lightRadius/(lightRadius - znear) + (lightRadius/(znear - lightRadius))/(attenuation * lightRadius)); //lightRadius == zfar

it brings me closer, but the result is still very inaccurate. it seems to suffer from heavy perspective distortion.

I'll try, but what I see here is confusing – let me highlight what's causing that confusion.

First, a total side issue, but why are you using a 16-bit depth buffer? DEPTH_COMPONENT24 is more standard.
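For reference, that would just change the internal-format token in the allocation call from your setup above:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, shadowResolution, shadowResolution, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);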

…unexpected issues trying to implement cone spotlights in a similar way, but with a single projective shadow map. … so i try to use it like this while rendering the light cone:

vec3 position = modelMatrix * vec4(texture2D(positionSampler, gl_FragCoord.xy).xyz, 1.0);
float attenuation = distance(lightPos, position)/lightRadius;

vec3 coord = (lightMatrix * backPosition).xyz;
float len = length(coord.xyz);
coord /= len;
coord.xy = coord.xy * 0.5 + 0.5;

shadow = shadow2DProj(shadowMap, vec4(coord.x, coord.y, attenuation, 1.0)).x;

and see no shadow map effect until i set attenuation to something around 0.99-1.0; obviously, that won't produce a correct depth comparison. but the coord.xy values are correct. i guess it has something to do with perspective…

Your transform math and your comment about it don’t make much sense to me. What puzzles me is that you would have already beaten the bugs out of your transform math in doing a standard omni point light source – which you said works fine – and adding a cone is just a small extension to that.

I'll explain my puzzlement in a minute, but I would have expected you to be reading the camera WINDOW-SPACE depth value from the G-buffer, back-projecting that with the fragment's WINDOW-SPACE XY position to get the camera EYE-SPACE position for the fragment, then transforming that through WORLD-SPACE to the light's EYE-SPACE, and on to the light's CLIP-SPACE. Then you apply your *0.5+0.5 bias to X, Y, & Z. And then you do your w-divide, shadow map lookup, and depth comparison (which is what shadow2DProj does). I'm not seeing that here. Graphically, that's this (courtesy of Paul's Projects):

[diagram: the projective-texturing transform chain - camera window space → camera eye space → world space → light eye space → light clip space → bias → shadow map lookup]

Some of my puzzlement is the following. If modelMatrix is what it sounds like (a camera MODELING transform), the first line makes no sense. As to the second: even if we assume that attenuation gets you the distance from the fragment to the light source, linearly scaled to 0…1 within 0…lightRadius, that is a linear, radial value, and it bears little resemblance to the biased light's clip-space depth value that you should be using for the shadow map depth comparison. And of course the w-value for the lookup position wouldn't ever be 1 for a perspective projection. There's no clue here what backPosition is, so I can't really trace the coord.xy logic.

So there are some resemblances to shadow map logic here, but not enough to convince me this is right.

ok, it's my fault. i confused you because i forgot to rename some variables. here's the corrected fragment program part for shadows:

float zfar = lightRadius; //just an alias to read more easily for you 
float znear = 1.0; 

vec3 position = vec4(texture2D(positionSampler, gl_FragCoord.xy).xyz, 1.0); //gbuffer fragment position in camera's eye space
position = cameraViewInverse * position; //world space
float distance_to_light = distance(lightPos, position); 

vec3 coord = (lightProjectionMatrix * lightModelViewMatrix * position).xyz; 
float len = length(coord.xyz); 
coord /= len;
coord.xy = coord.xy * 0.5 + 0.5;

float scaled_distance = zfar/(zfar - znear) + zfar/(znear - zfar)/distance_to_light; 
shadow = shadow2DProj(shadowMap, vec4(coord.x, coord.y, scaled_distance, 1.0)).x;

hope this clears everything up. i will review my matrix math now. and yes, i don't get what i should use as the W component for the shadow lookup. for dual paraboloid it was a lot simpler, because all the significant transformations were manual and happened in the same place. here i get a bit confused by all the spaces.

Ok, thanks. That helps.

Again, first what I’m expecting to see. Then what confuses me.

What I’m expecting to see is something like this:


vec4 camera_eye_pos = ...;  // NOTE: .w == 1 here
vec4 light_clip_pos_biased = ( BiasMatrix * LightProjection * LightViewing * CameraViewingInverse ) * camera_eye_pos;
float shadow = shadow2DProj( ShadowMap, light_clip_pos_biased ).x;

which takes us from the camera's EYE-SPACE to WORLD-SPACE → light's EYE-SPACE → light's CLIP-SPACE. And then it stacks on the *0.5+0.5 bias matrix that shifts the position (post perspective divide) from the -1…1 range to the 0…1 range in X, Y, and Z. Note that the entire product of matrices in the parentheses can and probably should be precomputed per-frame and uploaded to the shader in a single matrix uniform. Then you only do one matrix transform and not 4.
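As a rough CPU-side sketch of that precompute (Matrix4 and its ptr() method are placeholders for whatever matrix class you use; note that glUniformMatrix4fv expects column-major data unless you pass GL_TRUE for transpose):

// Bias matrix: maps post-divide NDC from -1…1 to 0…1 in X, Y, and Z (written row-major here).
Matrix4 bias( 0.5f, 0.0f, 0.0f, 0.5f,
              0.0f, 0.5f, 0.0f, 0.5f,
              0.0f, 0.0f, 0.5f, 0.5f,
              0.0f, 0.0f, 0.0f, 1.0f );

// One combined matrix per light, rebuilt once per frame on the CPU:
Matrix4 shadowMatrix = bias * lightProjection * lightViewing * cameraViewingInverse;

glUniformMatrix4fv(glGetUniformLocation(program, "ShadowMatrix"), 1, GL_FALSE, shadowMatrix.ptr());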

So let’s look at your code one line at a time:

vec3 position = vec4(texture2D(positionSampler, gl_FragCoord.xy).xyz, 1.0); //gbuffer fragment position in camera's eye space
position = cameraViewInverse * position; //world space

I get that you’re trying to get the WORLD-SPACE position here. However, this shouldn’t compile because in the first line you’re assigning a vec4 to a vec3. I’ll move on assuming that’s just a typo. With that fix, it should give you a WORLD-SPACE position “assuming” the input is truly a camera EYE-SPACE position.

However, note that you generally shouldn't use world-space positions on the GPU because these could have large magnitude – but you can leave that as a nuance for later. If your world-space positions are tiny, this should work fine.

vec3 coord = (lightProjectionMatrix * lightModelViewMatrix * position).xyz;

Houston, we have a problem. Since your light source is a point light source, your light projection matrix is perspective. Perspective projections make use of the .w coordinate (that is, after applying it, .w is typically not 1). You can’t just thunk down to vec3 here. You need to keep this as vec4.

float len = length(coord.xyz); 
coord /= len;

Why are you doing this? I think I know what you're "trying" to do. That is, you're trying to fit things down into some unit box. But this doesn't do it. What you might not appreciate is that that's what the projection matrix does for you. It squeezes things down such that (post-perspective-divide, which comes later) the entire view frustum fits neatly into a (-1…1, -1…1, -1…1) cube. This post-perspective-divide "cube" space is called NDC. Read about it in the Viewing chapter of the OpenGL Programming Guide, for instance.
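To make that concrete (just a sketch, reusing your variable names): keep the full vec4, and the divide is what lands you in NDC:

vec4 clip = lightProjectionMatrix * lightModelViewMatrix * position; // keep .w!
vec3 ndc = clip.xyz / clip.w; // perspective divide: x, y, z each in -1…1 inside the frustum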

coord.xy = coord.xy * 0.5 + 0.5; 

Yeah, you’re gonna need a bias as the last step before the texture lookup, but this only biases X and Y. You need to bias Z too. Recall I said that NDC is (-1…1, -1…1, -1…1) (that is, in X, Y, and Z). And what you want in the end is to get to a space “like” NDC, but which has the extents (0…1, 0…1, 0…1). So you just scale and shift NDC to fit.
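In shader terms (again just a sketch with your names), biasing all three axes before the divide is equivalent to stacking on a bias matrix, and shadow2DProj then performs the divide and the depth comparison for you:

vec4 clip = lightProjectionMatrix * lightModelViewMatrix * position;
vec4 biased = vec4(clip.xyz * 0.5 + clip.w * 0.5, clip.w); // *0.5+0.5 applied pre-divide
shadow = shadow2DProj(shadowMap, biased).x;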


float distance_to_light = distance(lightPos, position); 
...
float scaled_distance = zfar/(zfar - znear) + zfar/(znear - zfar)/distance_to_light;

I don't really have a clue what this is doing. Where did this come from? Note that the first line computes a "radial" distance, but a standard depth buffer (like a shadow map) encodes a distance along the EYE-SPACE Z-axis, which is not a radial distance.
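For reference, here is a sketch of the value a standard perspective projection actually writes to the depth buffer (zEye being the positive distance along the light's -Z axis; n and f the near and far planes):

float windowDepth(float zEye, float n, float f)
{
    // 0…1 window-space depth for a point at eye-space depth zEye
    return f / (f - n) - (f * n) / ((f - n) * zEye);
}

Incidentally, with znear = 1.0 this reduces to exactly your scaled_distance formula, which would explain why it got you "closer" – the remaining error being that you feed it a radial distance rather than the Z-axis depth.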

ok, i've done it. the real problem was that i lost the W component at some point, replacing it with 1.0 and trying to treat it like a linear point light. but that was, not surprisingly, identical to basic shadow mapping. it should go like this:

vec4 position = cameraViewInverse * vec4(texture2D(positionSampler, gl_FragCoord.xy).xyz, 1.0);
vec4 coord = lightMatrix * position; // lightMatrix == bias * lightModelViewProjection
shadow = shadow2DProj(shadowMap, coord).x;

and that’s it.

the vec3 thing was a typo. before showing you the code, i edited it a lot, because until i finalize a feature i make a mess in the code. i think that's common. i tried to make a readable, isolated example.

this was inherited from my attempts to treat it like a point light:

float scaled_distance = zfar/(zfar - znear) + zfar/(znear - zfar)/distance_to_light;

i will concentrate all the transformations to light space into a single matrix. what you've seen is dirty code. first finalize, then optimize.

can you explain

However, note that you generally shouldn't use world-space positions on the GPU because these could have large magnitude – but you can leave that as a nuance for later. If your world-space positions are tiny, this should work fine.
that part? in what space should i do my computations? what is the better way to store positions? you mentioned reconstructing position from depth… but isn't that expensive? i use position a lot: i have a deferred global light with specular, local point lights with specular/shadows, soft-edged water, and now spotlights with shadows. i think i'd lose a lot of performance to position reconstruction routines. for tests i have a scene of about 4000 units and didn't notice significant problems with lighting. global lighting is done in world space.

also i'm facing a problem doing projective texturing for this light. strangely, when i use the same coordinates i generated for the shadow map with texture2DProj - they don't work. the texture offsets when the light is rotated.

can you explain that part?

Fundamentally, you're not doing anything different (same source and destination spaces in your transformation chain). The only thing you'd do differently is one optimization: premultiply (lightMatrix * cameraViewInverse) on the CPU and call that lightMatrix. Then there's no need for your shader to deal with WORLD-SPACE positions at all. If your world is tiny though, you don't care about this.

you mentioned reconstructing position from depth… but isn't that expensive?

It’s not too bad. We’re talking a few compute cycles vs. more memory (6-12 more bytes per sample in the G-buffer for storing eye-space X,Y,Z rather than just using the depth buffer you’re writing anyway). Depends on your specific code and GPU, but compute usually wins the race. Once you get your shadows working, time it both ways. Or you can just stay with storing eye-space X, Y, and Z in your G-Buffer.
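If you do try reconstruction later, a minimal sketch looks something like this (the uniform names are made up; it assumes you keep the camera's depth buffer bound as a texture):

uniform sampler2D depthSampler;   // the camera depth buffer
uniform mat4 projectionInverse;   // inverse of the camera projection matrix
uniform vec2 screenSize;

vec3 reconstructEyePos(vec2 fragCoord)
{
    vec2 uv = fragCoord / screenSize;                  // window XY -> 0…1
    float depth = texture2D(depthSampler, uv).x;       // window-space depth, 0…1
    vec4 ndc = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0); // back to -1…1 NDC
    vec4 eye = projectionInverse * ndc;                // un-project
    return eye.xyz / eye.w;                            // undo the w-divide
}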

also i'm facing a problem doing projective texturing for this light. strangely, when i use the same coordinates i generated for the shadow map with texture2DProj - they don't work. the texture offsets when the light is rotated.

Dunno. I don’t have enough info to take a guess – but I’m sure you’ll figure it out!

that is disappointing. i thought you would point out some obvious mistake i made. i can't find any difference in math between shadow mapping and projective mapping. i used exactly the same texture coordinates i used for working shadow mapping, in the same shader, and it doesn't position the projected texture properly if i rotate the light. example:

light fov - 60:
http://img845.imageshack.us/img845/1509/projecty.jpg

light fov - 75:
http://img14.imageshack.us/img14/4697/project2c.jpg

if the light points in its initial direction - it is centered properly. i'm not so sure about the scaling.


ok, the problem was my transformation order for rendering the light cone. i should've considered its initial orientation (Z-), so the rotation about Y goes first.
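for the cone mesh, that means something like this (a hypothetical sketch, mirroring the glRotatef calls from the shadow pass above):

glTranslatef(lightSource[f].position.x, lightSource[f].position.y, lightSource[f].position.z);
glRotatef(lightSource[f].rotation.y, 0, 1, 0); // Y first: the cone model initially points down -Z
glRotatef(lightSource[f].rotation.x, 1, 0, 0);
glRotatef(lightSource[f].rotation.z, 0, 0, 1);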