How do I access the depth buffer in a frag shader?

The background is that I want to do a kind of “soft blending” effect for light coronas, avoiding sharp edges where they hit the geometry by blending them with the pixels in front of them, using the distance of a corona pixel to the pixel hiding it as a color scale (additive blending intended).

This stuff here doesn’t work, but I cannot tell why.


glGenTextures (1, &hDepthBuffer);
glBindTexture (GL_TEXTURE_2D, hDepthBuffer);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA);
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, <width>, <height>, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glCopyTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, <width>, <height>, 0);

Here is the shader, with the screen resolution (800x600) and z-near/z-far (1.0, 6553.5) hard-coded. The current back buffer has also been copied to a texture and is passed to the shader in renderTex (that copy is shown below, after the shader). Is there a way to directly access the back buffer (draw buffer) in the fragment shader?


uniform sampler2D glareTex;
uniform sampler2D renderTex;
uniform sampler2D depthTex;

// zFar / (zFar - zNear) and -zFar / (zFar - zNear), for zNear = 1.0, zFar = 6553.5
vec2 depthScale = vec2 (6553.5 / 6552.5, 6553.5 / -6552.5);

void main (void)
{
    vec2 scrCoord = vec2 (1.0 / 800.0, 1.0 / 600.0) * gl_FragCoord.xy;
    vec4 glareColor = texture2D (glareTex, gl_TexCoord [0].xy);
    vec4 renderColor = texture2D (renderTex, scrCoord);
    // depth of the scene pixel behind this fragment, converted to eye space
    float depthZ = depthScale.y / (depthScale.x - texture2D (depthTex, scrCoord).a);
    // window-space depth of the corona fragment itself
    float fragZ = gl_FragCoord.z; /*depthScale.y / (depthScale.x - gl_FragCoord.z);*/
    if (fragZ < depthZ)
        glareColor /= depthZ - fragZ;
    gl_FragColor = renderColor + glareColor * gl_Color;
}
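For completeness, the back-buffer copy mentioned above is done along these lines (just a sketch; hRenderBuffer is a placeholder name for the texture handle bound to renderTex, and GL_RGB is an assumed internal format):


/* grab the current color back buffer into the texture passed to the shader as renderTex */
glBindTexture (GL_TEXTURE_2D, hRenderBuffer);
glCopyTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, 0, 0, <width>, <height>, 0);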

Geez, I had thought that if there is any place at all where people know how to do such a thing, it should be this one.

gl_FragCoord.z is Z in window space, just like the value in your depth texture. Thus if you need their difference in eye space you need to transform both of them the same way.

And no, you can’t access the draw buffer in the fragment shader.
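Roughly like this (a sketch only, untested; it keeps the planes your shader already hard-codes, zNear = 1.0 and zFar = 6553.5, and linearizeDepth is just a name I made up):


uniform sampler2D glareTex;
uniform sampler2D renderTex;
uniform sampler2D depthTex;

const float zNear = 1.0;
const float zFar = 6553.5;

// window-space depth (0..1) -> eye-space z (negative in front of the camera),
// assuming a standard perspective projection with the planes above
float linearizeDepth (float zWindow)
{
    return (zNear * zFar) / (zWindow * (zFar - zNear) - zFar);
}

void main (void)
{
    vec2 scrCoord = vec2 (1.0 / 800.0, 1.0 / 600.0) * gl_FragCoord.xy;
    vec4 glareColor = texture2D (glareTex, gl_TexCoord [0].xy);
    vec4 renderColor = texture2D (renderTex, scrCoord);
    // run both depth values through the same conversion before comparing them
    float depthZ = linearizeDepth (texture2D (depthTex, scrCoord).a);
    float fragZ = linearizeDepth (gl_FragCoord.z);
    if (fragZ < depthZ)
        glareColor /= depthZ - fragZ;
    gl_FragColor = renderColor + glareColor * gl_Color;
}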

I tried that (you can see I commented it out), but it still didn’t work. Any more hints?

What do you get if you just output the value read from the depth texture? What hardware are you using, btw?

ATI X1900 XT, WinXP SP2, Catalyst 7.10.

The depth texture contains almost exclusively values > 0.999 (due to my large z range and everything in my scene being pretty close). In addition to copying it to a texture, I have copied the depth buffer to an array accessible in the debugger using glReadPixels(). If I apply the unscaling (linearization) to values from that array, they look alright to me when I compare selected ones with the rendered scene.
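That readback looks roughly like this (a sketch; depthData and the hard-coded 800x600 are placeholders):


static GLfloat depthData [600][800]; /* placeholder array, screen-sized */

/* read the current depth buffer into client memory so it can be inspected in the debugger */
glReadPixels (0, 0, 800, 600, GL_DEPTH_COMPONENT, GL_FLOAT, depthData);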

I had built some code into the shader making a corona pixel blue if it was behind a geometry pixel, and orange if it was in front. I only got blue pixels although a lot of corona pixels are clearly not occluded.

Do I need to call glFlush() before copying the depth and draw buffers?

That could be a problem. Have you tried reducing the depth range as much as possible?

For further debugging, try linearizing both depth values, normalize them (dividing by far clip distance) and output them as well as their absolute difference to three channels of the color buffer.
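For example, something along these lines in the corona shader (a sketch reusing your hard-coded values; scene depth ends up in red, the corona fragment’s depth in green, their absolute difference in blue):


uniform sampler2D depthTex;

const float zNear = 1.0;
const float zFar = 6553.5;

float linearizeDepth (float zWindow)
{
    return (zNear * zFar) / (zWindow * (zFar - zNear) - zFar);
}

void main (void)
{
    vec2 scrCoord = vec2 (1.0 / 800.0, 1.0 / 600.0) * gl_FragCoord.xy;
    // linearize both depths, then divide by -zFar to bring them into [0,1] for display
    float depthZ = linearizeDepth (texture2D (depthTex, scrCoord).a) / -zFar;
    float fragZ = linearizeDepth (gl_FragCoord.z) / -zFar;
    gl_FragColor = vec4 (depthZ, fragZ, abs (depthZ - fragZ), 1.0);
}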

Do I need to call glFlush() before copying the depth and draw buffers?

No, you shouldn’t need glFlush().

I don’t know what you mean by linearizing the values. They are linearized in the shader, aren’t they? I changed the computation of fragZ as you told me.

The scene has depth values roughly between -1 and -113 (not normalized). I may, however, have scenes with a much greater z range, at least 1 to 1000, if not more.

I mean, depth testing works for the coronas in the fixed function pipeline with the current z range, so why shouldn’t it in the shader?

It’s incredible how hard something can be that should actually be pretty simple.

Is there any apparent problem in the shader, or do I need to look for flaws somewhere else?

Am I accessing the depth texture the proper way (texture2D().a)? I declared its depth texture mode as alpha, but …

What I meant is: draw a fullscreen quad, linearize the depth read from the texture as you already did, normalize it to [0,1] (divide by -far distance) and output it to RGB. That’s the easiest way to check whether the depth texture is ok.

After you’ve checked that, you can render your coronas on top, doing the same as above but this time using gl_FragCoord.z. That way you can instantly see if something’s going wrong.
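A minimal fragment shader for that fullscreen pass could look like this (again just a sketch with the same hard-coded values; for the corona pass you would output linearizeDepth (gl_FragCoord.z) / -zFar the same way):


uniform sampler2D depthTex;

const float zNear = 1.0;
const float zFar = 6553.5;

float linearizeDepth (float zWindow)
{
    return (zNear * zFar) / (zWindow * (zFar - zNear) - zFar);
}

void main (void)
{
    vec2 scrCoord = vec2 (1.0 / 800.0, 1.0 / 600.0) * gl_FragCoord.xy;
    // eye-space depth of the scene pixel, normalized to [0,1] (near = dark, far = white)
    float z = linearizeDepth (texture2D (depthTex, scrCoord).a) / -zFar;
    gl_FragColor = vec4 (z, z, z, 1.0);
}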

Well, the values arriving in the depth texture are normalized, aren’t they? I printed them to the corona quads in the shader for a test, and they turned out white. As I said, most > 0.999.

Yes, but you want them in eye space. After linearizing you need to normalize them again to make them displayable, if you want to visually inspect the depth values.

I appreciate your endurance :), but what am I expected to see when rendering that texture? Do you think I can spot a problem just by looking at a (more or, in my case, rather less) colorful texture? I don’t think I can.

Does my code look alright so far, particularly the code regarding depth texture properties and copying?

Edit:

It looks like the depth texture arriving in the shader is crap. Why could that be?

I got it!

Via Google, I found another thread about a similar problem. That thread’s author had finally asked on NVidia’s developer forum, where an NVidia guy told him that the depth texture mode must be GL_LUMINANCE and the internal format must be GL_DEPTH_COMPONENT24.

How on earth is one supposed to know that if one isn’t a driver developer at NVidia or ATI?
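For reference, this is roughly how I read that advice applied to the setup from my first post (a sketch, everything else unchanged):


glGenTextures (1, &hDepthBuffer);
glBindTexture (GL_TEXTURE_2D, hDepthBuffer);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE); /* was GL_ALPHA */
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, <width>, <height>, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); /* explicit 24-bit internal format */
glCopyTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 0, 0, <width>, <height>, 0);


With GL_DEPTH_TEXTURE_MODE set to GL_LUMINANCE the depth value is replicated into the RGB channels instead of alpha, so the lookup in the shader becomes texture2D (depthTex, scrCoord).r rather than .a.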

I am not sure if it has anything to do with your question, but is this what you really mean?

glareColor /= depthZ - fragZ
or
glareColor /=(depthZ - fragZ)

You don’t need the brackets: the entire expression to the right of a compound assignment operator is evaluated first. The statement means “divide glareColor by (depthZ - fragZ) and store the result in glareColor”.
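In other words, these two statements do the same thing:


glareColor /= depthZ - fragZ;
glareColor = glareColor / (depthZ - fragZ);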

Btw, the depth texture code works for ATI too, not just for NVidia.