Floating point depth buffer questions

Hi folks!

I am relatively new to GLSL and I want to do some GPGPU computing using shaders, but I'm having some problems/questions regarding depth values and lookups… I hope you can help me with them…

I am using an FBO and have attached 32-bit floating-point color and depth buffers:

//Color attachment
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, Width, Height, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT+i, GL_TEXTURE_2D, ColorAttachmentTextures[i], 0);

and

//Depth attachment
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, Width, Height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, DepthBuffer,0);

both using GL_CLAMP wrapping and GL_NEAREST filtering (min and mag).
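For completeness, that state is set like this for each texture while it is bound (roughly):

//Wrap + filter state for both attachments
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);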

I want to do two-pass rendering where, in the first pass, the depth buffer is filled with depth values as usual, and in the second pass I'd like to access these values via a sampler2DShadow. For all tests the vertex shader only does gl_Position = ftransform();

To look up the depth value at the current fragment position I do:

gl_FragColor = shadow2D(depthBuffer, vec3(gl_FragCoord.x/xSize, gl_FragCoord.y/ySize, 1.0));

where xSize/ySize are the number of pixels in my buffer in x-/y-direction.

QUESTION 1: What does the 3rd coordinate mean? It does not seem to have any influence on the value(s) returned by shadow2D, and I don't understand the spec.

QUESTION 2: Is the x,y position calculation gl_FragCoord.x/xSize, gl_FragCoord.y/ySize correct?

QUESTION 3: When reading back the framebuffer and depth buffer using glReadPixels I get different results: the values in the depth buffer are all slightly larger than the values in the color framebuffer (which should have been filled with the depth-buffer values in the fragment shader). The difference is always 5.96046447753906e-008. Is this due to distinct 32-bit floating-point implementations for GL_RGBA32F and GL_DEPTH_COMPONENT32? Shouldn't both be IEEE 754?

QUESTION 4: … and a lookup-and-store-in-colorbuffer should not alter the value in any way?!

I’d greatly appreciate any help …

  1. the 3rd component is the reference value for shadow comparison. It is compared with the texel value being fetched to produce the in/out shadow result. If you actually want the depth value and not a shadow comparison, use texture2D instead of shadow2D. Note, the TEXTURE_COMPARE_MODE TexParameter must match your usage pattern to get defined results.

  2. It’s more typical to interpolate a texture coordinate passed from the vertex shader, i.e. covering [0…1] across the quad.

  3. DEPTH_COMPONENT32 is not a floating-point format (there's no "F" in the name). If you query the actual format you got with glGetTexLevelParameteriv(… GL_TEXTURE_INTERNAL_FORMAT …), most likely you'll see it is actually DEPTH_COMPONENT24 (see the sketch after this list). That would explain the delta you're seeing: 5.96046447753906e-008 is exactly 2^-24, i.e. one step of a 24-bit fixed-point depth buffer. If you really want a float depth buffer you need to use either DEPTH_COMPONENT32F (from ARB_depth_buffer_float) or DEPTH_COMPONENT32F_NV (from NV_depth_buffer_float). Note these are two different enums, with different clamping semantics.

  4. You’re not trying to write to the same buffer you’re sampling, are you? That’s undefined.
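To check point 3 concretely, a minimal sketch (assuming the depth texture is currently bound to GL_TEXTURE_2D):

//Ask the driver what it actually allocated
GLint fmt = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &fmt);
//GL_DEPTH_COMPONENT24 (0x81A6) here means the request was silently downgraded
printf("internal format: 0x%04X\n", fmt);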

First of all thank you very much for answering …

  1. So, as I understand it: as long as GL_TEXTURE_COMPARE_MODE is GL_NONE, the depth value "D" is returned in the vec4 as 000D, DDD1, or DDDD according to GL_DEPTH_TEXTURE_MODE; and when using GL_TEXTURE_COMPARE_MODE == GL_COMPARE_R_TO_TEXTURE, the third component is used as the "compare value" and I only get 0 or 1 as "D", right?

[quote]It's more typical to interpolate a texture coordinate passed from the vertex shader, i.e. covering [0…1] across the quad.[/QUOTE]
I am not rendering a fullscreen quad, if that's what you meant - my GPGPU algorithm strongly interacts with the normal geometry…

  3. You are absolutely right - there are no more differences between the color and depth buffer when using GL_DEPTH_COMPONENT32F. Thanks for pointing that out! :cool:

  4. I am writing to the depth buffer in the first pass and then reading from it in the 2nd. As a precaution I write-protect it in the 2nd pass. According to the FBO spec (4.4.3) this should work. As a safer workaround - is it possible to detach a 2D depth texture from an FBO depth attachment point without unbinding the FBO? Since I have to render a lot of frames I'd like to avoid unbinding the FBO - or how costly is the unbind-FBO/replace-depth-attachment/rebind-FBO process (or alternatively glCopyPixels)?

AFAIK, calls to FramebufferTexture2D or FramebufferTextureLayer must be performed on the active FBO, so there is absolutely no need to unbind it.
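For example (a sketch using the EXT entry points from your code; attaching texture name 0 detaches the current one):

//FBO stays bound the whole time
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, 0, 0); //detach
//… render the 2nd pass, sampling DepthBuffer as a texture …
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, DepthBuffer, 0); //reattach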

Oh, of course … right :o

Yes, but note that you can’t use a shadow sampler or shadow texture access function when depth compare is off (IIRC). Flip to sampler2D and texture2D(). In GLSL 1.3, they stopped putting the type in the texture access function, which helps (so it’s “texture()” in both cases). But you still need to change the sampler type.
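Something like this (a sketch; the two samplers are bound to two different texture objects, and uv is built the same way as your gl_FragCoord lookup):

uniform sampler2D       depthTex;  //compare mode GL_NONE
uniform sampler2DShadow shadowTex; //compare mode GL_COMPARE_R_TO_TEXTURE
uniform float xSize, ySize;
void main()
{
    vec2  uv  = gl_FragCoord.xy / vec2(xSize, ySize);
    float d   = texture2D(depthTex, uv).r;           //raw depth value
    float lit = shadow2D(shadowTex, vec3(uv, d)).r;  //comparison result, 0 or 1
    gl_FragColor = vec4(d, lit, 0.0, 1.0);
}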

and when using GL_TEXTURE_COMPARE_MODE == GL_COMPARE_R_TO_TEXTURE the third component is used as “compare value”

With plain shadow2D() (“texture()” in GLSL 1.3+), yes. And if you’re doing projective texture lookups (shadow2DProj() pre-1.3, or textureProj() post-1.3), then .xyz including your third component “compare” value are divided by .w before the texture lookup or compare.
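E.g. (a sketch; shadowCoord is a varying with w possibly != 1, see the vertex-shader sketch further down):

float lit = shadow2DProj(shadowTex, shadowCoord).r; //compares shadowCoord.z/w against the texel at shadowCoord.xy/w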

and i only get 0 or 1 as “D”, right?

Assuming you’ve got NEAREST set on the depth texture (1 depth texture sample/compare per fragment).

If you set LINEAR, then you’ve got 4 texture samples/compares per fragment + a blend of the results, so you could get fractional values too. I don’t know if there’s a specific extension for this or not (still looking), but it’s something NVidia added years back.

[quote]It's more typical to interpolate a texture coordinate passed from the vertex shader, i.e. covering [0…1] across the quad.
I am not rendering a fullscreen quad if you meant that - my gpgpu algorithm strongly interacts with the normal geometry…[/QUOTE]
I think what he's saying is that the usual use of depth textures (for shadow mapping) is that you back-project positions into light clip space and shift them from -1…1 to 0…1 (i.e. depth-buffer space) in the vertex shader, and then let the GPU interpolate those 0…1 shadow map positions across the triangle. The fragment shader then uses these coords for depth texture lookup and compare.
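In shader terms that setup is typically something like this (a sketch; lightMVP is assumed to be the light's projection*view*model matrix supplied by the app):

uniform mat4 lightMVP;
varying vec4 shadowCoord;
//bias matrix: shifts clip space [-1…1] into depth-buffer space [0…1]
const mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
                       0.0, 0.5, 0.0, 0.0,
                       0.0, 0.0, 0.5, 0.0,
                       0.5, 0.5, 0.5, 1.0);
void main()
{
    gl_Position = ftransform();
    shadowCoord = bias * lightMVP * gl_Vertex; //use with shadow2DProj() in the fragment shader
}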

Just adding: it’s called hardware PCF (percentage closer filtering). 4 samples are fetched from a shadow sampler, compared with a given depth and bilinearly filtered as a result.
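In code, turning it on is just (a sketch; GL_LEQUAL as the compare func is my assumption):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //4 compares + bilinear blend
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);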

AFAIK, it’s not an extension but just a HW feature (supported at least by Radeon 2400+ on ATI).

[quote]AFAIK, it's not an extension but just a HW feature (supported at least by Radeon 2400+ on ATI).[/QUOTE]
Ok. Found a blurb about it in the OpenGL 3.2 Spec under “Texture Comparison Modes”. Says if you set MIN_FILTER or MAG_FILTER to a type with LINEAR in it, the implementation “can” (optionally) perform multiple depth texture lookups/compares and blend the result. This isn’t new, as it’s mentioned waaay back in the GL 1.4 spec as well.

In the NVidia camp, this has been supported since GeForce 6+, and according to the NVidia GPU Programming Guide (GF6-7 version) it’s free: “if you turn on [LINEAR texture filtering] for the shadow map sampler, the hardware will perform 4 depth comparisons, and bilinearly filter the results for the same cost as one sample”. Probably still true for GF8+, as the latest Guide still says: “…have dedicated special transistors specifically for performing the shadow map depth comparison and percentage-closer filtering operations.”

BTW, two things I just tripped over related to this (I'm reading up on this for something else): it appears that ALPHA, LUMINANCE, and INTENSITY aren't the only options for DEPTH_TEXTURE_MODE. RED is as well (ARB_texture_rg), resulting in D001.

Also, if you’re using GLSL 1.3+, all of this is moot as DEPTH_TEXTURE_MODE is ignored and it behaves as if DEPTH_TEXTURE_MODE = LUMINANCE (yielding DDD1).
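Illustrated (a sketch; sampler and varying names are placeholders):

#version 130
uniform sampler2D depthTex; //depth texture, compare mode GL_NONE
in vec2 uv;
out vec4 fragColor;
void main()
{
    //behaves as LUMINANCE: the lookup yields (D, D, D, 1)
    float d = texture(depthTex, uv).r; //.g or .b would return the same D
    fragColor = vec4(d);
}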

Well, thank you all for the insights :slight_smile:
I don't know how long it would have taken me to figure this all out, especially the thing with the linear depth interpolation from four samples - thanks @Dark Photon.

Also, if you’re using GLSL 1.3+, all of this is moot as DEPTH_TEXTURE_MODE is ignored and it behaves as if DEPTH_TEXTURE_MODE = LUMINANCE (yielding DDD1).

… damn, it's happening again - I sometimes think things become deprecated faster than I can learn them. :sorrow:
Nevertheless I have to try…

Yeah, I know what you mean. It's to be expected with hardware shadow maps though. They've been in hardware many years longer than shaders, there are bazillions of papers and articles based on older hardware, and shadow mapping has been tweaked, bolted onto, and repackaged so many times in so many ways that it can be confusing.
There isn't one reference that pulls it all together for modern hardware unfortunately (GL 3.2 + GLSL 1.5) … but there should be. You can do a lot worse than starting with the shadow map section of the OpenGL Shading Language (3rd ed.) book (keeping in mind that's GL 3.1 + GLSL 1.4).

On that note, it blows me away that the red book still doesn't mention depth textures or hardware depth comparisons…

Oh, it does, but there are errata in the OpenGL 2.0 edition.
Better to read some NVIDIA papers I think…

Yep I did shadow maps on TNT2!
