Basic depth buffer confusions

There are a couple of issues with the depth buffer that confuse me:

  1. The depth of a fragment could be the distance between the near plane and the fragment (with the fragment in eye space), or it could be the distance between the camera and the fragment (spatial reference irrelevant). Is the difference between these two the difference between normalized device coordinates and linear coordinates?

  2. If a depth value is stored using gl_FragData[0] in a fragment shader, does the write to the depth buffer ensure the value is written only if it is less than the value currently in the buffer? I.e., can you only update the depth buffer with nearer values, except when the buffer is cleared?

the depth of a fragment could be

The input depth or the output depth? The input depth, gl_FragCoord.z, is defined quite clearly by the OpenGL specification: it is the window-space Z coordinate. The transform from the vertex shader clip-space output to window-space is well defined in the specification. A complete breakdown of the math involved can be found here.
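
In code, that transform looks roughly like this (a minimal sketch, assuming the default glDepthRange(0, 1) and a standard perspective projection with near/far distances n and f; the second function is the "linear vs. NDC" distinction from your first question):

    /* Sketch of the fixed-function depth transform the spec defines. */
    float clip_to_window_z(float clip_z, float clip_w)
    {
        float ndc_z = clip_z / clip_w;  /* perspective divide -> [-1, 1] */
        return ndc_z * 0.5f + 0.5f;     /* default depth range -> [0, 1] */
    }

    /* Recover linear eye-space depth from a window-space depth value,
     * assuming a standard perspective projection with near/far planes
     * n and f. */
    float window_to_eye_z(float win_z, float n, float f)
    {
        float ndc_z = win_z * 2.0f - 1.0f;
        return (2.0f * n * f) / (f + n - ndc_z * (f - n));
    }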

The output depth will be the input depth if you don’t write anything to gl_FragDepth. If you do, then it will be exactly and only what you choose to write.

if a depth value is stored using gl_FragData[0] in a fragment shader

What is a “depth value”? There are only values; they do not intrinsically have concepts like “depth” or “color”. A value only has meaning in the context of how it is used.

The value you write to gl_FragData is necessarily a “color” value because you’re writing it to a buffer bound to a COLOR_ATTACHMENT. There’s nothing stopping you from writing that same value to gl_FragDepth as well.
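
For example (a hypothetical sketch; the GLSL version and attachment layout are my assumptions, not anything from your code), a fragment shader handed to glShaderSource() could write the window-space depth to both places:

    /* Hypothetical GLSL 1.20 fragment shader, as a C string for
     * glShaderSource(). It writes the incoming window-space depth to
     * the buffer bound to COLOR_ATTACHMENT0 (via gl_FragData[0]) and
     * also to the depth buffer (via gl_FragDepth). */
    static const char *depth_both_fs =
        "#version 120\n"
        "void main()\n"
        "{\n"
        "    float d = gl_FragCoord.z;            /* window-space depth */\n"
        "    gl_FragData[0] = vec4(d, d, d, 1.0); /* used as a color */\n"
        "    gl_FragDepth = d;                    /* used as a depth */\n"
        "}\n";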

Thanks, Alfonse. From what I have read, you cannot both read from and write to a framebuffer in the same pass. So is it not possible to check the depth value in the depth buffer before overwriting it with a write to gl_FragDepth?

That restriction is somewhat relaxed by ARB/EXT_shader_image_load_store and NV_texture_barrier; however, I don't really understand why you ask that, as Alfonse did not talk about anything like that.

I didn’t mean to suggest he did. It was a new line of inquiry as I piece together an understanding of how to work with the depth buffer. I don’t suppose anyone knows of a simple working example that limits itself to drawing a scene’s depth buffer to a texture, then displaying that texture on a screen-aligned quad? Specifically, using GLSL to handle the RBO writes and texture reads; and I believe HLSL can draw a screen-aligned quad without requiring attribute data, so that too, if GLSL can do it.

If you disable depth-testing or depth-writing, you can bind the depth-buffer texture as a texture to sample from, and read from it while rendering (to the color textures). This way you can implement custom fog/volumetric effects. And it’s a legal/valid thing to do.
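
A minimal sketch of that setup (names like depth_tex and fog_program are placeholders, and the depth texture is assumed to have been created as in the P.S. below):

    /* Sketch: disable depth writes, then sample the depth texture
     * while rendering the effect pass, so the read and the write
     * never alias. */
    glDepthMask(GL_FALSE);                    /* no depth writes */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, depth_tex);  /* the FBO's depth texture */
    glUseProgram(fog_program);
    glUniform1i(glGetUniformLocation(fog_program, "u_depth"), 0);
    /* ... draw the fog/volumetric geometry here ... */
    glDepthMask(GL_TRUE);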

The safest way is to copy the texture with glBlitFramebuffer and use that copy, regardless of whether you keep depth-writing enabled. Of course, this gives you old depth values, but that will usually be OK: you’ll generally have rendered all opaque/alpha-tested geometry beforehand anyway.
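
Something along these lines (a sketch; the FBO names and sizes are placeholders):

    /* Sketch: blit the scene FBO's depth into a second FBO whose depth
     * attachment is the texture you'll sample. Depth blits must use
     * GL_NEAREST. */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, scene_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, depth_copy_fbo);
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_DEPTH_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, scene_fbo);  /* resume rendering */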

NV_texture_barrier lets you invalidate texture caches before a drawcall. It can be used with depth-writing enabled. It will give you a correct sampled value on all pixels only the first time they are drawn on (within the drawcall). Meaning, if your drawcall produces 2 triangles where the second triangle covers the first, the first triangle’s pixels will be computed correctly, but the second triangle’s pixels will not: some of them will use the old texture values, others the new values.
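
In use it is just one extra call between the dependent drawcalls (a sketch; the two draw functions are placeholders):

    draw_depth_writing_pass();  /* placeholder: writes new depth values */
    glTextureBarrierNV();       /* flush texture caches so those writes
                                   become visible to the samplers */
    draw_sampling_pass();       /* placeholder: samples that depth texture */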

ARB_shader_image_load_store lets you read/write any texture at any time and get the newest values from it (if you set it up to do so). It can be used with depth-writing, and will always work “correctly”, albeit more slowly.
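
A hypothetical GLSL 4.20 sketch of the “set it up to do so” part; the coherent qualifier is what asks for the newest values:

    /* Hypothetical GLSL 4.20 fragment shader (as a C string for
     * glShaderSource()) that reads and writes a single-channel image
     * in the same pass. 'coherent' requests the newest values. */
    static const char *image_rw_fs =
        "#version 420\n"
        "layout(r32f) coherent uniform image2D u_depth_image;\n"
        "out vec4 frag_color;\n"
        "void main()\n"
        "{\n"
        "    ivec2 p = ivec2(gl_FragCoord.xy);\n"
        "    float old = imageLoad(u_depth_image, p).r;   /* newest value */\n"
        "    imageStore(u_depth_image, p, vec4(gl_FragCoord.z));\n"
        "    frag_color = vec4(old);\n"
        "}\n";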

The core of the problem you’ve met is that the ROPs (which do pixel/depth writing, testing, and blending) historically used different memory caches than the texture samplers, and those caches do not synchronize with each other. So, when you update a depth value via a ROP, there is no guarantee that the texture sampler will know of this update; it will happily use the old values it copied earlier.

P.S. To create a depth buffer that you can read as a texture, you need to use FBOs, and the depth texture’s internalformat/format are both GL_DEPTH_COMPONENT instead of GL_RGBA8/GL_RGBA.
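
For example (a sketch of that setup; the 512×512 size is a placeholder, and the sized GL_DEPTH_COMPONENT24 internalformat is one common concrete choice):

    /* Sketch: a depth-only FBO whose depth attachment is a texture
     * that can later be bound and sampled. */
    GLuint depth_tex, fbo;
    int width = 512, height = 512;  /* placeholder size */

    glGenTextures(1, &depth_tex);
    glBindTexture(GL_TEXTURE_2D, depth_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
                 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depth_tex, 0);
    glDrawBuffer(GL_NONE);   /* depth-only: no color attachment */
    glReadBuffer(GL_NONE);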

There are code fragments here showing the call sequences that work. These aren’t full source code; “using it to display a fullscreen quad” and such is left in your capable hands.

http://www.opengl.org/wiki/Framebuffer_Object_Examples

and more specifically, the section “Quick example, render_to_texture (2D Depth texture ONLY)”.
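
For the screen-aligned quad part of your earlier question, the usual attributeless GLSL trick (a sketch of my own, not from that wiki page; it assumes GLSL 1.30+ for gl_VertexID) is to generate a fullscreen triangle in the vertex shader and draw it with glDrawArrays(GL_TRIANGLES, 0, 3) with no vertex attributes bound:

    /* Hypothetical shaders (as C strings) that display a depth texture
     * with no vertex attributes. In a core profile you still need an
     * (empty) VAO bound when drawing. */
    static const char *fullscreen_vs =
        "#version 130\n"
        "out vec2 v_uv;\n"
        "void main()\n"
        "{\n"
        "    vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);\n"
        "    v_uv = p;\n"
        "    gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);\n"
        "}\n";

    static const char *show_depth_fs =
        "#version 130\n"
        "in vec2 v_uv;\n"
        "uniform sampler2D u_depth;\n"
        "out vec4 frag_color;\n"
        "void main()\n"
        "{\n"
        "    float d = texture(u_depth, v_uv).r;  /* depth in [0, 1] */\n"
        "    frag_color = vec4(d, d, d, 1.0);\n"
        "}\n";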
