Some experiments with gl_FragDepth

I have a simple shader with a color texture, u_Texture0, and a depth texture rectangle, u_DepthTexture.

First, I tried this:

gl_FragColor =  texture2D( u_Texture0, Coordinates );

which resulted in:

[screenshot]

Ok, fine, just what I expected.

Then I tried:

gl_FragColor =  texture2D( u_Texture0, Coordinates );
gl_FragDepth = gl_FragCoord.z;	

which should have had the same result, right? Instead I got:

[screenshot]

Wtf? Ok, then I tried to read depth from my depth texture rectangle:

gl_FragColor =  texture2D( u_Texture0, Coordinates );
gl_FragDepth = texture2DRect( u_DepthTexture, gl_FragCoord.xy ).r;

which resulted in:

[screenshot]

My depth texture is fine; when I assign it to gl_FragColor, the output is the same as the depth texture. So what's wrong? Btw, my graphics board is a Radeon 9800 Pro -.-

Your description is a bit unclear. Is this what you’re trying to do:

  1. Render the 3D scene
  2. Copy color & depth to textures
  3. Put the scene on screen
  4. Overlay some geometry on top of it?

Please describe what you currently do. A single line from a shader is not enough if we don't know where that shader is used.

Your problems probably come from comparing a 24-bit depth buffer with a 16-bit depth texture, from depth-texture filtering, or from shaders that are not position invariant. It could be something else; at this point I cannot tell you more.
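To illustrate the position-invariance point: a minimal sketch of what a position-invariant vertex shader for the extra passes could look like, assuming the Coordinates varying used in the fragment code above and taking the texture coordinates from gl_MultiTexCoord0 (that source is an assumption, not the poster's actual code):

// Hypothetical vertex shader sketch for the lighting pass.
// ftransform() computes the position exactly as the fixed-function
// pipeline does, so the depth generated here matches the depth
// generated by the first pass.
varying vec2 Coordinates;

void main()
{
    Coordinates = gl_MultiTexCoord0.xy;  // assumed source of the texcoords
    gl_Position = ftransform();          // position-invariant transform
}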

  1. A depth and ambient pass is rendered.
  2. Depth is copied to a texture rectangle.
  3. Lighting passes are rendered with a simple shader; the only interesting lines are the ones I posted above.

The depth texture rectangle is supposed to have 24-bit depth; at least, I create it like this:

glTexImage2D( GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH_COMPONENT24, m_kSize.x, m_kSize.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0 );

Filtering is set to GL_NEAREST for both the min and mag filters.

Thanks :)

the only interesting lines are the ones i posted above
Not really. Your problem is probably in the vertex shader. Do you use ftransform()?

And does your second pass render any geometry, or is it just a fullscreen quad? Do you really need the depth test in the second pass?

If you need more information about z-buffer storage, then have a look at this page:
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
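For reference, the mapping that page describes, assuming a standard perspective projection with near plane n, far plane f, eye-space distance d, and the default depth range, is roughly:

    z_window = f * (d - n) / (d * (f - n))

so an N-bit depth buffer stores about (2^N - 1) * z_window, which is why most of the precision sits close to the near plane.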

Originally posted by Vexator:
glTexImage2D( GL_TEXTURE_RECTANGLE_ARB, 0, GL_DEPTH_COMPONENT24, m_kSize.x, m_kSize.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0 );
You will need to use GL_INTEGER; GL_UNSIGNED_BYTE will convert the depth values back to 8 bits. I think.

You will need to use GL_INTEGER; GL_UNSIGNED_BYTE will convert the depth values back to 8 bits. I think.
No, the last three parameters only specify what kind of data you pass from the CPU. And since the last parameter is NULL, the texture image is not initialized. So GL_DEPTH_COMPONENT and GL_UNSIGNED_BYTE actually don't matter at all here.

Not really - your problem is probably in the vertex shader - do you use ftransform() ?
Yes.

And does your second pass render any geometry or is it just a fullscreen quad? Do you really need depth test in second pass?
I render the same geometry as in the initial pass. The depth mask is disabled for the lighting passes.

You will need to use GL_INTEGER; GL_UNSIGNED_BYTE will convert the depth values back to 8 bits.
Ok, I changed it. It didn't help, though…

The specification states that:

  - All shaders that either conditionally or unconditionally copy the input gl_FragCoord.z to the output gl_FragDepth are depth-invariant with respect to each other, for those fragments where this copy actually is done.
  - A fragment shader that does not write to gl_FragDepth is depth-invariant with fixed function.

The specification does not guarantee depth-invariance if one shader writes to gl_FragDepth and the second does not.

Mmh… I only use two shaders, and the one I'm not writing gl_FragDepth in is the one that is active when the depth buffer is written to. So is there a way for me to solve this?

Originally posted by Vexator:
Mmh… I only use two shaders, and the one I'm not writing gl_FragDepth in is the one that is active when the depth buffer is written to. So is there a way for me to solve this?
Write the depth in both of them or in neither of them.

Why do you need to write the depth at all? Writing depth from the shader will kill various early depth test optimizations, even if depth buffer writes are disabled.
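If the depth write is kept, a minimal sketch of the "write it in both" option could look like the following. The u_Texture0 and Coordinates names come from the code posted above; the bodies are placeholders, not the actual shaders from this thread:

// Depth/ambient pass fragment shader (sketch).
void main()
{
    gl_FragColor = vec4( 0.0 );        // ambient term would go here
    gl_FragDepth = gl_FragCoord.z;     // explicit copy -> depth-invariant
}

// Lighting pass fragment shader (sketch).
uniform sampler2D u_Texture0;
varying vec2 Coordinates;

void main()
{
    gl_FragColor = texture2D( u_Texture0, Coordinates );
    gl_FragDepth = gl_FragCoord.z;     // same copy -> matches the first pass
}

The alternative, as suggested above, is simply to drop both gl_FragDepth writes and let the fixed-function depth pass through, which also keeps early depth test optimizations working.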
