problem with depth texture

Hi,

my goal is to draw some cubes into a framebuffer object, draw the result back into the default framebuffer, and finally draw something more (a single quad) into the default framebuffer that correctly overlaps with the geometry drawn before. To achieve this I need to re-use the z-data from the framebuffer object. I have a depth texture attached to the framebuffer object, and when drawing back to the default framebuffer I use a shader to write the z-data.
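For context, the per-frame structure is roughly the following. This is a simplified sketch rather than the code of the minimal application; the draw_* helpers as well as fbo_id, fbo_color_id and program_id are placeholders for my actual functions and objects:

// 1) render the cubes into the FBO (color + depth texture attached)
glBindFramebuffer( GL_FRAMEBUFFER, fbo_id );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
draw_cubes();

// 2) draw the FBO contents back into the default framebuffer, restoring
//    both color and depth via the shaders shown further down
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glUseProgram( program_id );
glUniform1i( glGetUniformLocation( program_id, "texture0" ), 0 );
glUniform1i( glGetUniformLocation( program_id, "depthtex" ), 1 );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, fbo_color_id );
glActiveTexture( GL_TEXTURE1 );
glBindTexture( GL_TEXTURE_2D, fbo_depth_id );
draw_fullscreen_quad();
glUseProgram( 0 );

// 3) draw the additional quad, depth-tested against the restored z-data
glEnable( GL_DEPTH_TEST );
draw_overlap_quad();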

The following screenshot shows the result:

As you can see, the overlapping doesn’t work properly.

I have written a minimal application which reproduces the problem; more on that later. Yesterday I was playing around with the code and noticed that if I make the range between the near clipping plane and the far clipping plane smaller, the ‘effect’ shown above becomes smaller. The following screenshot shows the same scene with the near clipping plane moved from 0.1f to 1.f and the far clipping plane from 100.f to 50.f:

So I believe the depth values are somehow quantized far too much. But why? It cannot be the precision of my depth texture, which is 24 bit, since if I change it to 32 bit the ‘effect’ stays exactly the same. If it were caused by too low a precision of the depth texture, I would expect the ‘effect’ to become less pronounced.
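To make that more concrete, here is my understanding of how window-space depth depends on the clipping planes; this is just a standalone sanity check of the standard perspective mapping (assuming glDepthRange(0,1)), not code from the minimal application:

#include <cstdio>

// Window-space depth for an eye-space distance d in front of the camera,
// with near plane n and far plane f:
//   z_win = f * (d - n) / (d * (f - n))
// The mapping is strongly non-linear, so most of the depth resolution sits
// close to the near plane, and shrinking the n..f range helps objects
// further away.
double window_depth( double d, double n, double f )
{
    return ( f * ( d - n ) ) / ( d * ( f - n ) );
}

int main()
{
    // depth of a point 10 units away for my two sets of clipping planes
    std::printf( "n=0.1, f=100: %f\n", window_depth( 10.0, 0.1, 100.0 ) ); // ~0.991
    std::printf( "n=1,   f=50 : %f\n", window_depth( 10.0, 1.0, 50.0 ) );  // ~0.918
    return 0;
}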

I can think of two possible reasons, though I’m not able to explain either of them. Either the projection matrix is getting messed up somehow so that the ratio of the far to the near clipping plane becomes far too large, or the quantization is caused by the “gl_FragDepth = texture2D( depthtex, gl_TexCoord[0].xy ).a” instruction in the fragment shader.

Vertex shader:

void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment shader:

#version 120

uniform sampler2D texture0;
uniform sampler2D depthtex;

void main()
{
    gl_FragColor = vec4( texture2D( texture0, gl_TexCoord[0].xy ).xyz, 1.0 );
    gl_FragDepth = texture2D( depthtex, gl_TexCoord[0].xy ).a;
}

The minimal application can be found here:
http://pastebin.com/UctqYW33

If you have SFML2, you can use the following wrapper code for running the application:
http://pastebin.com/yJgvQpnK

Otherwise use the library of your choice for creating the GL context, like freeglut or whatever.

I would be very happy if someone could take a look at it.
I have no clue what I’m doing wrong or why it doesn’t work.

Thanks in advance!

First, how are you specifying the depth precision on the texture? I know FBOs have funky ways to share the depth and stencil. Also, a lot of machines have lower-precision (e.g. 16-bit) depth buffers for the default framebuffer, so make sure you’re not cranking up the precision on your FBO while the default depth buffer is the problem.

Out of curiosity, why are you using the FBO for the rendering? Both the texture and the framebuffer are the same resolution, correct?

That’s the initialization code of the depth texture:

glBindTexture( GL_TEXTURE_2D, fbo_depth_id );

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA );

glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
              screen_width, screen_height, 0, GL_DEPTH_COMPONENT,
              GL_UNSIGNED_BYTE, NULL );

So the depth precision of the texture is specified by GL_DEPTH_COMPONENT24.
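For completeness, the texture is attached to the FBO along these lines (paraphrased; fbo_id and fbo_color_id stand in for my actual object names, the exact code is in the pastebin):

glBindFramebuffer( GL_FRAMEBUFFER, fbo_id );

glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, fbo_color_id, 0 );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                        GL_TEXTURE_2D, fbo_depth_id, 0 );

// sanity check
if ( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE )
    std::cout << "FBO incomplete!" << std::endl;

glBindFramebuffer( GL_FRAMEBUFFER, 0 );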

I’ve put these lines of code right at the beginning of my scene drawing routine:

GLint depthbits;
glGetIntegerv( GL_DEPTH_BITS, &depthbits );
std::cout << "depthbits: " << depthbits << std::endl;

It reports the expected value of 24. Furthermore, everything looks fine if I render the cubes into the default framebuffer instead of the FBO. So I believe the default depth buffer is set up correctly.

I don’t understand - what do you mean by ‘why’?

Yes, everything is 800x600.

Besides the PROJECTION matrix, ensure the MODELVIEW matrix is also the same, along with glDepthRange and glViewport. Also ensure both the FBO and the system framebuffer are using the same sampling (start with single sampling).
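For example, you could dump the relevant state at the start of both passes and diff the output; a quick sketch:

GLint   viewport[4];
GLfloat depth_range[2];
GLfloat projection[16];
GLfloat modelview[16];

glGetIntegerv( GL_VIEWPORT, viewport );
glGetFloatv( GL_DEPTH_RANGE, depth_range );
glGetFloatv( GL_PROJECTION_MATRIX, projection );
glGetFloatv( GL_MODELVIEW_MATRIX, modelview );

std::cout << "viewport: " << viewport[0] << " " << viewport[1] << " "
          << viewport[2] << " " << viewport[3] << std::endl;
std::cout << "depth range: " << depth_range[0] << " "
          << depth_range[1] << std::endl;
// print the two matrices as well and compare between the FBO pass and the
// default framebuffer pass; any difference is a suspect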

I would try this: render the exact same scene in both the FBO and the system framebuffer at what you “think” is the same depth precision (DEPTH_COMPONENT24 most likely). Then overlay them to see if there are any differences and where they are. There are many ways to do that, including rendering the FBO on top of the system FB as a full-screen quad with BLEND and DEPTH_TEST EQUAL while writing FragDepth. Alternatively, you could copy the system FB off to a higher-precision texture format (32F?), but then you have to be concerned about what that conversion might be doing to you. It may be safer to force a 32F conversion of both, so you should at least end up with the same quantized value if you started from the same one. The nice thing about the latter is that you could feed both in as input textures to a final pass and actually color the differences based on their sign and magnitude. Alternatively, read back to the CPU and look at the values, but again there’s a conversion to be concerned about, and it’s more like drinking from a firehose. :stuck_out_tongue:
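A rough sketch of the EQUAL-test overlay, assuming the scene is already rendered into the system framebuffer and program_id is your existing full-screen-quad shader (the one that writes FragDepth); draw_fullscreen_quad() is a placeholder for your quad drawing code:

// only fragments whose written depth exactly equals the depth already in
// the system framebuffer survive, so any mismatch shows up as a hole
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_EQUAL );
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

glUseProgram( program_id );   // the color + gl_FragDepth shader
draw_fullscreen_quad();
glUseProgram( 0 );

glDisable( GL_BLEND );
glDepthFunc( GL_LESS );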

Hi Dark Photon, I’m not sure what the aim of doing so would be.
I know exactly where my scene looks different when I render it to the FBO instead of the default framebuffer.

Just to help you get a more solid line on the problem. You’re puzzled, no?

One question: what is the hardware?

One desperate idea: render to the FBO as you are now, then make a second FBO and render into it what you are currently rendering to the screen. The results should be the same as on screen, i.e. still bugged.
Then post a screenshot of the 2nd FBO next to the original FBO. If the cubes don’t perfectly line up between the two, you know the projection matrices don’t match or the texture coordinates are not right. Another odd, and most likely very dumb, question: which do you render to the screen first, the FBO contents or the grey plane? This should not make a difference, though…

Just out of curiosity, are you using the fixed-function pipeline or a shader when rendering the grey plane? It still would not explain the jagginess, though. What is really, really odd is how the cube edges are not jagged but the depth business is… really weird. My only hunch is that your projection matrices are different between the render to FBO and the render to screen, or worse: a driver bug [but that does not seem so likely].

One more idea:


glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA );

is not part of the GL3 core profile… though it is there in a compatibility context. Try taking that call out and use the .r component of the depth texture in your shader… this should NOT make a difference, but you never know.
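That is, drop the DEPTH_TEXTURE_MODE call and change the read in the fragment shader, roughly:

// remove this from the depth texture setup:
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA );

// and in the fragment shader read the red channel instead of alpha:
gl_FragDepth = texture2D( depthtex, gl_TexCoord[0].xy ).r;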

Thanks man, that was the solution! I don’t really understand it, but it works now. Thanks very, very much :smiley: I’ve been stuck on this for 2.5 weeks or so.

Glad to help, but for the record can you post what hardware, OS and driver version?

Yeah, IIRC from the spec, in GLSL 1.3+ DEPTH_TEXTURE_MODE is ignored and GLSL behaves as if it is always set to LUMINANCE (rrr1).

In older versions, ALPHA mode gives you 000r.

r of course being your 0…1 depth value.

GeForce 6600 GT
NVIDIA Driver Version 195.36.24
Ubuntu Linux 10.04 Lucid Lynx 32bit