Framebuffer depth test with depth and stencil buffer

I’m attempting to implement deferred shading. I’ve got a number of problems with it, but first and foremost, depth testing is not working right.

In my deferred shading I naturally create a depth buffer, and depth testing works fine with it. But if I create a combined depth and stencil buffer, depth testing just doesn’t work. It’s peculiar.

Buffer creation, depth component only, depth testing works:

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);

Buffer creation, depth and stencil components, depth testing doesn’t work:

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_map, 0);

This is literally the only change to my code: depth testing works with the first version and fails with the second. The stencil test is disabled and the depth test enabled in both cases. Any suggestions to fix this problem? I have read several threads about similar issues suggesting this was an AMD driver problem, and indeed I have an AMD card. Hopefully I’m just doing something wrong.
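For reference, the relevant state in both cases boils down to this (a minimal sketch; the depth function is whatever I have set elsewhere):

glEnable(GL_DEPTH_TEST);     // depth test enabled in both cases
glDisable(GL_STENCIL_TEST);  // stencil test disabled in both cases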

Thanks.

Try GL_DEPTH24_STENCIL8.

Thank you, but that didn’t work. Exactly the same result. I also tried with GL_DEPTH_COMPONENT instead of GL_DEPTH_STENCIL, but to no avail.

Any other suggestions?

Thanks

Try:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, m_width, m_height, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);

Interesting.

That seems to have worked very well, but now I’m having buffer clearing issues. I’m definitely clearing the buffers for my four framebuffer attachments before I draw, and I’m clearing the default framebuffer as well. There are also some flickering issues, which I assume could be due to the depth or stencil buffers not being cleared. Also, how on earth can changing the format of the depth and stencil texture make something fail to clear?

Thanks for that suggestion though, that definitely got me on the right track. Are there any problems associated with using integers rather than floats for the depth buffer?

Have you tried using TexStorage instead of TexImage? And ClearBuffer* for clearing?
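Something like this (an untested sketch; note glTexStorage2D requires GL 4.2 or ARB_texture_storage, and the clear values are placeholders):

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);  // immutable storage

GLfloat clear_color[4] = {0.0f, 0.0f, 0.0f, 0.0f};
glClearBufferfv(GL_COLOR, 0, clear_color);      // clear draw buffer 0 of the bound FBO
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);  // depth = 1.0, stencil = 0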

I don’t want to use TexStorage, because I’m trying to stick with OpenGL 3.3 (glTexStorage2D needs 4.2 or the ARB_texture_storage extension). Thanks for the suggestion though.

As for ClearBuffer, I have had some very peculiar results. Everywhere I previously called glClear, I replaced it with glClearBuffer* for the appropriate buffers. Now my screen fades rapidly to white, so clearly something isn’t clearing correctly. So I can either fade to white, or have streaks left behind by geometry as well as flickering. It’s actually more like it’s clearing only where the geometry actually is, because the streaks are in the clear color, and everything disappears unless I keep moving the camera. I’m pretty baffled here. Shouldn’t ClearBuffer and Clear work pretty much the same way? I’ll outline my code and clears below.

bind framebuffer for draw
drawBuffer(final output attachment)
clear color

bind framebuffer for draw
drawBuffers(4 color attachments)
clear color, depth, and stencil
render geometry

drawBuffer(GL_NONE)
clear stencil
stencil pass

drawBuffer(final output attachment)
light pass

bind default framebuffer for draw
clear color
bind framebuffer for read
readBuffer(final output attachment)

glBlit from final output to screen
swapbuffers

I haven’t included state machine changes, but those shouldn’t affect clearing, right? I’d greatly appreciate anyone taking a look at this to see if I’ve got any obvious issues. Thanks.
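In code, those clears amount to roughly the following (a condensed sketch; the FBO handle and attachment indices are placeholders for my actual ones):

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glDrawBuffer(GL_COLOR_ATTACHMENT4);              // final output attachment
GLfloat black[4] = {0.0f, 0.0f, 0.0f, 0.0f};
glClearBufferfv(GL_COLOR, 0, black);

GLenum bufs[4] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                  GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
glDrawBuffers(4, bufs);
for (int i = 0; i < 4; ++i)
    glClearBufferfv(GL_COLOR, i, black);         // clear the four G-buffer attachments
glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);   // clear depth and stencil
// ... geometry pass ...

glDrawBuffer(GL_NONE);
GLint zero = 0;
glClearBufferiv(GL_STENCIL, 0, &zero);           // clear stencil before the stencil pass
// ... stencil pass, light pass, blit to default framebuffer, swap ...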

Internally, the depth buffer contains fixed-point values in the range 0…1, so there’s no particular reason to use floats.

However, as well as GL_UNSIGNED_INT_24_8, you can also use GL_FLOAT_32_UNSIGNED_INT_24_8_REV (a 32-bit float containing the depth value followed by a 32-bit unsigned integer containing the stencil value in the 8 least-significant bits).
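For example, something like this should match your original GL_DEPTH32F_STENCIL8 internal format (a sketch):

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);  // packed 32F depth + 8-bit stencil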

What you can’t do is use a format of GL_DEPTH_STENCIL with a type which doesn’t allow for the stencil value (§8.4.4):

An INVALID_ENUM error is generated if format is DEPTH_STENCIL and type is not UNSIGNED_INT_24_8 or FLOAT_32_UNSIGNED_INT_24_8_REV.
GL_UNSIGNED_INT_24_8 has the advantage that it can probably be copied directly to video memory, without the need for conversion.

It should also be valid to use an internalformat of GL_DEPTH_STENCIL (or a sized version thereof) with a format of GL_DEPTH_COMPONENT and any scalar type, in which case the stencil component will be undefined (not a problem if you’re just going to clear it).
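E.g. something like this should be accepted (a sketch; the stencil bits would be left undefined):

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);  // depth data only; stencil undefined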

Solved.

It’s important to set glDepthMask to GL_TRUE BEFORE you attempt to clear the depth buffer.

I hadn’t realized the implication of the order of operations: I was only enabling the depth mask for my geometry pass, after I had already tried to clear the depth buffer.
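In other words, a minimal sketch of the fix:

glDepthMask(GL_TRUE);   // depth writes must be enabled for glClear to affect the depth buffer
glStencilMask(0xFF);    // likewise for stencil writes, if clearing the stencil buffer
glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);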

Hope this helps anyone with a similar issue. Mods can close the thread I guess?

Edit: Didn’t see GClements’ comments before I posted. Thanks for the explanation.

Do you use any mask functions (glColorMask(), glDepthMask(), glStencilMask(), etc.)? Write masks are honoured by glClear(). §17.4.3:

When Clear is called, the only per-fragment operations that are applied (if enabled) are the pixel ownership test, the scissor test, sRGB conversion (see section 17.3.9), and dithering. The masking operations described in section 17.4.2 are also applied.

EDIT: must learn to type faster :wink:

And that’s why literally the second thing the Wiki says about framebuffer clearing is that write masks are obeyed. :wink: