Reading Depth32F Stencil8 combined buffer in shader

Hi all.

I have set up a frame buffer object for deferred rendering like so:

// G-buffer colour attachments
glBindTexture(GL_TEXTURE_2D, gNormal);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, wWidth, wHeight, 0, GL_RGB, GL_FLOAT, NULL);

glBindTexture(GL_TEXTURE_2D, gAlbedo);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wWidth, wHeight, 0, GL_RGBA, GL_FLOAT, NULL);

glBindTexture(GL_TEXTURE_2D, gMaterial);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wWidth, wHeight, 0, GL_RGBA, GL_FLOAT, NULL);

// Combined depth-stencil attachment
glBindTexture(GL_TEXTURE_2D, gDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, wWidth, wHeight, 0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);

// Separate radiance texture (not attached to gBuffer here)
glBindTexture(GL_TEXTURE_2D, cRadiance);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wWidth, wHeight, 0, GL_RGBA, GL_FLOAT, NULL);

// Attach the G-buffer textures
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBuffer);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gNormal, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, gAlbedo, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, gMaterial, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, gDepth, 0);
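(For completeness, the draw-buffer list and completeness check that go with this FBO look roughly like the following; this is a sketch rather than my exact code.)

GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, bufs); // geometry pass writes normals, albedo and material
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle an incomplete framebuffer here
}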

Just recently I switched from GL_DEPTH24_STENCIL8 due to inaccuracy issues. In my fragment shader I calculate the eye space position of a surface using the following code:

// Back-project from texture coords and stored depth to NDC
float x = (texCoords.x * 2.0f) - 1.0f;
float y = (texCoords.y * 2.0f) - 1.0f;
float z = (texture(gDepth, texCoords).s * 2.0f) - 1.0f;
// Unproject and divide by w to get the eye-space position
vec4 buff = invProj * vec4(x, y, z, 1.0f);
sPos = buff.xyz / buff.w;

Prior to changing from GL_DEPTH24_STENCIL8 this worked perfectly. However, ever since the change to GL_DEPTH32F_STENCIL8, the lookup texture(gDepth, texCoords).s returns garbage.

What should I be doing instead of texture(gDepth, texCoords).s?

Did I specify the wrong type or internal format for the attached depth texture?

Any help will be greatly appreciated.

What GPU and driver? Sounds like incorrect behavior to me.

And what GL version? I’d assume we’re talking GL 3.0+.

I am working with an ATI Radeon 5850 with Catalyst 10.12, and yes, GL 3.3+.

I should add that using just GL_DEPTH_COMPONENT32F does not cause any problems. It seems that the 32F_STENCIL8 format, and that format alone, causes trouble.

Since it is an odd format (64 bits in total), is there anything special I have to do to work with it?
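(For comparison, the depth-only setup that works looks roughly like this; the exact lines are from memory.)

glBindTexture(GL_TEXTURE_2D, gDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wWidth, wHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, gDepth, 0);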

You mean 40, right?

Anyway, there is nothing special to do, unless there is a bug…

It turns out I can reproduce the bug using GL_DEPTH24_STENCIL8 as well; the difference was that I was clearing that buffer between passes in the one case and not when using 32F_STENCIL8. Bad scientific method on my part. So I have pinpointed the bug to combined depth-stencil textures being corrupted after a read, which is apparently a common problem on AMD cards, as I found people complaining about it on the AMD forums, on gamedev, and here as well.
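(The clear between passes I am referring to is just the usual one on the G-buffer, roughly:)

glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBuffer);
glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);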

Bummer :frowning:

I have access to a Quadro FX of some sort, so tomorrow I will test how things work on NVIDIA to know for certain, and I might buy a new card.

UPDATE:

Reconstructing the position by reading the depth from a combined depth-stencil format as a texture works as expected on NVIDIA; however, the corruption described above occurs with AMD.

I can confirm that this error persists with today’s release of display drivers.

Aside from that, congratulations to AMD and their engineers on their new driver release.

Avcol,

Would you mind sending us a test case? We have not been able to reproduce the issue above.

Thanks,