depth FBO + alpha test on ATI

I noticed this:

nVidia: [screenshot]

ATI: [screenshot]

Works fine on nVidia hardware but fails on the ATI hardware tested. Notice that the non-alpha-tested geometry casts proper shadows.

(Also, any guess why even basic things like glDepthRange fail on ATI? The skydome is not being rendered. And is there a way to disable software gamma correction on ATI?)

Which ATI drivers? Is this with their latest 4.0 drivers? Or 3.3?

Hmmm. I presume you’re rendering shadows to an FBO which only has a depth attachment with ColorMask FALSE. Wonder if it’d magically start working if you add a dummy color attachment and set ColorMask TRUE. It’d be a bug, but just giving some ideas to nail down whether it’s a driver bug or not.
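Something like this is what I mean (a rough sketch, assuming EXT_framebuffer_object on a GL 2.1 context; fbo, width and height stand in for your own names):

/* Attach a dummy color renderbuffer to the depth-only FBO. */
GLuint colorRb;
glGenRenderbuffersEXT(1, &colorRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, colorRb);

/* Re-enable color writes so the dummy attachment is actually written. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);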

Also what happens if you enable alpha test for your opaque casters? I’d pop in a dummy frag shader that outputs opaque alpha and check for a difference.
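The dummy shader could be as simple as this (a sketch, written as a GLSL 1.20 source string; the name is hypothetical):

/* Dummy fragment shader: forces opaque alpha so the alpha test always passes. */
static const char *dummyFrag =
    "#version 120\n"
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); /* opaque alpha */\n"
    "}\n";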

(Also, any guess why even basic things like glDepthRange fail on ATI?

No idea… Which depth format are you targeting? If not 24, try 24.

Do you have any errors being reported by the GL?
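Something along these lines would surface both GL errors and FBO completeness problems (a rough sketch, assuming EXT_framebuffer_object and a loader such as GLEW):

#include <stdio.h>
#include <GL/glew.h>  /* assumed loader; any GL 2.1 header setup works */

/* Print any pending GL error and the completeness status of the bound FBO. */
static void checkGL(void)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        fprintf(stderr, "GL error: 0x%04X\n", err);

    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
        fprintf(stderr, "FBO incomplete: 0x%04X\n", status);
}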

I don’t have direct access to the machine; I’m debugging by proxy, so it is a bit difficult. I was hoping this was a known problem, or at least to get some ideas about what to look out for.

Which ATI drivers? Is this with their latest 4.0 drivers? Or 3.3?

3.3, IIRC. I have reports of this happening on at least two different ATI cards.

Hmmm. I presume you’re rendering shadows to an FBO which only has a depth attachment with ColorMask FALSE. Wonder if it’d magically start working if you add a dummy color attachment and set ColorMask TRUE. It’d be a bug, but just giving some ideas to nail down whether it’s a driver bug or not. Also what happens if you enable alpha test for your opaque casters?

I’d pop in a dummy frag shader that outputs opaque alpha and check for a difference.

Sounds plausible, good ideas. I will try these.

Which depth format are you targeting? If not 24, try 24.

16, though it is configurable by the user; I’d have to double-check. But I would assume that additional precision (24/32) should not cause near/far-plane clipping in cases where it doesn’t happen with 16-bit z-buffer precision?

@Pierre
There are no errors reported AFAIK, and none with FBO creation for sure.

Some other points I would like some clarity on:

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

  1. The docs do not explicitly list GL_DEPTH_COMPONENT for the “format” argument: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml . Is this an omission in the documentation?

  2. What should the “type” parameter be for glTexImage2D? This varies wildly from sample to sample: ATI/AMD whitepapers use GL_UNSIGNED_BYTE, while others use GL_UNSIGNED_SHORT, GL_UNSIGNED_INT, or GL_FLOAT. It seems this argument is completely ignored by most implementations?

The docs do not explicitly list GL_DEPTH_COMPONENT for the “format” argument: http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml . Is this an omission in the documentation?

You’re programming for GL 3.3; those documents are for GL 2.1.

It seems this argument is completely ignored by most implementations?

The last 3 parameters are for uploading data to the texture. Since you passed a NULL pointer, that means you don’t want to upload data to the texture, so the last 3 parameters are irrelevant.
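In other words, something like this (a sketch; width, height and pixels are hypothetical, and the type only starts to matter once real pixel data is passed):

/* Allocation only: with a NULL pointer nothing is read from client memory,
   so format/type merely have to be legal values. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);

/* Upload: here the last two parameters must describe "pixels" exactly,
   e.g. an array of GLushort depth values. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, pixels);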

You’re programming for GL 3.3; those documents are for GL 2.1.

Whoops, no, I am coding for 2.1. I confused the “3.3” mentioned with the driver version (I’m totally out of the loop as far as ATI/AMD’s driver version numbering is concerned). But this happens with a “3.2.9704 Compatibility Profile Context” anyway.

The last 3 parameters are for uploading data to the texture. Since you passed a NULL pointer, that means you don’t want to upload data to the texture, so the last 3 parameters are irrelevant.

Alright, I suspected that.

By the way, using 16-bit or 24-bit depth accuracy for the shadow buffer did not seem to make a difference; alpha-tested geometry still fails to render.

I will try using ‘discard’ in the shader to bypass the GL_ALPHA_TEST state altogether.
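Something like this is what I have in mind (a sketch in GLSL 1.20, written as a C source string; diffuseMap and the 0.5 threshold are placeholders for my actual material setup):

/* Fragment shader that replicates the alpha test via discard,
   bypassing the fixed-function GL_ALPHA_TEST state entirely. */
static const char *discardFrag =
    "#version 120\n"
    "uniform sampler2D diffuseMap;\n"
    "void main() {\n"
    "    vec4 c = texture2D(diffuseMap, gl_TexCoord[0].xy);\n"
    "    if (c.a < 0.5)  /* matches e.g. glAlphaFunc(GL_GEQUAL, 0.5) */\n"
    "        discard;\n"
    "    gl_FragColor = c;\n"
    "}\n";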

About the glDepthRange problem: if I stick to regular double-buffered rendering (no FBO), the sky renders OK. The issue only appears when rendering the scene to an FBO (32-bit RGBA + 16-bit depth). I tried 24-bit depth, but FBO/RBO creation would fail silently (renders black), so I couldn’t check whether that would work.
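For what it’s worth, I can at least make the silent 24-bit failure loud with a completeness check after attaching the renderbuffer (a sketch; sceneFbo, width and height are hypothetical names from my setup):

/* Create a 24-bit depth renderbuffer for the scene FBO and verify it. */
GLuint depthRb;
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                         width, height);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    fprintf(stderr, "scene FBO incomplete: 0x%04X\n", status);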