FBO and 24-bit depth buffer support on ATI?

I just installed the 5.10 drivers on my X700 Mobility card and there is still no 24-bit depth support. Will this ever see the light of day on ATI for FBOs?

On my 6800GT, when I set up FBOs, I can use this:

    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, texWidth, texHeight);

whereas on ATI I have to use GL_DEPTH_COMPONENT16 (or the unsized GL_DEPTH_COMPONENT), and my depth map does look better with the 24-bit format on NVIDIA…

There is only a 16-bit depth buffer in FBOs on ATI because, as far as I know, it's emulated.

If a 24-bit depth buffer were offered, the emulation would be too expensive.

For a regular depth buffer you can have 24 bits. For a depth texture (for instance for shadow mapping) you can only have 16-bit textures. This is a hardware limitation. The X1600 supports 24-bit depth textures, though.

Originally posted by Humus:
For a regular depth buffer you can have 24 bits. For a depth texture (for instance for shadow mapping) you can only have 16-bit textures. This is a hardware limitation. The X1600 supports 24-bit depth textures, though.
OK, I am using this depth map for my water reflections, not shadowing, but same difference… So what about the X1800 XT? Does that support 24-bit? Thanks.

No. The X1600 is a newer revision of the chip and has a couple of features that the X1800 doesn't have: 24-bit depth texture fetch and Fetch4.

The best solution is to try to create a 24-bit depth buffer and, if that fails, fall back to the 16-bit depth component, hoping that the FBO won't be emulated by the ATI OpenGL driver (it shouldn't be, since there is an 'unsupported' status flag).

If Humus can confirm that FBOs would never create emulated rendering contexts on ATI, that would be a good thing.

We only support formats that are actually supported by the hardware.

What error should I look for if glRenderbufferStorageEXT fails? I know the function returns void, but how else would one know whether it succeeded, so I can fall back from 24-bit to 16-bit? I looked through the extension spec and can't find anything. Thanks.

I assume this is for shadow maps? In that case, why do you need 24-bit depth?

The only case where I can see a real necessity is point lights covering a huge area.

Originally posted by Mars_9999:
What error should I look for if glRenderbufferStorageEXT fails? I know the function returns void, but how else would one know whether it succeeded, so I can fall back from 24-bit to 16-bit? I looked through the extension spec and can't find anything. Thanks.
glCheckFramebufferStatusEXT() is what you’re looking for.
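
For what it's worth, a rough sketch of the 24-to-16-bit fallback using that call might look like the following. It assumes the FBO is bound and its color attachment is already set up; texWidth/texHeight are as in the first post, and depthRb is just an illustrative name for the depth renderbuffer.

    /* Try a 24-bit depth renderbuffer first. */
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                             texWidth, texHeight);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthRb);

    GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    if (status == GL_FRAMEBUFFER_UNSUPPORTED_EXT)
    {
        /* This format combination isn't supported; retry with 16 bits. */
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT16,
                                 texWidth, texHeight);
        status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
    }

    if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
    {
        /* Still incomplete: give up or try another format combination. */
    }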

Originally posted by zed:
[b]I assume this is for shadow maps? In that case, why do you need 24-bit depth?

The only case where I can see a real necessity is point lights covering a huge area.[/b]
I am using it for rendering water, and there is a noticeable difference between 16 and 24 bits on my 6800GT. Thanks Humus, I will look into it.

Depending on specifically what you're doing, you may be able to just render depth to an fp32 texture and read from that. It may be slightly more expensive than just reading from the depth buffer, but probably not too much (we haven't had any performance problems doing shadow mapping with floating-point textures, for example).

Of course your hardware would have to support fp32 textures, but I think everything from Radeon 9500 up does (please correct me if I’m wrong).
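
In case a sketch helps, here is roughly what that approach could look like, assuming GL_ARB_texture_float (or the ATI equivalent) is exposed and that a 32-bit float RGBA texture is renderable as an FBO color attachment on your card; the names are illustrative:

    /* fp32 texture that will receive depth, attached as the FBO's color
       buffer (assumes GL_ARB_texture_float for GL_RGBA32F_ARB). */
    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, texWidth, texHeight, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, depthTex, 0);

    /* Depth-pass fragment shader (GLSL), written here as a C string:
       it just writes the fragment depth into the float color target. */
    static const char *depthFrag =
        "void main()\n"
        "{\n"
        "    gl_FragColor = vec4(gl_FragCoord.z);\n"
        "}\n";

You would still use a regular depth renderbuffer for depth testing during this pass, and then sample depthTex in the water or shadow shader instead of a depth texture.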

Hmmm, I tried 16 vs. 24 on the 6800GT and to my surprise the depth map was no different. So I am assuming that NVIDIA isn't doing 24-bit either, but the driver falls back to 16 if you select 24??
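
If you want to know for sure rather than guess, EXT_framebuffer_object lets you query how many depth bits the driver actually allocated. A quick sketch, with depthRb being the depth renderbuffer created earlier:

    /* Ask the driver what it really gave us for GL_DEPTH_COMPONENT24. */
    GLint depthBits = 0;
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
    glGetRenderbufferParameterivEXT(GL_RENDERBUFFER_EXT,
                                    GL_RENDERBUFFER_DEPTH_SIZE_EXT,
                                    &depthBits);
    /* depthBits reports 24 or 16 depending on what was actually allocated. */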