NVIDIA: Read depth component with multisample ON

Hi, with an NVidia card I’m not able to read the depth value of a fragment with multisampling activated. With an ATI card it seems to work.

I use the glReadPixels API with GL_DEPTH_COMPONENT and GL_FLOAT parameters.

Without multisampling it works, of course…
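
For reference, the read is roughly like this (the pixel position here is just a placeholder):

GLfloat depth;
/* x, y = fragment position in window coordinates (placeholder values) */
int x = 100, y = 100;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
/* with multisampling off, depth comes back in [0,1] as expected */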

Thank you

Are you rendering to an FBO or to the main framebuffer?

With FBOs you have to copy a multi-sample surface to a non-multi-sample surface before you can read from it.
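
For example, with EXT_framebuffer_blit / EXT_framebuffer_multisample you resolve into a single-sample FBO first and read from that; a rough sketch (msFBO, resolveFBO, width and height are assumed to be set up already):

/* msFBO = multisample FBO, resolveFBO = single-sample FBO of the same size,
   both with depth attachments (names are placeholders) */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFBO);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFBO);
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height,
                     GL_DEPTH_BUFFER_BIT, GL_NEAREST); /* depth blits must use GL_NEAREST */

/* now read the resolved depth */
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, resolveFBO);
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);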

If you are not using FBOs, what card/driver do you have? I don’t recall any problems doing this. (my tests were a long time ago however)

I’m drawing into the main framebuffer and I have a 7900GT card with one of the latest drivers. I don’t know the exact version right now (I’m at work).

I’ve discovered that the problem is present only if I use the extension “GL_NV_multisample_filter_hint”, such as:

glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);

or

glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_FASTEST);

Of course the extension “GL_NV_multisample_filter_hint” is supported by my cards (NVidia 7900GT and NVidia 9200).

If I don’t use that extension, everything is OK!

Thank you

You have an NVidia 9200? Cool! :slight_smile:

Anyway… maybe this helps:

While not quite consistent with the way ARB_multisample is specified, NVIDIA uses the SwapBuffers operation as a trigger for downsampling multisample sample buffers (other operations such as glReadPixels also trigger downsampling).

N.

…of course it is only a GeForce 6200 :wink:

Can you explain a bit more, please? Is there no way to read depth values with “GL_NV_multisample_filter_hint” on? Or is there a trick I can use?

Thank you

Have a look at the issues section of the GL_NV_multisample_filter_hint spec.

I believe the 6200 doesn’t support floating-point blending or texture filtering, so maybe your problem is related to that, as resolving multisample buffers to single-sample buffers is basically a weighted interpolation scheme.

N.

The OP is using the backbuffer, and the backbuffer’s depth buffer is never in a float format; it is either a 16-bit or 24-bit integer format.
When you call glReadPixels(…, GL_FLOAT, …),
the driver downloads the integer value, converts it to float on the CPU, and then hands it to you.
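
To illustrate (just a sketch, assuming a typical 24-bit integer depth buffer): reading the same pixel as GL_UNSIGNED_INT and as GL_FLOAT gives the same depth, only scaled differently.

GLuint  di;
GLfloat df;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &di);
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &df);
/* the [0,1] depth value is scaled to the full unsigned range, so
   df is roughly di / 4294967295.0 */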

This is not texture-filtering related!