NVidia FXAA and multisampling

In my application I have an OpenGL viewport with a multisampled pixel format, and I need to do a hidden false-color draw of my scene without multisampling.

Before doing the hidden draw I turn off OpenGL multisample:

glDisable(GL_MULTISAMPLE_ARB);

Then I do the draw and read the back buffer with glReadPixels().
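Roughly, the hidden pass looks like this (a sketch, assuming a current OpenGL context; drawSceneFalseColors is just a placeholder for my own per-entity draw, and width/height are the viewport size):

glDisable(GL_MULTISAMPLE_ARB);                       // turn off multisample rasterization
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

drawSceneFalseColors();                              // placeholder: each entity drawn in a unique color

glReadBuffer(GL_BACK);                               // read straight from the back buffer
GLubyte* pixels = new GLubyte[width * height * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_MULTISAMPLE_ARB);                        // restore for the normal visible draw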

Now, if the"Antialiasing FXAA" is turned OFF in the Nvidia control panel, the image that I get IS multisampled.
If it’s turner ON the image that I get is properly not multisampled.

What’s going on?

I have a Quadro 600 and updated to the latest driver.

Brings back something I recall from years ago.

Your description reminded me of this. Don’t know if it still works like this or not.

[QUOTE=Dark Photon;1264977]Brings back something I recall from years ago.

Your description reminded me of this. Don’t know if it still works like this or not.[/QUOTE]

Yes, that looks like my problem.
Is there some way around it?

[QUOTE=Devdept2;1265044]Yes, that looks like my problem.
Is there some way around it?[/QUOTE]
Not sure. I didn’t ever run this to ground because it wasn’t really important to me.

I think the crucial question is: what is the driver doing? For the pixels that you are rendering with multisample rasterization disabled, it would be interesting to see if all the subsample values are identical (as you’d expect) – that is, it’s not using coverage to determine which subsamples to set.

If so, then the issue may be in the downsample, which is what I suspected. There could be taps into samples from adjacent pixels, for instance (a la the old quincunx pattern). You could roll your own downsample that just chooses one of the subsample values as the texel/pixel value (via texelFetch). But since it sounds like you have a mix of 1x and multisample data on your screen, your downsample might have to be a bit smarter than this. Or you might be able to solve it by just pulling samples (taps) from within each pixel and not from adjacent pixels…
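Something like this fragment shader for a fullscreen resolve pass (just a sketch; it assumes you've rendered the false-color pass into a multisampled texture you can bind as a sampler2DMS, rather than into the default framebuffer):

// GLSL source for the custom resolve, kept in a C++ raw string literal
const char* resolveFrag = R"(
    #version 150
    uniform sampler2DMS sceneTex;   // multisampled texture holding the false-color pass
    out vec4 fragColor;
    void main()
    {
        // Take subsample 0 only: no averaging and no taps into neighboring
        // pixels, so the false-color values come through exactly.
        fragColor = texelFetch(sceneTex, ivec2(gl_FragCoord.xy), 0);
    }
)";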

[QUOTE=Dark Photon;1265049]Not sure. I didn’t ever run this to ground because it wasn’t really important to me.

I think the crucial question is: what is the driver doing? For the pixels that you are rendering with multisample rasterization disabled, it would be interesting to see if all the subsample values are identical (as you’d expect) – that is, it’s not using coverage to determine which subsamples to set.

If so, then the issue may be in the downsample, which is what I suspected. There could be taps into samples from adjacent pixels, for instance (a la the old quincunx pattern). You could roll your own downsample that just chooses one of the subsample values as the texel/pixel value (via texelFetch). But since it sounds like you have a mix of 1x and multisample data on your screen, your downsample might have to be a bit smarter than this. Or you might be able to solve it by just pulling samples (taps) from within each pixel and not from adjacent pixels…[/QUOTE]

I’m trying to do a simple selection with a hidden false-color draw of my entities, but of course if they are multisampled I get wrong values at the borders of the entities.
What you propose seems too complicated for my needs… perhaps I should render directly to a texture using a non-multisampled FBO instead of drawing to the back buffer?

perhaps I should render directly to a texture using a non-multisampled FBO instead of drawing to the back buffer?

That’d probably be best. It would also allow you to render to an R32UI format texture, so that you’re not rendering “colors” but actual identifiers.
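Off the top of my head, the setup would look something like this (an untested sketch; width/height and the mouse coordinates are whatever your viewport and picking code already have):

// Non-multisampled FBO with a 32-bit unsigned-integer color attachment
GLuint idTex = 0, depthRb = 0, fbo = 0;

glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, idTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// ... render the entities here, with a fragment shader that writes each
//     entity's ID to a "uint" color output ...

// Read back the ID under the cursor (GL's origin is the lower-left corner)
GLuint id = 0;
glReadPixels(mouseX, height - 1 - mouseY, 1, 1,
             GL_RED_INTEGER, GL_UNSIGNED_INT, &id);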

Thanks a lot