4x multisampling AA works on ATI but not nVidia

I have written a small Windows app that is currently in beta testing. It is being tested by a few people in various locations with different hardware.

Together we have about 20 machines that the software is being tested on.

The main development machine has an ATI Radeon HD 4350, and the app works great there. It has also been tested on other machines and works on them as well. Most have ATI cards, though it also works on systems with integrated Intel GPUs.

But the beta testers using nVidia cards all get a black box instead of any drawing.

Let me explain how the application works, since it is somewhat non-standard.

  1. I use wglChoosePixelFormatARB with WGL_SAMPLES_ARB set to 4 to try to find a 4x multisampled pixel format. If that fails, I try for 2x; otherwise I fall back to a normal pixel format (see the first sketch after this list).

  2. I enable the multisampling using glEnable(GL_MULTISAMPLE_ARB) during initialization.

  3. Once per second, I use simple GL commands to draw to the back buffer and call glFinish, then copy the back buffer to a DIB using glReadPixels; I then do some extra processing on the image using GDI.

  4. The result is finally drawn to the screen using UpdateLayeredWindow (see the second sketch below).
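To illustrate steps 1 and 2, here is a minimal sketch of the kind of pixel-format selection involved (not my exact code: the helper name and attribute list are illustrative, and a dummy context must already be current so the extension pointer can be fetched):

    #include <windows.h>
    #include <GL/gl.h>
    #include "wglext.h"   /* for wglChoosePixelFormatARB and the WGL_* tokens */

    #ifndef GL_MULTISAMPLE_ARB
    #define GL_MULTISAMPLE_ARB 0x809D
    #endif

    /* Returns a multisampled pixel format with the requested sample count,
       or 0 if none is available. */
    static int ChooseMultisampleFormat(HDC hdc, int samples)
    {
        PFNWGLCHOOSEPIXELFORMATARBPROC pwglChoosePixelFormatARB =
            (PFNWGLCHOOSEPIXELFORMATARBPROC)
                wglGetProcAddress("wglChoosePixelFormatARB");
        if (!pwglChoosePixelFormatARB)
            return 0;

        const int attribs[] = {
            WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
            WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
            WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB,     24,
            WGL_SAMPLE_BUFFERS_ARB, GL_TRUE,
            WGL_SAMPLES_ARB,        samples,   /* 4 first, then 2 as a fallback */
            0
        };
        int  format = 0;
        UINT count  = 0;
        if (pwglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count)
            && count > 0)
            return format;
        return 0;
    }

I call this with samples = 4 first, then 2, and finally fall back to the classic ChoosePixelFormat path; once the real context is current, glEnable(GL_MULTISAMPLE_ARB) switches the AA on.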
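And a rough sketch of the once-per-second readback from steps 3 and 4 (again illustrative: hwnd, width, and height stand in for the app's own state; error handling is omitted, and GL_BGRA_EXT assumes the EXT_bgra tokens from the Windows gl.h):

    /* create a 32-bit DIB section that both GDI and GL can touch */
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;  /* bottom-up rows match GL's origin */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void   *bits   = NULL;
    HDC     hdcMem = CreateCompatibleDC(NULL);
    HBITMAP dib    = CreateDIBSection(hdcMem, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    SelectObject(hdcMem, dib);

    /* draw with plain GL 1.1 calls, then pull the back buffer into the DIB */
    glFinish();
    glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, bits);

    /* ... extra GDI processing on hdcMem goes here ... */

    POINT ptSrc = {0, 0};
    SIZE  size  = {width, height};
    BLENDFUNCTION blend = {AC_SRC_OVER, 0, 255, AC_SRC_ALPHA};
    UpdateLayeredWindow(hwnd, NULL, NULL, &size, hdcMem, &ptSrc, 0, &blend, ULW_ALPHA);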

On most machines the result is exactly as I intended, but my nVidia testers get only a black box in the area that was drawn by GL. They can still see what I added to the DIB using GDI.

Basically, I am wondering if there is a difference in setting up the multisampling pixel format and 4x AA for nVidia display adapters compared to ATI adapters, and if there is, do I have to test for some other extensions instead?

If the testers go into the nVidia control panel and disable AA, the drawing appears. When they enable it, either as application-controlled or forced, the drawing is black.

By the way, the actual OpenGL calls in the app are all OpenGL 1.1, and I only require one frame per second, so I’m not really concerned about speed at this point; I’m mostly worried about what differences there are between ATI and nVidia when it comes to 4x multisampling.

Thanks.

Is the window visible? That could be your problem.

For those testing on nVidia: yes, after drawing the DIB to the screen using UpdateLayeredWindow, they can see the part of the DIB that I edited using GDI, but they can’t see any of the OpenGL drawing unless they use the nVidia control panel to completely disable the AA.

Yet the same code works on several ATI machines with no problems.

I really believe it is a setup issue: the extensions I am using work for ATI, but perhaps nVidia requires something special that I am not aware of, possibly different extensions from those that currently work on ATI.

I am wondering if there is a difference in setting up the multisampling pixel format and 4x AA for nVidia display adapters compared to ATI adapters

No, there isn’t.

Hmm, sounds a bit familiar. Do you make multiple glViewport calls with different arguments in your application? Remember that once a multisampled framebuffer is created, glViewport does not actually resize or reallocate the framebuffer, only the extent to which GL renders. If the viewport you render to is bigger than the framebuffer resolution requested when the application started, nothing will be rendered there (i.e. a pixel ownership failure). Likewise, glReadPixels will be reading outside the allocated framebuffer, and you thus get garbage (or, in this case, black pixels).
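To illustrate with made-up numbers: if the framebuffer was allocated at 640x480 but the code later does

    glViewport(0, 0, 1024, 768);   /* larger than the 640x480 framebuffer */
    /* ... render ... */
    glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

then everything outside the original 640x480 region fails the pixel ownership test, and the readback returns undefined data there (often black).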

There’s another thread that touches on this, you might want to read it: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=292700#Post292700

Thanks for your input.

I do set the viewport on each render pass, and this value might change. But even when the values are held constant, I see this error.

And why would it work on ATI, but not nVidia?

I’m not using an explicit framebuffer object, just a simple double-buffered drawing context.

Dukey asked whether the window is visible. This might be the issue. The GL window is a child window of the parent app, but it sits outside the area of the parent window. I could imagine that being a problem (the pixel ownership test), except that this works fine on ATI and Intel display adapters.

I moved it outside the area of the parent window because it was causing flickering on the desktop when the app was running on ATI (even though I wasn’t explicitly drawing anything to the front buffer or doing any swapping), and it was leaving artifacts on my desktop.

Do this hidden rendering on an FBO; it should be much more robust.

I was doing it in an FBO previously, but there was no antialiasing. (I have to learn that some day)

Thanks for everyone’s help. I put in an nVidia card and have now resolved the issue on nVidia by moving the child window within the ‘LayeredWindow’ area so the results are visible, but I’m back to a previous issue: it flickers when drawing each frame.

But since the original issue is mostly resolved, I’ll consider this resolved and open a different topic if required.

Thanks again.

I was doing it in an FBO previously, but there was no antialiasing. (I have to learn that some day)

Totally go with V-man’s advice: doing it via framebuffer objects is better defined (and even cross-platform). The jazz to look up is, for GL, glTexImage2DMultisample, and for GLSL, texelFetch and sampler2DMS.
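For reference, a minimal sketch of what that looks like, assuming a GL 3.2+ context (w and h are placeholders):

    /* create a 4x multisampled texture and attach it to an FBO */
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, tex);
    glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, w, h, GL_TRUE);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D_MULTISAMPLE, tex, 0);

    /* in GLSL the texture is declared as a sampler2DMS and read per sample
       with texelFetch(tex, ivec2(gl_FragCoord.xy), sampleIndex) */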

You don’t need multisample textures just to get MSAA. You can use multisample renderbuffers, which are supported on more hardware. Just do a blit operation to the backbuffer and then swap buffers when you need to show it.
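A minimal sketch of that approach, assuming GL 3.0 / ARB_framebuffer_object (w and h are placeholders; error checking omitted):

    /* build an FBO backed by 4x multisampled renderbuffers */
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);

    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, w, h);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, w, h);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, colorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    /* ... render the scene into fbo ... */

    /* resolve the samples into the default framebuffer (the back buffer) */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

After the blit, SwapBuffers (or, in this app’s case, glReadPixels on the resolved back buffer) gives you the antialiased result.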
