Slow glCopyPixels with active antialiasing

Hi,
I see tremendous performance hits when I use glCopyPixels with FSAA active (either forced in the driver or enabled via the multisample extension). I tried disabling multisampling before blitting, but performance still drops from 24 Hz to 0.85 Hz (software rendering?) on a GeForce FX 5900. I also tried blitting after the swap, but it showed the same behaviour.
I want to draw into the back buffer, then blit within the back buffer for edge blending, and then swap. I guess the driver averages all the samples before I blit, but I do not want it to do that; it should just blit, nothing else. I use NVIDIA driver 52.26 and disable ALL fragment operations before blitting.
Without antialiasing my blitting operations cost practically nothing.
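In short, the intended per-frame order is the following (a simplified sketch; drawScene, the copy rectangle, and hdc are placeholders, not my actual code):

// render the scene into the back buffer
drawScene();

// blit within the back buffer (assumes a pixel-aligned ortho projection)
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);
glRasterPos2i(destX, destY);
glCopyPixels(srcX, srcY, width, height, GL_COLOR);

// present
SwapBuffers(hdc);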
Any ideas?
THX, Valentin

I had the same problem with glReadPixels and FSAA for a while, but the problem went away with newer drivers (or was it when I upgraded to a GF5900U? I'm not sure).

Here’s the post I made http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/008837.html

In fact, my test app from the other thread now crashes when FSAA is enabled. It never used to, and I haven't changed it. After some debugging I've found that creating a fullscreen window with GLUT without calling glutInitWindowSize causes glReadPixels to crash, but only when FSAA is on.
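For reference, a minimal sketch of the workaround (the 1024x768 size and the GLUT_MULTISAMPLE flag are my assumptions, not necessarily what the test app uses):

#include <GL/glut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_MULTISAMPLE);
    // without this call, glReadPixels crashed in fullscreen with FSAA on
    glutInitWindowSize(1024, 768);
    glutCreateWindow("FSAA readback test");
    glutFullScreen();
    // register display/idle callbacks here, then enter the main loop
    glutMainLoop();
    return 0;
}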

>>I want to draw into the back buffer, then blit within the back buffer for edge blending, and then swap.<<

Can you explain that a little more?
What are you blitting from and to, and how?
What's the graphics state while copying (depth test off?)?

Originally posted by Relic:
>>I want to draw into the back buffer, then blit within the back buffer for edge blending, and then swap.<<

Can you explain that a little more?
What are you blitting from and to, and how?
What's the graphics state while copying (depth test off?)?

We want to do fast edge blending. We use a GeForce FX 5900 with two projectors plugged in, showing one desktop. For edge blending we need an overlapping region in the middle of our framebuffer: two rectangular regions with alpha ramps, blending in and out. By overlapping the projectors we get edge blending with only one rendering pass. It works nicely, but as soon as we activate FSAA (which is what we want), performance drops.

Here is what we do:
We render our scene to the back buffer. Then we adjust the viewport and disable all fragment operations (including Cg shaders, textures, depth and stencil tests, etc.):

// set the cheapest opengl state
glDepthMask(GL_FALSE);
glDisable(GL_DEPTH_TEST);
glDisable(GL_DITHER);
glDisable(GL_STENCIL_TEST);
glDisable(GL_ALPHA_TEST);
glDisable(GL_LIGHTING);

glDisable(GL_MULTISAMPLE_ARB);
glDisable(GL_SAMPLE_COVERAGE_ARB);

glDisable(GL_FOG);

// disable all texture units
GLint myTextureCount = 0;
glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &myTextureCount);
for (GLint myTextureCounter = 0; myTextureCounter < myTextureCount; ++myTextureCounter) {
glActiveTextureARB(getTextureUnitId(myTextureCounter));
glClientActiveTextureARB(getTextureUnitId(myTextureCounter));
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}

  • Then we do a glCopyPixels from the back buffer to the back buffer (we also tried a glReadPixels/glDrawPixels combination through system memory, video memory, and AGP memory; it helped a little and seemed not to fall back to software rendering, but it was still too slow to use). A rough sketch of this step follows after the list.
  • Render the blending ramps.
  • Swap buffers.
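For illustration, a minimal sketch of the copy and the ramp pass, assuming a myWidth x myHeight back buffer with a centred overlap strip of myOverlapWidth pixels (all three names are hypothetical, and the exact geometry depends on the projector setup):

// pixel-aligned projection so raster positions address pixels directly
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, myWidth, 0.0, myHeight, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// copy the strip left of the centre onto the start of the right half
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);
glRasterPos2i(myWidth / 2, 0);
glCopyPixels(myWidth / 2 - myOverlapWidth, 0, myOverlapWidth, myHeight, GL_COLOR);

// blending ramp: dst = dst * src.alpha fades the overlap towards its edge
glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
glShadeModel(GL_SMOOTH);
glBegin(GL_QUADS);
glColor4f(0.0f, 0.0f, 0.0f, 0.0f); glVertex2i(myWidth / 2, 0);
glColor4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2i(myWidth / 2 + myOverlapWidth, 0);
glColor4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2i(myWidth / 2 + myOverlapWidth, myHeight);
glColor4f(0.0f, 0.0f, 0.0f, 0.0f); glVertex2i(myWidth / 2, myHeight);
glEnd();
glDisable(GL_BLEND);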

THX for the help

Add
glDisable(GL_BLEND);
just to make sure.

Originally posted by zeckensack:
Add
glDisable(GL_BLEND);
just to make sure.

I did disable blending; I just forgot to include it in the snippet. But thanks!