super-sampling

As I understand it, this method of anti-aliasing involves rendering the scene at a higher resolution and then scaling it back down to the screen resolution…

All very nice in theory, but how do I do it in OpenGL???

Does anybody have any example code?

thanks

Considering it is slow even in hardware, I wouldn’t bother doing it in software.


You could try rendering the scene at double resolution, reading back the image, downsizing it, and then redisplaying it. But it would be slooooooow.

Hardware supersampling is controlled by drivers, not OpenGL.

j

You can control hardware supersampling anti-aliasing through OpenGL if you use the accumulation buffer.

For 2x2 supersampling:
Simply draw the image once with an image plane (perspective matrix, etc.) set up so that the pixels it renders correspond to the upper left of each pixel that you eventually want.
Then draw the image again, with the image plane shifted right 1/2 pixel (upper right of the pixel). Draw again shifted down 1/2 pixel (lower right) and then again shifted left 1/2 pixel (lower left).

Using the accumulation buffer to weight and sum these images will give you a 2x2 supersampled image. It is slow, but it should still be faster than doing the whole thing in software (it depends on the accumulation buffer implementation).
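
Something like this minimal sketch, assuming a window created with an accumulation buffer (e.g. GLUT_ACCUM), a placeholder drawScene() callback, and whatever frustum bounds you already use for your projection:

    #include <GL/gl.h>

    /* 2x2 supersampling via the accumulation buffer.  drawScene() and the
       frustum parameters are placeholders for your own code; the offsets put
       one sample in each quadrant of every pixel. */
    void drawSupersampled(GLdouble l, GLdouble r, GLdouble b, GLdouble t,
                          GLdouble zNear, GLdouble zFar,
                          int winW, int winH, void (*drawScene)(void))
    {
        static const GLdouble offs[4][2] = {  /* in fractions of a pixel */
            { -0.25,  0.25 },  /* upper left  */
            {  0.25,  0.25 },  /* upper right */
            {  0.25, -0.25 },  /* lower right */
            { -0.25, -0.25 }   /* lower left  */
        };
        GLdouble dx = (r - l) / winW;   /* size of one pixel on the near plane */
        GLdouble dy = (t - b) / winH;
        int i;

        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 0; i < 4; i++) {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* Shift the image plane by a fraction of a pixel. */
            glFrustum(l + offs[i][0] * dx, r + offs[i][0] * dx,
                      b + offs[i][1] * dy, t + offs[i][1] * dy, zNear, zFar);
            glMatrixMode(GL_MODELVIEW);

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();
            glAccum(GL_ACCUM, 0.25f);   /* weight each pass by 1/4 */
        }
        glAccum(GL_RETURN, 1.0f);       /* write the average back to the colour buffer */
    }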

Ah, but that’s the problem - accumulation buffers aren’t supported in software on any card the average person is likely to have.

Even a GeForce3 doesn’t have an accumulation buffer - just the ARB_multisample extension, which could do what you said, but doesn’t have the full accumulation buffer functionality.

Correct me if I’m wrong about this.

j

> accumulation buffers aren’t supported in software on any card the average person is likely to have.

You’re wrong on that one. Most OpenGL drivers support accumulation buffers in software, but no consumer card has them hardware accelerated, which makes them pretty useless for any real-time rendering.

Sorry, what am I saying? In software!

Now I feel really stupid…

What I meant to say was in hardware, but I guess I was thinking of something else when I typed that.

So yeah, no hardware accumulation buffers on most consumer video cards.

j

If performance is not one of the poster’s concerns, the accumulation buffer may well be a viable solution. It could still be useful for rendering still frames.
I would not advise shifting by exactly 1/2 a pixel though; non-uniform (jittered) sampling should give better results. The MS SDK help files used to contain example code on jittered accumulation buffer antialiasing… (do they still?)
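
For example, something like the table below could replace the regular 2x2 offsets in the accumulation sketch above. The values here are just illustrative, not the ones from the SDK or the Red Book:

    /* Illustrative non-uniform (jittered) sub-pixel offsets, in fractions of
       a pixel, for four accumulation passes.  Hypothetical values -- a
       well-spread, non-grid pattern should generally beat a regular grid. */
    static const GLdouble jitter4[4][2] = {
        { -0.125,  0.375 },
        {  0.375,  0.125 },
        {  0.125, -0.375 },
        { -0.375, -0.125 }
    };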

But even if performance is not a concern, the glAccum calls can take so long to return that it becomes a problem anyway. You could mitigate this by using glScissor to accumulate regions of the viewport rather than the whole thing at once.
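
A rough sketch of that idea; drawOnePass() is a hypothetical callback that renders the scene with the jitter offset for a given pass:

    #include <GL/gl.h>

    /* Accumulate the viewport in tiles so each glAccum call only touches a
       small region.  The scene is still submitted once per pass per tile,
       but each individual GL call returns quickly. */
    void accumInTiles(int winW, int winH, int tile,
                      int numPasses, void (*drawOnePass)(int pass))
    {
        int x, y, p;

        glEnable(GL_SCISSOR_TEST);
        for (y = 0; y < winH; y += tile) {
            for (x = 0; x < winW; x += tile) {
                glScissor(x, y, tile, tile);  /* accumulation is limited to the scissor box */
                glClear(GL_ACCUM_BUFFER_BIT);
                for (p = 0; p < numPasses; p++) {
                    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
                    drawOnePass(p);           /* render with pass p's jitter offset */
                    glAccum(GL_ACCUM, 1.0f / numPasses);
                }
                glAccum(GL_RETURN, 1.0f);
            }
        }
        glDisable(GL_SCISSOR_TEST);
    }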


> Using the accumulation buffer to weight and sum these images will give you a 2x2 supersampled image. It is slow, but it should still be faster than doing the whole thing in software (it depends on the accumulation buffer implementation).

You can use destination alpha and render the same image to the frame buffer four times. How successful this approach is will depend on how heavily you already use the texture units, in addition to how well (if at all) the card supports destination alpha.
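
A related sketch of the multi-pass idea, but doing the averaging with constant-colour blending (EXT_blend_color / the OpenGL 1.2 imaging subset, may need glext.h) rather than destination alpha proper; drawJitteredPass(i) is a placeholder that renders the scene with pass i's sub-pixel offset:

    #include <GL/gl.h>

    /* Average four jittered passes directly in the colour buffer with
       blending instead of the accumulation buffer.  Running average:
       pass i is weighted 1/(i+1) against what is already in the buffer. */
    void blendAveragedPasses(void (*drawJitteredPass)(int pass))
    {
        int i;

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_BLEND);
        for (i = 0; i < 4; i++) {
            glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / (i + 1));
            glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
            if (i > 0)
                glClear(GL_DEPTH_BUFFER_BIT);  /* keep colour, redo depth */
            drawJitteredPass(i);
        }
        glDisable(GL_BLEND);
    }

Note that this eats the blend unit, so it won’t mix well with a scene that already relies on blending for transparency.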

Turning on anti-aliasing in the driver control panel (or using an extension) is probably going to give you a faster implementation, on cards that support it.
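
If the card and driver expose ARB_multisample, asking for a multisampled framebuffer and enabling it is usually all that is needed. A sketch using GLUT to request the visual (GL_MULTISAMPLE_ARB normally comes from glext.h; the fallback define is just in case your headers are old):

    #include <GL/glut.h>

    #ifndef GL_MULTISAMPLE_ARB
    #define GL_MULTISAMPLE_ARB 0x809D
    #endif

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* Request a double-buffered, depth-buffered, multisampled visual. */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
        glutCreateWindow("multisample test");
        glEnable(GL_MULTISAMPLE_ARB);   /* usually on by default when available */
        /* ... register callbacks and enter glutMainLoop() as usual ... */
        return 0;
    }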