Hi, I’ve just done a bit of experimenting with antialiasing levels and resolutions for the app I’m working on, and was wondering if someone can explain the behaviour I’m seeing.
Basically, my framerate is independent of resolution (which is to be expected, there’s little overdraw), but it slows down as I raise the anti-aliasing level. The app is running in a window, so this may have something to do with it.
Now, I would have expected anti-aliasing to incur a fillrate-based hit, so I’m a little confused about why this happens.
In terms of raw sample count, 2x antialiasing at 800x600 is equivalent to rendering at roughly 1131x849 (each axis scaled by √2), or 1280x960 to get a number you can actually use.
The equivalent of 4x is 1600x1200. Obviously, 16x would be 3200x2400.
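For what it’s worth, the figures above fall out of simple sample-count math: NxAA shades or stores N samples per pixel, so the “equivalent” resolution scales each axis by √N. A quick sketch (the 800x600 base is just the example resolution from this thread):

```python
import math

def equivalent_resolution(width, height, aa_samples):
    """Resolution with the same total sample count as NxAA at width x height."""
    scale = math.sqrt(aa_samples)  # each axis grows by sqrt(N)
    return round(width * scale), round(height * scale)

for aa in (2, 4, 16):
    w, h = equivalent_resolution(800, 600, aa)
    print(f"{aa}x AA at 800x600 ~ {w}x{h} ({800 * 600 * aa:,} samples)")
```

2x comes out to about 1131x849, while 4x and 16x land exactly on 1600x1200 and 3200x2400.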
That being said, antialiasing isn’t just about fillrate. There are, presumably, computations going on per-pixel; at the very least, there are blending computations. For a given resolution and antialiasing level, those take a roughly constant amount of time per frame, regardless of how long the rest of the rendering takes. Maybe 2x antialiasing requires an additional, say, .5 milliseconds to do whatever it needs to. Maybe 8x needs 2-4ms to do its computations.
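A fixed per-frame cost like that would explain framerate dropping with AA level even when fillrate isn’t the bottleneck. A toy model, using purely hypothetical numbers in the same ballpark as the guesses above (the 5 ms base frame time is made up for illustration):

```python
def fps(render_ms, aa_overhead_ms):
    """Framerate when a constant per-frame AA resolve cost is added."""
    return 1000.0 / (render_ms + aa_overhead_ms)

render_ms = 5.0  # hypothetical base frame time, independent of AA
for aa, overhead_ms in [(0, 0.0), (2, 0.5), (4, 1.0), (8, 3.0)]:
    print(f"{aa}x AA: {fps(render_ms, overhead_ms):.0f} fps")
```

Even a half-millisecond fixed cost shaves visible fps off a fast frame, and the drop gets steeper the shorter the base frame time is.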
What kind of hardware are you using? Different hardware supports antialiasing in different ways. Also, how are you performing the antialiasing? Are you using multisampling?
Well, you could be hitting a memory bandwidth limit when doing AA, but that seems unlikely since you’re not slowing down even at the higher resolutions. It’s also worth noting that classic SGI-style multisample AA only takes multiple z and stencil samples per pixel, not color. So you might be running into limitations with z/stencil fill. IIRC, the GeForce FX can write roughly twice as many pure z/stencil pixels per pass as pixels that carry color.
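To put a rough number on the z/stencil traffic: a back-of-envelope sketch, assuming 32-bit color written once per pixel and 32-bit z/stencil written once per sample, per the SGI-style scheme described above (real hardware uses compression and caching, so this is an upper bound, not a measurement):

```python
def frame_mb(width, height, samples, color_bytes=4, zs_bytes=4):
    """Approximate framebuffer write traffic per frame, in MB."""
    pixels = width * height
    color = pixels * color_bytes            # one color write per pixel
    z_stencil = pixels * samples * zs_bytes  # one z/stencil write per sample
    return (color + z_stencil) / 1e6

for samples in (1, 2, 4):
    print(f"{samples}x: {frame_mb(800, 600, samples):.2f} MB per frame")
```

Under this model, the color traffic stays flat while z/stencil traffic grows linearly with the sample count, which is exactly where a z/stencil fill limit would show up.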