How to implement 24x AA on Radeon 4850?

In the CCC 3D settings I can choose the MSAA filter from Box, Narrow tent, Wide tent and Edge detect.
When selecting Edge detect, the sample count is listed as 24. Are these 24 actual samples, or just taps (like 8 samples / 24 taps)?
I’d like to know how to achieve this level of MSAA quality in my programs. IIRC the older Radeons used shaders for MSAA.
The maximum sample count reported is only 8, as returned by glGetIntegerv( GL_MAX_SAMPLES, &maxaa ).
Is there a multisample filter hint for ATI (for nVidia it is GL_MULTISAMPLE_FILTER_HINT_NV = 0x8534)?
I already implemented SSAA by downsampling with glBlitFramebufferEXT, but I want the maximum possible MSAA to speed up the operation.
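For reference, a minimal sketch of what I mean (assuming the EXT framebuffer extensions are loaded, e.g. via GLEW; WIDTH/HEIGHT are placeholders and the depth attachment is omitted):

// Multisampled FBO + blit-resolve path.
const GLsizei WIDTH = 1024, HEIGHT = 768;

GLint maxaa = 0;
glGetIntegerv(GL_MAX_SAMPLES, &maxaa);   // reports 8 here

// On nVidia one could also ask for the nicer filter:
// glHint(GL_MULTISAMPLE_FILTER_HINT_NV, GL_NICEST);   // no ATI equivalent known

GLuint msFbo, msColor;
glGenFramebuffersEXT(1, &msFbo);
glGenRenderbuffersEXT(1, &msColor);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msColor);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, maxaa,
                                    GL_RGBA8, WIDTH, HEIGHT);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msFbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, msColor);

// ... render the scene into msFbo ...

// Resolve the multisampled buffer into the default framebuffer.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msFbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, WIDTH, HEIGHT, 0, 0, WIDTH, HEIGHT,
                     GL_COLOR_BUFFER_BIT, GL_NEAREST);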

These 12x and 16x modes use samples from adjacent pixels. In this respect it’s conceptually like Quincunx.

There are still only 8 samples generated per pixel, at most, in the 12x and 16x modes.

This is arguably the right thing to do from a sampling theory point of view, but it will convolve (blur) the scene slightly (and again, that’s arguably the right thing to do despite what some pixel peeper might conclude), especially when things start to move. I proposed something similar years ago while at SGI.

It’s not clear what edge detect does; it may take additional samples per pixel where edges are detected in the scene, but that would basically be what MSAA already is by definition vs. supersampling. I could guess, but would rather not publish ideas here.

The secret sauce could be anything, but it’s a safe bet it’s not a raw, full 24 samples, given what 12x and 16x are. And they’ll probably never tell you.

Gamma-correct sample summation is many times more important than sample count, and the adaptive alpha-derivative MSAA samples are also way more important for real content quality.
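To make the gamma point concrete, here is a sketch of the idea in CPU code (the sRGB constants are the standard ones; a real resolve would of course run on the GPU):

#include <math.h>

// Gamma-correct sample summation: decode sRGB samples to linear, average
// there, then re-encode. Averaging raw sRGB values instead darkens edges
// and is the classic mistake.
static float srgbToLinear(float c) {
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}
static float linearToSrgb(float c) {
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}

float resolveChannel(const float* samples, int count) {
    float sum = 0.0f;
    for (int i = 0; i < count; ++i)
        sum += srgbToLinear(samples[i]);
    return linearToSrgb(sum / count);
}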

Here is an article about the AMD edge-detect filter algorithm:
http://developer.amd.com/gpu_assets/AA-HPG09.pdf

There is currently no API to force this filter to be used; the global override in the control panel is the only mechanism.

Thank you for the answers! I read the article, and it seems this is the theory behind it: the 24 are levels of gradation, not samples.
Samples stay at 8 and they are using shaders to perform “stochastic integration using samples from a 3x3 pixel neighborhood”.
Pity that it’s not exposed in any API, like the nVidia multisample hint is.

Tzupy, it’s a bit better than your summary suggests.

Having read the paper, this approach attempts to more accurately assign a weighting using available multisample mask information. To do this it reconstructs the edge location using the multisample information from adjacent pixels. It’s not merely pulling in new samples from adjacent pixels.

It uses the multisample information from extended pixels to more accurately determine edge location and uses this reconstructed edge position to assign weights for the multisample colors within the pixel. So it improves weighting using adjacency without the convolution of ‘tent’ or quincunx.

Since MSAA doesn’t necessarily shade each supersample but can assign weighting to single aggregate color samples (and often must for quality reasons; see the center vs. centroid sampling issue and related problems), it’s NOT a reasonable criticism to say “samples stay at 8”. The actual color samples are independent of the spatial samples, and the colors remain unconvolved while the spatial samples from adjacent pixels are used only for edge reconstruction.
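A toy sketch of just the weighting step, for the idea only (hypothetical names; the paper’s real contribution, fitting the edge from the 3x3 neighborhood’s sample masks in a shader pass, is omitted here):

// Assume an earlier pass fit an edge line nx*x + ny*y = d, in pixel-local
// coordinates in [0,1], from the coverage masks of the 3x3 neighborhood.
struct Edge { float nx, ny, d; };

// Estimate the pixel-area fraction on the positive side of the edge by
// point-sampling a fine grid (a stand-in for the paper's integration).
float coverageFraction(Edge e) {
    const int N = 16;
    int inside = 0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i) {
            float x = (i + 0.5f) / N, y = (j + 0.5f) / N;
            if (e.nx * x + e.ny * y >= e.d) ++inside;
        }
    return inside / float(N * N);
}

// Blend the pixel's two aggregate colors by the reconstructed coverage
// instead of by the raw 8-sample hit count (one channel shown).
float resolve(Edge e, float colorIn, float colorOut) {
    float w = coverageFraction(e);
    return w * colorIn + (1.0f - w) * colorOut;
}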

It’s pretty darned clever.

Where it doesn’t offer an improvement is in those situations that tend to need improvement the most, most notably sliver polygons, and the authors mention this. Other problem cases would be corners and complex pixels, but those would be no worse than standard MSAA.

I like it. Unfortunately it may be giving higher-quality AA a bad rap, possibly because pixel peepers are evaluating it on thin geometry. When you look at a thin object on screen (e.g. a telegraph cable in a 3D scene, as I saw in one evaluation), it’s unlikely you ever really have pixels crossed by only one edge, even if the cable is wider than a pixel, simply due to how it’s modeled.

Well, sorry for suggesting that it’s not so good; it’s definitely good, but not for me, since I can’t access it.
Implementing a similar AA enhancement with shaders, as they describe in the paper, is way beyond my current knowledge; maybe in a year I’ll try.
I said ‘samples stay at 8’ because I believe the multisampling still uses 8 samples, which is GOOD for keeping the framebuffer size within available limits.
It is unclear to me how much memory this approach uses, besides what’s needed for the 8x multisampling.
In another thread that I started (and got no answer to), I’m probably running out of memory when using 8x multisampled renderbuffers:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=273807#Post273807

For the adaptive 24x AA, at most 8 samples are stored (so the size of your FBO is width * height * 8 * pixelsizeinbytes).
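As a rough example, for a 1920x1200 RGBA8 color buffer (example numbers; depth/stencil and any driver-side compression savings ignored):

// Back-of-the-envelope FBO size for an 8x multisampled RGBA8 color buffer.
unsigned long long bytes = 1920ULL * 1200 * 8 * 4;  // width * height * samples * bytes/pixel
// = 73,728,000 bytes, i.e. ~70 MB for the color buffer alone.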

In the general case of a single edge intersecting a pixel, you only have 2 samples stored anyway, thanks to proprietary color compression hardware.