Multisample Details

Hi,
I’m trying to understand exactly how multisampling works. I get that there is some sort of pixel coverage value, that’s “full” in some sense on the interior of a polygon, and “partial” on the edges.

So, let’s say we have 4 samples per pixel, and 3 of the 4 are on the interior of the polygon edge. So 75% of the pixel’s final color comes from the color of this polygon (say it’s textured). Where exactly does the other 25% come from, if we only get 1 texture sample per pixel, and there is another textured polygon that covers the 4th sample?

Clearly I’m missing something, as each point sample (of which there are 4, in this case, per pixel) can have its own color, depth, etc., but not texture sample (of which there is only 1 per pixel). What is the difference between the color and the texture sample? Is the color just the polygon color, or what? The Red book sort of hand-waves over the details here (as do several other docs I’ve read through), which is a tad frustrating. Thanks!

 Rodan

Actually, the hardware takes as many texture samples as there are total samples, not pixels. That’s why the fragment shader is called a “fragment” shader, not a “pixel” shader.
When multisampling is enabled, your framebuffer is effectively enlarged in every dimension (to put it simply :) ). The depth, stencil, and color buffers are all enlarged, BUT the framebuffer you actually see on screen stays the same size: when rendering is done and the buffers are swapped, the driver fits the enlarged rendered image back onto your screen buffer.

In other words, if you have a 1024x768 framebuffer with 4x multisampling, you actually render into a (1024*2)x(768*2) framebuffer, and later it is downsampled to the real screen resolution again.

Therefore you now have 4 times more detailed information per pixel that ends up on screen.
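
For example, here is a minimal sketch of requesting such a buffer with GLUT and checking how many samples you actually got (this assumes the driver exposes ARB_multisample; the sample count it grants may differ from what you asked for):

```c
#include <stdio.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    GLint buffers = 0, samples = 0;

    glutInit(&argc, argv);
    /* Ask for a multisampled visual; the driver decides the sample count. */
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutInitWindowSize(1024, 768);
    glutCreateWindow("multisample test");

    glEnable(GL_MULTISAMPLE);   /* spelled GL_MULTISAMPLE_ARB on older headers */

    glGetIntegerv(GL_SAMPLE_BUFFERS, &buffers);  /* 1 if we got a MS framebuffer */
    glGetIntegerv(GL_SAMPLES, &samples);         /* e.g. 4 for 4x                */
    printf("sample buffers: %d, samples per pixel: %d\n", buffers, samples);

    /* ... register display/reshape callbacks and call glutMainLoop() as usual ... */
    return 0;
}
```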

Jan.

And only 1 texture sample is taken per triangle and per block of 4 fragments.

Jan,
But if that were the case, you’d get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.

By translucent you mean alpha-tested, right?
only 1 texture sample is taken per triangle and per block of 4 fragments.

I don’t understand what you mean by 1 texture sample is taken per triangle and per block of 4 fragments. I would understand if it were taken per block of 4 fragments, but I don’t get the “per triangle” part.

What I’m reading here sounds like supersampling, where an 800x600 scene is rendered into a 1600x1200 frame, and the 2x2 clusters of pixels are averaged and the result is put into the frame buffer. But I know it’s not that simple, because there’s still one texture sample, or shader run, per group of point samples.
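
(By “averaged” I mean something like this plain 2x2 box filter, which is just my mental model, not necessarily what the hardware does:)

```c
/* My mental model of the supersampling resolve: average each 2x2 block of
   the big (e.g. 1600x1200) RGBA8 image down to one pixel of the small
   (800x600) image. A plain box filter, not necessarily what hardware does. */
void downsample_2x2(const unsigned char *src, int srcW, int srcH,
                    unsigned char *dst)
{
    for (int y = 0; y < srcH / 2; ++y)
        for (int x = 0; x < srcW / 2; ++x)
            for (int c = 0; c < 4; ++c)   /* R, G, B, A */
            {
                int sum = src[((2 * y    ) * srcW + 2 * x    ) * 4 + c]
                        + src[((2 * y    ) * srcW + 2 * x + 1) * 4 + c]
                        + src[((2 * y + 1) * srcW + 2 * x    ) * 4 + c]
                        + src[((2 * y + 1) * srcW + 2 * x + 1) * 4 + c];
                dst[(y * (srcW / 2) + x) * 4 + c] = (unsigned char)(sum / 4);
            }
}
```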

The fact that point samples have their own depth, color, and stencil, but don’t get their own texture samples confuses me… Are all 4 fragment samples within a pixel? If they weren’t, I could see how a pixel could receive contributions from more than 1 texture. If they were, and the fragment group only gets 1 texture sample, I don’t see how a pixel can receive contributions from more than 1 texture.

OK: 1 texture sample per texture, per triangle, per final pixel.

It means that if 2 triangles share a final pixel, you get 2 fragments: all textures for triangle A are sampled once (i.e. 1 shader run), and the same for triangle B, but each fragment gets its own depth & stencil operations.

My understanding is that an implementation is free to take a texture sample per fragment, but as textures should not need extra filtering (trilinear, aniso, … already handle that), the result is cached.

ZbuffeR

Hm, interesting. You say 1 shader run. If that were so, then depth-modifying shaders wouldn’t be able to produce correct results…

Ahhh, ok, <light clicks on>, so it’s not that it’s one texture sample per pixel, necessarily (although that would be the case if there were just one textured polygon completely covering that pixel).

And it’s the fact that it’s one texture sample per triangle per pixel that gives the antialiasing at the edges (where heterogeneous things sum up), but not in the center of a translucent poly, looking through to some other edge (where there’s only one texture sample). Is that right?

Originally posted by Rodan:
Jan,
But if that were the case, you’d get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.

I think the GF 7xxx series has something to counteract this.
But you can pretty much bet that it is slower than without.

Jackis, that was my understanding, that if you modified the depth value in a shader, you’d overwrite the multisampled value (and hence get incorrect results). Here is a related thread:

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=11;t=000784#000000

Originally posted by Rodan:
So, let’s say we have 4 samples per pixel, and 3 of the 4 are on the interior of the polygon edge. So 75% of the pixel’s final color comes from the color of this polygon (say it’s textured). Where exactly does the other 25% come from, if we only get 1 texture sample per pixel, and there is another textured polygon that covers the 4th sample?
If your triangle covers three samples, the result from the fragment shader will be written to those three samples. The fourth sample will be left untouched. Another triangle may later cover the fourth sample and thus write to that sample. The blending of the samples happens at SwapBuffers(). This is called resolving the buffer, and the averaged samples are written out to a non-multisampled buffer.

Basically, multisampling is like rendering normally, except you have multiple samples per pixel. Normally a fragment gets rendered for all pixel centers that are covered. In multisampling a fragment is rendered for each pixel where one or more samples are covered. The value is written to the covered samples. The depth buffer also has multiple samples, and a depth test is done for each sample.
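
If you want to see the resolve spelled out rather than hidden inside SwapBuffers(), here is a rough sketch using multisampled FBOs (this assumes framebuffer-object and framebuffer-blit support, loaded through something like GLEW; msFBO and resolveFBO are just placeholder names for objects you created yourself, the first one with glRenderbufferStorageMultisample attachments):

```c
#include <GL/glew.h>

/* Sketch of the resolve made explicit, using multisampled FBOs instead of
   the default framebuffer. */
void resolve_msaa(GLuint msFBO, GLuint resolveFBO, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFBO);       /* e.g. 4 samples per pixel */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);  /* 1 sample per pixel       */

    /* The blit averages the samples of each pixel into a single value --
       the same operation the driver performs at SwapBuffers() time when
       you render to a multisampled default framebuffer. */
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```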

Originally posted by Jackis:
Actually, the hardware takes as many texture samples as there are total samples, not pixels. That’s why the fragment shader is called a “fragment” shader, not a “pixel” shader.
This whole “texture” thing is confusing in the context of multisampling, unless your shader only does plain texturing. It’s the fragment shader that’s run once per pixel with multisampling, whereas it’s run once per sample with supersampling. There’s no direct correlation between the number of texture samples and multisample samples.

It’s called a fragment shader because it shades fragments and not pixels. Pixels are in the buffer, fragments flow down the pipeline.

There’s a wonderful discussion of supersampling and multisampling in Real-time Rendering, 2nd Ed.

As a basis for a better understanding of multisampling, you may want to look at Carpenter’s A-Buffer, for example.

There’s also the OpenGL multisample extension specification, for some implementation details.

Originally posted by Jan:
[b] In other words, if you have a 1024x768 framebuffer with 4x multisampling, you actually render into a (1024*2)x(768*2) framebuffer, and later it is downsampled to the real screen resolution again.

Therefore you now have 4 times more detailed information per pixel that ends up on screen.

Jan. [/b]
I wouldn’t say it’s (1024*2)x(768*2), it’s more like 1024x768x4. Describing it as twice the size in both dimensions makes sense only if you do ordered grid sampling, which no modern hardware does. If you have programmable sample positions (like ATI does) it doesn’t make sense even for 4x. I mean, 6x is not 3x in one direction and 2x in the other. The pixels are still square.
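
(For what it’s worth, newer GL lets you query where the samples actually sit; a small sketch, assuming a GL 3.2-level context with ARB_texture_multisample and an extension loader such as GLEW:)

```c
#include <stdio.h>
#include <GL/glew.h>

/* Print where the samples sit inside the pixel. You generally won't
   see an ordered 2x2 grid. */
void print_sample_positions(void)
{
    GLint samples = 0;
    glGetIntegerv(GL_SAMPLES, &samples);
    for (GLint i = 0; i < samples; ++i)
    {
        GLfloat pos[2];
        glGetMultisamplefv(GL_SAMPLE_POSITION, (GLuint)i, pos);  /* in [0,1]^2 */
        printf("sample %d at (%.3f, %.3f)\n", (int)i, pos[0], pos[1]);
    }
}
```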

Originally posted by Rodan:
Jan,
But if that were the case, you’d get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.

The problem with alpha testing is that it kills fragments. Since there’s only one fragment per pixel (for a particular triangle), if you kill that one none of the samples are written. A better solution is to use alpha-to-coverage. Then the alpha value will be mapped to a coverage value so that if you write 0.75 to alpha it will cover three random samples. Well, not quite random, but selected according to some dither pattern.
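
Setting it up is just a couple of state changes; a minimal sketch (it assumes a multisampled framebuffer is already active, and the enums may carry an _ARB suffix on older headers):

```c
#include <GL/gl.h>

void draw_with_alpha_to_coverage(void)
{
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);  /* fragment alpha -> sample coverage    */
    glDisable(GL_ALPHA_TEST);               /* alpha test no longer needed          */
    glDisable(GL_BLEND);                    /* coverage takes the place of blending */

    /* ... draw the alpha-textured geometry (fence, foliage, ...) here;
       a fragment with alpha 0.75 gets written to roughly 3 of the 4
       samples of each pixel it covers ... */

    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}
```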

Humus, yes, yes, that makes sense! Thanks. So, if I have a translucent polygon, but use alpha-to-coverage, then an interior pixel will only have some of its samples written to by the shader run for the translucent polygon. Doesn’t that mean a polygon behind it, somewhat visible due to the translucency (alpha) of the occluding polygon, will then contribute to the remaining samples in the pixel? And if so, wouldn’t I get some antialiasing for edges visible through the translucent polygon?

The reason I ask is that I thought it was not possible with multisampling to get AA for edges visible through a translucent polygon.

That’s true, so you’ll indeed get antialiasing. I demonstrated this in this demo:
http://www.humus.ca/index.php?page=3D&ID=61

Yes, I took a look at it, thanks!

I wrote a little test program that draws an opaque red quad, then draws a larger translucent quad (using a texture with, say, 127 alpha everywhere) over the red quad.

What I don’t understand (after reading all of this) is why I am getting antialiasing on the edge of the red quad, as seen through the occluding translucent quad.

It seems to me multisampling should not be antialiasing the red quad, because the translucent quad has complete coverage over edge pixels of the red quad, and I haven’t enabled any modes that tell GL to take the alpha into consideration. So my only remaining question is, “How am I getting AA on the edges of polygons behind a translucent polygon?”

Note that this is slightly different from having geometry, like a fence, “painted” into a texture. Here, I’ve got real geometry behind an alpha texture of constant alpha value, and somehow I’m getting AA. (I’m using alpha blending for this, by the way, but with a single alpha value I shouldn’t get any AA.)
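
For reference, the drawing code boils down to something like this (paraphrased, not the exact program; the name halfAlphaTex, the coordinates, and the GLUT context are just placeholders/assumptions):

```c
#include <GL/glut.h>

extern GLuint halfAlphaTex;   /* RGBA texture with alpha = 127 everywhere (assumed) */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Opaque red quad, slightly behind the translucent one. */
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glBegin(GL_QUADS);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f,  0.5f, 0.0f);
        glVertex3f(-0.5f,  0.5f, 0.0f);
    glEnd();

    /* Larger translucent quad drawn over it with plain alpha blending. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, halfAlphaTex);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-0.8f, -0.8f, -0.5f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.8f, -0.8f, -0.5f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 0.8f,  0.8f, -0.5f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-0.8f,  0.8f, -0.5f);
    glEnd();

    glutSwapBuffers();   /* or your platform's SwapBuffers */
}
```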