Multisample Details



Rodan
08-22-2006, 08:36 PM
Hi,
I'm trying to understand exactly how multisampling works. I get that there is some sort of pixel coverage value, that's "full" in some sense on the interior of a polygon, and "partial" on the edges.

So, let's say we have 4 samples per pixel, and 3 of the 4 fall on the interior of the polygon. So 75% of the pixel's final color comes from the color of this polygon (say it's textured). Where exactly does the other 25% come from, if we only get 1 texture sample per pixel, and there is another textured polygon that covers the 4th sample?

Clearly I'm missing something: each point sample (of which there are 4 per pixel, in this case) can have its own color, depth, etc., but not its own texture sample (of which there is only 1 per pixel). What is the difference between the color and the texture sample? Is the color just the polygon color, or what? The Red Book sort of hand-waves over the details here (as do several other docs I've read through), which is a tad frustrating. Thanks!

Rodan

Jackis
08-22-2006, 11:04 PM
Actually, the hardware takes as many texture samples as the total number of samples, not the number of pixels. That's why the fragment shader is called a "fragment" shader, not a "pixel" shader.
When multisampling is enabled, your framebuffer is effectively enlarged in every dimension (in clearer words :) ). The depth, stencil, and color buffers all get enlarged, BUT the framebuffer you see on the screen stays the same size: when rendering is done and the buffers are swapped, the driver downsamples the enlarged rendered image onto your screen buffer.

Jan
08-23-2006, 01:07 AM
In other words, if you have a 1024x768 framebuffer with 4x multisampling, you actually render to a (1024*2)x(768*2) framebuffer, and later it is downsampled to the real screen resolution again.

Therefore you now have 4 times more detailed information per pixel that ends up on screen.

Jan.

ZbuffeR
08-23-2006, 02:03 AM
And only 1 texture sample is taken per triangle and per block of 4 fragments.

Rodan
08-23-2006, 08:22 AM
Jan,
But if that were the case, you'd get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.

ZbuffeR
08-23-2006, 08:29 AM
By translucent you mean alpha-tested, right?
only 1 texture sample is taken per triangle and per block of 4 fragments.

Rodan
08-23-2006, 09:09 AM
I don't understand what you mean by "1 texture sample is taken per triangle and per block of 4 fragments". I would understand if it were taken per block of 4 fragments, but I don't get the "per triangle" part.

What I'm reading here sounds like supersampling, where an 800x600 image is rendered into a 1600x1200 frame, and the 2x2 clusters of pixels are averaged and the result is put into the framebuffer. But I know it's not that simple, because there's still one texture sample, or shader run, per group of point samples.

The fact that point samples have their own depth, color, and stencil, but don't get their own texture samples, confuses me... Are all 4 fragment samples within a single pixel? If they weren't, I could see how a pixel could receive contributions from more than 1 texture. If they were, and the fragment group only gets 1 texture sample, I don't see how a pixel can receive contributions from more than 1 texture.

ZbuffeR
08-23-2006, 09:19 AM
OK: 1 texture sample per texture per triangle per final pixel.

It means that if 2 triangles share a final pixel, each one has 2 of the 4 fragments (samples). All textures for triangle A are sampled *once* (i.e. 1 shader run), same for triangle B, but each fragment gets its own depth & stencil operation.

My understanding is that an implementation is free to take a texture sample per fragment, but since the texture should not need extra filtering (beyond trilinear, aniso, ...), the result is cached.
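Something like this conceptual C sketch is what I mean (hypothetical types and names, not real hardware or driver code): one shader evaluation per triangle per pixel, but a per-sample depth test and per-sample storage:

#define NUM_SAMPLES 4

typedef struct { float r, g, b, a; } Color;

typedef struct {
    Color color[NUM_SAMPLES];   /* one color stored per sample */
    float depth[NUM_SAMPLES];   /* one depth stored per sample */
} MsaaPixel;

/* Shade once for the whole pixel (the single "shader run", where all
   texture sampling happens), then apply the result to each covered
   sample, with an individual depth test per sample. */
void shade_pixel(MsaaPixel *px,
                 Color fragment_color,  /* the one shader result */
                 const float sample_depth[NUM_SAMPLES],
                 const int covered[NUM_SAMPLES])
{
    for (int s = 0; s < NUM_SAMPLES; ++s) {
        if (covered[s] && sample_depth[s] < px->depth[s]) {
            px->color[s] = fragment_color;  /* same color for every sample */
            px->depth[s] = sample_depth[s]; /* but per-sample depth        */
        }
    }
}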

Jackis
08-23-2006, 09:57 AM
ZbuffeR

Hm, interesting. You say 1 shader run. If that were so, then depth-modifying shaders wouldn't be able to produce correct results...

Rodan
08-23-2006, 10:09 AM
Ahhh, ok, <light clicks on>, so it's not that it's one texture sample per pixel, _necessarily_ (although that would be the case if there were just one textured polygon completely covering that pixel).

And it's the fact that it's one texture sample per triangle per pixel that gives the antialiasing at the edges (where heterogeneous things sum up), but not in the center of a translucent poly, looking through to some other edge (where there's only one texture sample). Is that right?

zeoverlord
08-23-2006, 10:18 AM
Originally posted by Rodan:
Jan,
But if that were the case, you'd get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.
I think the GF 7xxx series has something to counteract this, but you can pretty much bet that it is slower than without.

Rodan
08-23-2006, 10:20 AM
Jackis, that was my understanding too: that if you modified the depth value in a shader, you'd overwrite the multisampled value (and hence get incorrect results). Here is a related thread:

http://www.opengl.org/discussion_boards/ubb/ultimatebb.php?ubb=get_topic;f=11;t=000784#000000

Humus
08-23-2006, 09:12 PM
Originally posted by Rodan:
So, let's say we have 4 samples per pixel, and 3 of the 4 fall on the interior of the polygon. So 75% of the pixel's final color comes from the color of this polygon (say it's textured). Where exactly does the other 25% come from, if we only get 1 texture sample per pixel, and there is another textured polygon that covers the 4th sample?
If your triangle covers three samples, the result from the fragment shader will be written to those three samples. The fourth sample will be left untouched. Another triangle may later cover the fourth sample and thus write to that sample. The blending of the samples happens at SwapBuffers(). This is called resolving the buffer, and the averaged samples are written out to a non-multisampled buffer.

Basically, multisampling is like rendering normally, except you have multiple samples per pixel. Normally a fragment gets rendered for all pixel centers that are covered. In multisampling a fragment is rendered for each pixel where one or more samples are covered. The value is written to the covered samples. The depth buffer also has multiple samples, and a depth test is done for each sample.
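In OpenGL terms, all you do is ask for a multisampled framebuffer and render normally. A minimal sketch, assuming GLUT headers that expose the GL 1.3 / ARB_multisample tokens (error handling omitted):

#include <GL/glut.h>
#include <stdio.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene normally ... */
    glutSwapBuffers();  /* the multisample resolve happens here */
}

int main(int argc, char **argv)
{
    GLint samples = 0;

    glutInit(&argc, argv);
    /* Ask for a double-buffered, multisampled framebuffer. */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("multisample test");

    glEnable(GL_MULTISAMPLE);            /* usually on by default */
    glGetIntegerv(GL_SAMPLES, &samples); /* how many samples we actually got */
    printf("samples per pixel: %d\n", samples);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}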

Humus
08-23-2006, 09:18 PM
Originally posted by Jackis:
Actually, the hardware takes as many texture samples as the total number of samples, not the number of pixels. That's why the fragment shader is called a "fragment" shader, not a "pixel" shader.
This whole "texture" thing is confusing in the context of multisampling, unless your shader only does plain texturing. It's the fragment shader that's run once per pixel with multisampling, whereas it's run once per sample with supersampling. There's no direct correlation between the number of texture samples and the number of multisample samples.

It's called a fragment shader because it shades fragments and not pixels. Pixels are in the buffer, fragments flow down the pipeline.

plasmonster
08-23-2006, 09:24 PM
There's a wonderful discussion of supersampling and multisampling in Real-Time Rendering, 2nd Ed.

As a basis for a better understanding of multisampling, you may want to look at Carpenter's A-Buffer, for example.

There's also the OpenGL multisample extension specification, for some implementation details.

Humus
08-23-2006, 09:25 PM
Originally posted by Jan:
In other words, if you have a 1024x768 framebuffer with 4x multisampling, you actually render to a (1024*2)x(768*2) framebuffer, and later it is downsampled to the real screen resolution again.

Therefore you now have 4 times more detailed information per pixel that ends up on screen.

Jan.
I wouldn't say it's (1024*2)x(768*2), it's more like 1024x768x4. Describing it as twice the size in both dimensions makes sense only if you do ordered-grid sampling, which no modern hardware does. If you have programmable sample positions (like ATI does), it doesn't make sense even for 4x. I mean, 6x is not 3x in one direction and 2x in the other. The pixels are still square.
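To put numbers on it: a 1024x768 buffer at 4x stores 1024 * 768 * 4 = 3,145,728 samples, exactly as many as a 2048x1536 buffer has pixels, but they are grouped as 4 samples per screen pixel (at whatever positions within that pixel the hardware uses), not as an ordered 2x2 grid of smaller pixels.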

Humus
08-23-2006, 09:27 PM
Originally posted by Rodan:
Jan,
But if that were the case, you'd get smoother edges in translucent textures, a problem which as I understand it multisampling does not solve.
The problem with alpha testing is that it kills fragments. Since there's only one fragment per pixel (for a particular triangle), if you kill that one, none of the samples are written. A better solution is to use alpha-to-coverage. Then the alpha value is mapped to a coverage value, so that if you write 0.75 to alpha it will cover three random samples. Well, not quite random, but selected according to some dither pattern.
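In OpenGL this is a single piece of state; a minimal sketch, assuming a multisampled GL 1.3+ context is current:

/* Enable alpha-to-coverage just for the alpha-tested geometry; each
   fragment's alpha is turned into a per-pixel sample coverage mask. */
glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
/* ... draw fences, foliage, etc. here ... */
glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);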

Rodan
08-24-2006, 07:56 AM
Humus, yes, yes, that makes sense! Thanks. So, if I have a translucent polygon, but use alpha-to-coverage, then an interior pixel will only have some of its samples written to by the shader run for the translucent polygon. Doesn't that mean a polygon behind it, somewhat visible due to the translucency (alpha) of the occluding polygon, will then contribute to the remaining samples in the pixel? And if so, wouldn't I get some antialiasing for edges visible through the translucent polygon?

The reason I ask is that I thought it was not possible with multisampling to get AA for edges visible through a translucent polygon.

Humus
08-24-2006, 05:39 PM
That's true, so you'll indeed get antialiasing. I demonstrated this in this demo:
http://www.humus.ca/index.php?page=3D&ID=61

Rodan
08-25-2006, 08:42 AM
Yes, I took a look at it, thanks!

I wrote a little test program that draws an opaque red quad, then draws a larger translucent quad (using a texture with, say, 127 alpha everywhere) over the red quad.

What I don't understand (after reading all of this) is why I *am* getting antialiasing on the edge of the red quad, as seen through the occluding translucent quad.

It seems to me multisampling should *not* be antialiasing the red quad, because the translucent quad has complete coverage over edge pixels of the red quad, and I haven't enabled any modes that tell GL to take the alpha into consideration. So my only remaining question is, "How am I getting AA on the edges of polygons *behind* a translucent polygon?"

Note that this is slightly different from having geometry, like a fence, "painted" into a texture. Here, I've got real geometry behind an alpha texture of constant alpha value, and somehow I'm getting AA. (I'm using alpha blending for this, by the way, but with a single alpha value, I shouldn't get any AA.)
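For reference, the core of the test looks roughly like this (a sketch, assuming a multisampled context like the GLUT skeleton earlier in the thread, with the constant-alpha texture replaced by a constant color for brevity):

static void draw_scene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);  /* painter's order: red quad first */

    /* opaque red quad, rotated so its edges are not axis-aligned
       and actually show aliasing */
    glDisable(GL_BLEND);
    glPushMatrix();
    glRotatef(30.0f, 0.0f, 0.0f, 1.0f);
    glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
    glBegin(GL_QUADS);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.5f,  0.5f);
        glVertex2f(-0.5f,  0.5f);
    glEnd();
    glPopMatrix();

    /* larger translucent quad on top, constant alpha ~127/255 */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
    glBegin(GL_QUADS);
        glVertex2f(-0.8f, -0.8f);
        glVertex2f( 0.8f, -0.8f);
        glVertex2f( 0.8f,  0.8f);
        glVertex2f(-0.8f,  0.8f);
    glEnd();

    glutSwapBuffers();
}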

ZbuffeR
08-25-2006, 08:57 AM
AA works at the edges. It is when alpha testing within a polygon that you see aliasing.

Rodan
08-25-2006, 12:51 PM
... right, my question is *why* AA works along the edge of a polygon that is completely covered by another (translucently texture-mapped) polygon. Given that the translucent polygon has full coverage over *all* samples in its interior, how does AA occur along the edges of the polygon underneath, visible through the translucent polygon's interior?

I realize this is sort of a derivative question with regard to the earlier discussion of aliasing and alpha testing; I just didn't want to start a whole new thread.

ZbuffeR
08-25-2006, 02:37 PM
The edges of a polygon are antialiased regardless of whatever other polygon (maybe a translucent one) is drawn on top of it later. This is not coverage; this is how multisampling basically works.

Korval
08-25-2006, 05:27 PM
So my only remaining question is, "How am I getting AA on the edges of polygons *behind* a translucent polygon?"
Because blending doesn't affect the results of multisampling.

OK, when you draw the first quad, you get various samples on the edge that say, "the color red on this pixel was seen 3 of the 6 times, and the background (say, black) was seen the rest of the time," and stuff of that nature.

So, then you blend a polygon on top of this. The blend changes the pixel value to, "The blend result of red and this color was seen 3 times, and the blend result of the background and this color was seen the rest of the time." That's why it still works; blending blends with all of the colors (or, at the edge of the blended triangle, only with the appropriate samples underneath).

In essence, multisample is supersampling, except that the fragment program is only run once per actual pixel. So it figures out which samples from the sample pattern (whether regular grid or whatever) are being written to by the fragment program and which aren't. Then it performs pixel operations for each of them.
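Reusing the hypothetical types from the conceptual sketch earlier in the thread, the per-sample blend and the final resolve would look something like this (again a sketch, not real hardware code):

/* GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA, applied to one sample */
Color blend(Color src, Color dst)
{
    Color out;
    out.r = src.a * src.r + (1.0f - src.a) * dst.r;
    out.g = src.a * src.g + (1.0f - src.a) * dst.g;
    out.b = src.a * src.b + (1.0f - src.a) * dst.b;
    out.a = src.a * src.a + (1.0f - src.a) * dst.a;
    return out;
}

/* One shader result per pixel, but blended individually with each
   covered sample, so per-sample edge information survives. */
void blend_pixel(MsaaPixel *px, Color frag, const int covered[NUM_SAMPLES])
{
    for (int s = 0; s < NUM_SAMPLES; ++s)
        if (covered[s])
            px->color[s] = blend(frag, px->color[s]);
}

/* The resolve at SwapBuffers(): average all samples of the pixel. */
Color resolve(const MsaaPixel *px)
{
    Color out = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int s = 0; s < NUM_SAMPLES; ++s) {
        out.r += px->color[s].r;
        out.g += px->color[s].g;
        out.b += px->color[s].b;
        out.a += px->color[s].a;
    }
    out.r /= NUM_SAMPLES;
    out.g /= NUM_SAMPLES;
    out.b /= NUM_SAMPLES;
    out.a /= NUM_SAMPLES;
    return out;
}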

Rodan
08-26-2006, 04:20 PM
Oh I see, so the "resolving the buffer" part at the end is really resolving samples that have already had alpha blending applied to them. Thanks for the explanation!