Multisampled Depth Renderbuffer

Hi. I’m trying to render some lines to a texture. I need them antialiased, so I first have to create multisampled color and depth renderbuffers, attach them to an FBO, and render the lines. Since multisampled FBOs cannot have texture attachments, I then have to blit the multisampled FBO to a plain old FBO with texture attachments.

This works excellently for the color buffer, and I get the following magnified result, which is antialiased:

However, the depth buffer does not appear to be multisampled:

The code that does all this is:


   // Create a multisampled color buffer.
   GLuint msaa_color;
   glGenRenderbuffersEXT(1, &msaa_color);
   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msaa_color);
   glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 8, GL_RGBA8,
                                       FBO_SIZE, FBO_SIZE);

   // Create a multisampled depth buffer.
   GLuint msaa_depth;
   glGenRenderbuffersEXT(1, &msaa_depth);
   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msaa_depth);
   glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 8,
                                       GL_DEPTH_COMPONENT24, FBO_SIZE,
                                       FBO_SIZE);

   // Create a multisampled fbo and attach the color and depth buffers.
   GLuint msaa_fbo;
   glGenFramebuffersEXT(1, &msaa_fbo);
   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msaa_fbo);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                GL_RENDERBUFFER_EXT, msaa_color);
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                GL_RENDERBUFFER_EXT, msaa_depth);

   // draw stuff

   // Create an fbo for blitting the multisampled fbo to a texture.
   GLuint fbo;
   glGenFramebuffersEXT(1, &fbo);
   glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

   // Attach the textures that will receive the resolved image.
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_TEXTURE_2D, lines_tex_id, 0);
   glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_TEXTURE_2D, depth_tex_id, 0);

   // Make the multisampled fbo the source and the texture fbo the target.
   glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaa_fbo);
   glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fbo);

   glBlitFramebufferEXT(0, 0, FBO_SIZE, FBO_SIZE, 0, 0, FBO_SIZE, FBO_SIZE,
                        GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
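For reference, the two textures attached above were created roughly like this (with GL_DEPTH_COMPONENT24 as the sized depth format, matching the multisampled depth buffer):

   GLuint lines_tex_id, depth_tex_id;

   // Color texture that receives the resolved color.
   glGenTextures(1, &lines_tex_id);
   glBindTexture(GL_TEXTURE_2D, lines_tex_id);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, FBO_SIZE, FBO_SIZE, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);

   // Depth texture that receives the resolved depth.
   glGenTextures(1, &depth_tex_id);
   glBindTexture(GL_TEXTURE_2D, depth_tex_id);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, FBO_SIZE, FBO_SIZE, 0,
                GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);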

To get the images, I retrieve the textures and save them out to a file. My FBOs are framebuffer complete.
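The readback itself is just a glGetTexImage per texture, roughly:

   // Read the resolved color back to the CPU (buffers via malloc, <stdlib.h>).
   unsigned char *color = malloc(FBO_SIZE * FBO_SIZE * 4);
   glBindTexture(GL_TEXTURE_2D, lines_tex_id);
   glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, color);

   // Read the resolved depth back as floats in [0, 1].
   float *depth = malloc(FBO_SIZE * FBO_SIZE * sizeof(float));
   glBindTexture(GL_TEXTURE_2D, depth_tex_id);
   glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
   // ...then both buffers are written out as image files...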

Should not the depth texture also be antialiased? Or is multisampling not actually performed for a depth buffer?

I would sincerely appreciate any insight. Let me know if I can provide more information. My graphics cards are an NVIDIA 7300 GeForce Go and a Quadro 3450 with current drivers. Both show the same results.

  • Chris

No. There is not much point in averaging depth values of neighbouring samples; it just doesn’t produce meaningful results.

Or is multisampling not actually performed for a depth buffer?

The multisampled depth buffer contains multiple depth samples per pixel. How exactly would you want them to be reduced to a single value?

No. There is not much point in averaging depth values of neighbouring samples; it just doesn’t produce meaningful results.

Why does it make sense to average color and not depth? With multisampling, pixels that previously held the clear value suddenly become part of a line. Shouldn’t they have a depth if they’re part of the line?

If I’m using a perspective projection, I agree that the average doesn’t make sense. But why not for an orthographic projection?

  • Chris

Why don’t you use GL_LINE_SMOOTH, if you’re only drawing lines?
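Something along these lines (smooth lines need blending enabled to composite properly):

   // Antialias the lines directly, without a multisampled FBO.
   glEnable(GL_LINE_SMOOTH);
   glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
   glEnable(GL_BLEND);
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
   // ...draw the lines...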

Why does it make sense to average color and not depth?

Because the result is not meaningful. Think about it.

If a pixel is made of 2 samples of depth 0.5, 2 samples of depth 0.6, and one sample of depth 0.2, what depth is the pixel?

It certainly isn’t the weighted average of the samples. That would wind up with a depth of 0.48, which isn’t the depth of anything you rendered into that pixel.

If you were to render something into your post-blit buffers at depth 0.49, it would fail the depth test against the averaged 0.48 and be discarded entirely, which makes no sense in terms of what you originally rendered. After all, if you did it to the original multisample buffer, it would lose only to the 0.2 sample; it would beat the 0.5 and 0.6 samples, so its color would end up covering most of the pixel.

In short, after doing the multisample reduction, the depth buffer cannot make any form of sense relative to what was originally rendered. So the implementation picks one of the depth values (possibly the largest?) and uses that for the entire pixel.
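To make the arithmetic concrete, here is the example above as a small C program (assuming a GL_LESS depth test):

   #include <stdio.h>

   int main(void)
   {
       /* Five depth samples in one pixel: 2x 0.5, 2x 0.6, 1x 0.2. */
       float samples[5] = { 0.5f, 0.5f, 0.6f, 0.6f, 0.2f };
       float incoming = 0.49f;

       /* A hypothetical averaging resolve. */
       float resolved = 0.0f;
       for (int i = 0; i < 5; ++i)
           resolved += samples[i];
       resolved /= 5.0f; /* 0.48 */

       /* Against the resolved value the fragment fails outright... */
       printf("vs resolved %.2f: %s\n", resolved,
              incoming < resolved ? "pass" : "fail");

       /* ...but against the real samples it wins 4 out of 5. */
       int passes = 0;
       for (int i = 0; i < 5; ++i)
           if (incoming < samples[i])
               ++passes;
       printf("vs samples: passes %d of 5\n", passes);
       return 0;
   }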

While a little off-topic: if the lines are only 2D and you have CPU to spare, you could have a look at Anti-Grain.

(I’ve used it myself to render 2D lines to a texture, and was I in for a pleasant surprise in quality! :) )

While a little off-topic: if the lines are only 2D and you have CPU to spare, you could have a look at Anti-Grain.

Too bad it’s only released under GPL, which means that you can only use it as a library in other GPL’d code. It’s not even LGPL, which would let you use it as a .dll in non-GPL code.

I understand all this, but my question is: how does it make any more sense to average the color, then? I could apply your same argument to the multisampling of color. The averaged color doesn’t actually occur; it’s neither the clear color nor the line’s color. The operation, however, creates an effect that we want, so it seems logical to us. We’re coloring pixels that aren’t truly on the line in a way that represents neither line nor background. How is it, then, that similarly assigning these false pixels a depth is so silly?

I understand all this, but my question is: how does it make any more sense to average the color, then? I could apply your same argument to the multisampling of color.

It makes sense because that’s what you asked to do when you started the whole multisample process. It is understood that this is what you were interested in doing by even creating a multisampled color buffer.

More importantly, the value of a color does not change, for example, how things are rendered. The value of the depth buffer does. A “meaningless” color value can still look correct; a meaningless depth buffer is never correct. Even if it is what you expect, I can’t imagine a circumstance where it is ever what you want. That is, I can’t imagine how it would ever be useful in any way.

I am interested in a way to get averaged depth values from a multisampled buffer, too.

There are some applications (especially in postprocessing effects) for averaging depth values:

e.g. using depth-based blur to simulate scattering through a large amount of atmosphere. Sure, it’s only an approximation, because the depth-to-blur-strength function is nonlinear (averaging doesn’t commute with it), but it’s still better than the maximum, the minimum, or whatever.

So is there a way to get averaged depth samples (other than redundantly rendering depth to an extra channel in a normal texture)?

Or is there a possibility to sample directly from the multisampled depth buffer (and get the averaging by using GL_LINEAR filtering with 2x2 multisampling, for example, or better yet, a custom shader-based resolve tailored to the specific application)?

using depth-based blur to simulate scattering through a large amount of atmosphere, sure, it’s only an approximation, because the depth-to-blur-strength function is nonlinear, but it’s still better than the maximum or minimum

Um, that doesn’t work. Because the depth buffer is non-linear, an averaged value won’t even correspond to the actual average depth.
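A quick numeric illustration (a standard perspective projection with near = 1 and far = 100; win_depth() is the usual eye-distance-to-window-depth mapping):

   #include <stdio.h>

   /* Eye-space distance d -> window-space depth in [0, 1]. */
   static float win_depth(float d, float n, float f)
   {
       return f * (d - n) / (d * (f - n));
   }

   /* Window-space depth -> eye-space distance (the inverse). */
   static float eye_dist(float z, float n, float f)
   {
       return f * n / (f - z * (f - n));
   }

   int main(void)
   {
       const float n = 1.0f, f = 100.0f;
       float z1 = win_depth(2.0f, n, f);  /* ~0.505 */
       float z2 = win_depth(50.0f, n, f); /* ~0.990 */
       float avg = 0.5f * (z1 + z2);      /* ~0.747 */

       /* The averaged window depth linearizes to ~3.85 units,
          nowhere near the true average distance of 26 units. */
       printf("avg window depth %.3f -> %.2f units (true avg: 26)\n",
              avg, eye_dist(avg, n, f));
       return 0;
   }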

So is there a way to get averaged depth samples

No.

other than redundantly rendering depth to an extra channel in a normal texture

That wouldn’t work anyway, since you can’t read from a multisampled buffer in a shader (which is why there are no multisampled textures, only multisampled renderbuffers).

I need it for a deferred shading rendering approach. Since I have a lot of near pixel-sized triangles, which results in poor fragment shader performance, I try to gain speed by moving the lighting and fogging calculations to a fullscreen-quad postprocessing step. I am looking for ways to integrate multisampling: because the postprocessing operates on the downsampled buffer, there is no way to correctly determine a fogging factor for a single pixel, since it may be composed of multiple depths. The point is, the averaged non-linear depth can be useful, and maybe more so than the actual average depth, for getting an approximation to the correct antialiased fogging factor.

I meant rendering depth to a multisampled color buffer and downsampling that.
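i.e. something like the following sketch (assuming a float renderbuffer format such as GL_RGBA32F_ARB is supported, and that the fragment shader writes linear eye-space depth into this attachment):

   // A second, float-format multisampled color buffer to hold linear depth.
   GLuint msaa_lindepth;
   glGenRenderbuffersEXT(1, &msaa_lindepth);
   glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msaa_lindepth);
   glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 8, GL_RGBA32F_ARB,
                                       FBO_SIZE, FBO_SIZE);

   // Attach it as a second color attachment; the resolve blit then
   // averages the stored linear depths like any other color channel.
   glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                                GL_RENDERBUFFER_EXT, msaa_lindepth);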

BTW, DX 10.0 allows reading from a multisampled texture, and DX 10.1 allows reading from a multisampled depth buffer. I had hoped there was a way in GL.

AFAIK, in DX10 yes, but not in OpenGL. But even in DX10 there is nothing to get natively averaged depth values (or this is something I’m not aware of).

However, I still can’t see the point of doing that… Multisampled colors are useful and make sense, since they produce softer, “blended” edges, but multisampled depth? I mean, I can see the point of getting per-sample depth values (post effects and so on), but having such a “resolve operator” seems quite weird to me.

Only to save shader cycles: a fast custom resolve with per-pixel postprocessing may be faster than per-sample postprocessing followed by a standard resolve to a pixel.

But I found another, more generally applicable approach that switches from per-sample postprocessing to per-pixel processing if the depths within a pixel don’t differ too much, so the costly shader path is only applied to the edges of polygons.

In DX10 you can get the depth gradients using ddx and ddy; is there something like that in OpenGL?

Yes, dFdx() and dFdy().

I need it for a deferred shading rendering approach

Tough. Deferred shading and multisampling are mutually exclusive.

the averaged non-linear depth can be useful,

No, it can’t. It would make all of your lighting completely broken.

BTW, you don’t have to manually line-wrap your posts; the browser will do it for you.

So is there a way to get averaged depth samples

No.

Surely you can reverse the depth calculation, average the linearized values, then convert the result back into OpenGL z-space.
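In code, something like this sketch (assuming the standard perspective depth mapping, with n and f the near and far plane distances):

   /* Window-space depth -> eye-space distance. */
   static float eye_dist(float z, float n, float f)
   {
       return f * n / (f - z * (f - n));
   }

   /* Eye-space distance -> window-space depth. */
   static float win_depth(float d, float n, float f)
   {
       return f * (d - n) / (d * (f - n));
   }

   /* Resolve depth samples by averaging in linear eye space,
      then converting the mean back into window-space z. */
   float resolve_depth(const float *samples, int count, float n, float f)
   {
       float sum = 0.0f;
       for (int i = 0; i < count; ++i)
           sum += eye_dist(samples[i], n, f); /* undo the projection */
       return win_depth(sum / count, n, f);   /* redo it on the mean */
   }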

Though for lines only, you would be better off doing the AA in a shader (the results will be far higher quality). I had a quick Google:
http://people.csail.mit.edu/ericchan/articles/prefilter/
I don’t know how good this is, though.

Deferred shading and multisampling are mutually exclusive.

They are not; e.g., refer to
http://ati.amd.com/developer/gdc/2008/DirectX10.1.pdf

Such global statements without any argument don’t help very much. I already described my approach in previous posts; I am too lazy to do it again…

http://ati.amd.com/developer/gdc/2008/DirectX10.1.pdf

Then use D3D and stop complaining.

Such global statements without any argument don’t help very much.

I assumed that you could figure it out for yourself. Lighting often uses the distance between the object and the light. If that distance is wrong, then the lighting computations will be wrong.