Multisampled Depth Renderbuffer



kaerimasu
10-10-2008, 10:55 AM
Hi. I'm trying to render some lines to a texture. I need them antialiased, so I first have to create multisampled color and depth renderbuffers, attach them to an FBO, and render the lines. Since multisampled FBOs cannot have texture attachments, I then have to blit the multisampled FBO to a plain old FBO with texture attachments.

This works excellently for the color buffer, and I get the following magnified result, which is antialiased:

http://www.cs.utk.edu/~cjohnson/forothers/msaa_color.png

However, the depth buffer does not appear to be multisampled:

http://www.cs.utk.edu/~cjohnson/forothers/msaa_depth.png

The code that does all this is:



// Create a multisampled color buffer.
GLuint msaa_color;
glGenRenderbuffersEXT(1, &msaa_color);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msaa_color);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 8, GL_RGBA8,
FBO_SIZE, FBO_SIZE);

// Create a multisampled depth buffer.
GLuint msaa_depth;
glGenRenderbuffersEXT(1, &msaa_depth);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, msaa_depth);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 8,
GL_DEPTH_COMPONENT, FBO_SIZE,
FBO_SIZE);

// Create a multisampled fbo and attach the color and depth buffers.
GLuint msaa_fbo;
glGenFramebuffersEXT(1, &msaa_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, msaa_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_RENDERBUFFER_EXT, msaa_color);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_RENDERBUFFER_EXT, msaa_depth);

// draw stuff

// Create an fbo for blitting the multisampled fbo to a texture.
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

// Draw into the texture.
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, lines_tex_id, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_TEXTURE_2D, depth_tex_id, 0);

// Make the multisampled fbo the source and the texture fbo the target.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, msaa_fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fbo);

glBlitFramebufferEXT(0, 0, FBO_SIZE, FBO_SIZE, 0, 0, FBO_SIZE, FBO_SIZE,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);


To get the images, I retrieve the textures and save them out to a file. My FBOs are framebuffer complete.

Should not the depth texture also be antialiased? Or is multisampling not actually performed for a depth buffer?

I would sincerely appreciate any insight. Let me know if I can provide more information. My graphics cards are an NVIDIA 7300 GeForce Go and a Quadro 3450 with current drivers. Both show the same results.

- Chris

Xmas
10-10-2008, 11:07 AM
Should not the depth texture also be antialiased?
No. There is not much point in averaging depth values of neighbouring samples, it just doesn't produce meaningful results.


Or is multisampling not actually performed for a depth buffer?
The multisampled depth buffer contains multiple depth samples per pixel. How exactly would you want them to be reduced to a single value?

kaerimasu
10-10-2008, 11:30 AM
Should not the depth texture also be antialiased?
No. There is not much point in averaging depth values of neighbouring samples, it just doesn't produce meaningful results.


Why does it make sense to average color and not depth? Pixels that previously held the clear value suddenly become part of a line with multisampling. Should they not have a depth if they're part of the line?

If I'm using a perspective projection, I agree that the average doesn't make sense. But why not for an orthographic projection?

- Chris

arekkusu
10-10-2008, 06:40 PM
Why don't you use GL_LINE_SMOOTH, if you're only drawing lines?

Korval
10-10-2008, 10:29 PM
Why does it make sense to average color and not depth?

Because the result is not meaningful. Think about it.

If a pixel is made of 2 samples of depth 0.5, 2 samples of depth 0.6, and one sample of depth 0.2, what depth is the pixel?

It certainly isn't the weighted average of the samples. That would wind up with a depth of 0.48, which isn't the depth of anything you rendered into that pixel.

If you were to render something on your post-blit depth buffer with depth 0.49, it would overwrite the color, which makes no sense in terms of what you originally rendered. After all, if you did it to the original multisample buffer, it would write over only the 0.2 depth value; it would combine its samples with the 0.5 and 0.6 samples, making the final value a combination of samples.

In short, after doing the multisample reduction, the depth buffer cannot make any form of sense relative to what was originally rendered. So the implementation picks one of the depth values (possibly the largest?) and uses that for the entire pixel.

tamlin
10-11-2008, 02:25 AM
While a little off-topic, if the lines are only for 2D and if you have CPU to spare, you could have a look at Anti-Grain (http://www.antigrain.com/).

(I've used it myself to render 2D lines to a texture, and was I in for a quality surprise! :) )

Korval
10-11-2008, 03:08 PM
While a little off-topic, if the lines are only for 2D and if you got CPU to spare you could have a look at Anti-Grain.

Too bad it's only released under GPL, which means that you can only use it as a library in other GPL'd code. It's not even LGPL, which would let you use it as a .dll in non-GPL code.

kaerimasu
10-11-2008, 04:59 PM
Why does it make sense to average color and not depth?

Because the result is not meaningful. Think about it.

If a pixel is made of 2 samples of depth 0.5, 2 samples of depth 0.6, and one sample of depth 0.2, what depth is the pixel?


I understand all this, but my question is: how does it make any more sense to average the color, then? I could apply your same argument to the multisampling of color. The averaged color doesn't actually occur; it's neither the clear color nor the line's color. The operation, however, creates an effect that we want, so it seems logical to us. We're coloring pixels that aren't truly on the line in a way that represents neither line nor background. How, then, is similarly assigning these false pixels a depth so silly?

Korval
10-11-2008, 05:12 PM
I understand all this, but my question is how does it make any more sense to average the color then? I could apply your same argument to the multisampling of color.

It makes sense because that's what you asked to do when you started the whole multisample process. It is understood that this is what you were interested in doing by even creating a multisampled color buffer.

More importantly, the value of a color does not change, for example, how things are rendered. The value of the depth buffer does. A "meaningless" color value can still look correct; a meaningless depth buffer is never correct. Even if it is what you expect, I can't imagine a circumstance where it is ever what you want. That is, I can't imagine how it would ever be useful in any way.

vs987
10-11-2008, 05:12 PM
Why does it make sense to average color and not depth?

Because the result is not meaningful. Think about it.

If a pixel is made of 2 samples of depth 0.5, 2 samples of depth 0.6, and one sample of depth 0.2, what depth is the pixel?

It certainly isn't the weighted average of the samples. That would wind up with a depth of 0.48, which isn't the depth of anything you rendered into that pixel.

If you were to render something on your post-blit depth buffer with depth 0.49, it would overwrite the color, which makes no sense in terms of what you originally rendered. After all, if you did it to the original multisample buffer, it would write over only the 0.2 depth value; it would combine its samples with the 0.5 and 0.6 samples, making the final value a combination of samples.

In short, after doing the multisample reduction, the depth buffer cannot make any form of sense relative to what was originally rendered. So the implementation picks one of the depth values (possibly the largest?) and uses that for the entire pixel.

i am interested in the possibility of getting averaged depth values from a multi-sampled buffer, too.

there are some applications (especially in postprocessing effects) for averaging depth values:

e.g. using depth-based blur to simulate scattering through a large amount of atmosphere.
sure, it's only an approximation, because the depth-to-blur-strength function is no homomorphism,
but it's still better than the maximum, minimum, or whatever.

so is there a way to get averaged depth samples (other than redundantly rendering depth to an extra channel of a normal texture)?

or is there a possibility to sample directly from the multisampled depth buffer (and get the averaging by using GL_LINEAR filtering with 2x2 multisampling, for example, or even better a custom shader-based resolve tailored to the specific application)?

Korval
10-11-2008, 05:49 PM
using depth-based blur to simulate scattering through a large amount of atmosphere; sure, it's only an approximation, because the depth-to-blur-strength function is no homomorphism, but it's still better than the maximum or minimum

Um, that doesn't work. Because the depth buffer is non-linear, an averaged value won't even correspond to the actual average depth.


so is there a way to get averaged depthsamples

No.


other than redundantly rendering depth to an extra channel of a normal texture

That wouldn't work anyway, since you can't read from a multisample texture (which is why there are no multisampled textures; only multisampled renderbuffers).

vs987
10-12-2008, 02:43 AM
Um, that doesn't work. Because the depth buffer is non-linear, an averaged value won't even correspond to the actual average depth.

i need it for a deferred shading approach.
because i have a lot of near pixel-sized triangles,
which results in poor fragment shader performance,
i try to gain speed
by moving lighting and fogging calculations
to a fullscreen-quad postprocessing step.
i am looking for ways to integrate multi-sampling:
because the postprocessing operates on the downsampled buffer, there is no way to correctly determine a fogging factor for a single pixel, since it may be composed of multiple depths.
the point is,
the averaged non-linear depth can be useful,
maybe even more than the actual average depth,
for approximating the correct antialiased fogging factor.



That wouldn't work anyway, since you can't read from a multisample texture (which is why there are no multisampled textures; only multisampled renderbuffers).

i meant rendering depth to a multisample color buffer,
and downsampling it.

btw,
dx 10.0 allows reading from a multisampled texture,
dx 10.1 allows reading from a multisampled depth buffer.
i had hoped there'd be a way in gl.

A. Masserann
10-12-2008, 08:14 AM
so is there a way to get averaged depthsamples (except a redundant rendering of depth to an extra channel in a normal texture)?

or is there a possibilty to directly sample from the multisampled depthbuffer (and get the averaging by using GL_LINEAR filtering with 2x2 multisampling i.e. or even better to use a custom shader-based resolve tailored for the specific application)?

AFAIK, in DX10 yes, but not in OpenGL. But even in DX10 there is nothing to natively get averaged depth values (or it is something I'm not aware of).

However, I still can't see the point of doing that... Multisampled colors are useful and _do_ make sense, since they produce softer, "blended" edges, but multisampled depth?
I mean, I can see the point of getting per-sample depth values (post effects and so on); but having such a "resolve operator" seems quite weird to me.

vs987
10-12-2008, 09:50 AM
However, I still can't see the point of doing that... Multisampled colors are useful and _do_ make sense, since they produce softer, "blended" edges, but multisampled depth?
I mean, I can see the point of getting per-sample depth values (post effects and so on); but having such a "resolve operator" seems quite weird to me.

only to save shader cycles: having a
fast custom resolve with per-pixel postprocessing may be faster than
per-sample postprocessing followed by a standard resolve to a pixel.

but i found another, more generally applicable approach
that switches from per-sample postprocessing to per-pixel processing if the depths within a pixel don't differ too much.
so the costly shader path is only applied to the edges of polygons.

A. Masserann
10-12-2008, 11:45 AM
In DX10 you can get the depth gradients using ddx and ddy; is there something like that in OpenGL?

Seth Hoffert
10-12-2008, 11:56 AM
Yes, dFdx() and dFdy().

Korval
10-12-2008, 03:09 PM
i need it for a deferred shading rendering approach

Tough. Deferred shading and multisampling are mutually exclusive.


the averaged non-linear depth can be useful,

No, it can't. It would make all of your lighting completely broken.

BTW, you don't have to manually line wrap your lines; the browser will do it for us.

zed
10-12-2008, 05:04 PM
>>>> so is there a way to get averaged depth samples
>> No.

surely you can reverse the depth calculation,
do the average,
and then convert this value back into OpenGL 'z-space'.

Though for lines only you would be better off doing the AA in a shader (the results will be far higher quality).
Had a quick google:
http://people.csail.mit.edu/ericchan/articles/prefilter/
I don't know how good this is, though.

vs987
10-13-2008, 12:25 AM
i need it for a deferred shading rendering approach

Tough. Deferred shading and multisampling are mutually exclusive.



they are not; e.g. refer to
http://ati.amd.com/developer/gdc/2008/DirectX10.1.pdf



the averaged non-linear depth can be useful,

No, it can't. It would make all of your lighting completely broken.

Such global statements without any argument don't help very much. I already described my approach in previous posts; I am too lazy to do it again....

Korval
10-13-2008, 12:39 AM
http://ati.amd.com/developer/gdc/2008/DirectX10.1.pdf

Then use D3D and stop complaining.


Such global statements without any argument don't help very much,

I assumed that you could figure it out for yourself. Lighting often uses the distance between the object and the light. If that distance is wrong, then the lighting computations will be wrong.

tamlin
10-13-2008, 02:14 AM
While a little off-topic, if the lines are only for 2D and if you got CPU to spare you could have a look at Anti-Grain.

Too bad it's only released under GPL

2.5 is GPL2 or later. 2.4 is under a "Modified BSD License".
http://www.antigrain.com/license/index.html

vs987
10-13-2008, 07:14 AM
http://ati.amd.com/developer/gdc/2008/DirectX10.1.pdf

Then use D3D and stop complaining.


Such global statements without any argument dont help very much,

I assumed that you could figure it out for yourself. Lighting often uses the distance between the object and the light. If that distance is wrong, then the lighting computations will be wrong.

In exactly what post did I start complaining?

And "then use D3D" is not the answer I want to hear in an OpenGL forum when asking for help.
Please don't answer at all then, and don't waste your, my, or others' time.

I only quoted a D3D reference to show you
that there are techniques readily available for deferred shading with multisampling.

I said the average non-linear depth may be useful for SOME applications, not ALL.
Sure, for lighting it will be ugly.
But for computing depth-based fog/scattering it may be useful,
because non-linear depth shares some properties with a fog blend factor based on distance to the eye,
like a concentration of values near 1 for far distances,
or that it is monotonic (lighting is not).
So the average of the non-linear depth, followed by the mapping to a fog blend factor, may be an acceptable
approximation to the real desired value, which is normally
computed per sample and then averaged.

Ilian Dinev
10-13-2008, 09:27 AM
By the way, you can draw antialiased lines without multisampling. Just draw 2px-wide lines with a shader and alpha blending. The shader sets gl_FragColor.a based on the distance from the pixel to the line. Looks fine, and even better when lines are 4px wide.
But this completely skips writing to the depth-buffer, so it might not be helpful to you.

kaerimasu
10-13-2008, 09:52 AM
I understand all this, but my question is how does it make any more sense to average the color then? I could apply your same argument to the multisampling of color.

It makes sense because that's what you asked to do when you started the whole multisample process. It is understood that this is what you were interested in doing by even creating a multisampled color buffer.


I'm sorry, but I asked also for a multisampled depth buffer. Are you trying to reinforce my point? My point of posting is that this discrepancy between treatment of buffers seems arbitrary. Certainly, averaging depth (which is linear for orthographic projections) isn't always what is wanted. But who's to say that such functionality is never useful? The GL_EXT_framebuffer_multisample extension makes no mention that depth attachments are treated differently.



More importantly, the value of a color does not change, for example, how things are rendered. The value of the depth buffer does. A "meaningless" color value can still look correct; a meaningless depth buffer is never correct. Even if it is what you expect, I can't imagine a circumstance where it is ever what you want. That is, I can't imagine how it would ever be useful in any way.


You're making an aesthetic argument here for a narrow (though primary) application of the API. OpenGL is not just used for rendering realistic 3-D scenes, so your assumption that averaging color is correct while averaging depth is incorrect may be appropriate for your applications but it is not some gold standard.

Multisampling is effectively an image space operation, so trying to assign some higher level semantic meaning is not always appropriate. Essentially, we are performing a weighted-blending of a buffer's values. I can filter the depth buffer on the CPU pretty easily, but I was hoping to antialias on the GPU.

In response to your statement that you cannot see any use for a multisampled depth buffer, it seems like this would be exactly what you'd want to slightly emboss primitives into the background.

- Chris

Ilian Dinev
10-13-2008, 11:29 AM
Multisampling is effectively an image space operation, so trying to assign some higher level semantic meaning is not always appropriate. Essentially, we are performing a weighted-blending of a buffer's values. I can filter the depth buffer on the CPU pretty easily, but I was hoping to antialias on the GPU.

Ah, then why not convert depth to color (gl_FragData[1].x=...), and do MRT? You'll get the results you want, I think.

kaerimasu
10-13-2008, 12:07 PM
Ah, then why not convert depth to color (gl_FragData[1].x=...), and do MRT? You'll get the results you want, I think.


Good idea. I'll try it out. Thanks.

- Chris

bertgp
10-14-2008, 06:57 AM
Ah, then why not convert depth to color (gl_FragData[1].x=...), and do MRT? You'll get the results you want, I think.


Good idea. I'll try it out. Thanks.

- Chris

Keep in mind that if you have a lot of overdraw, you might still want to keep a "regular" depth buffer. This way you can benefit from the early-Z culling. This may or may not make a difference depending on your pixel shader complexity, platform, etc.