Depth to Color + multisampling?

I’m trying to render my depth buffer into my color buffer for a compositing effect (briefly described here:
http://www.opengl.org/registry/specs/NV/copy_depth_to_color.txt ), but the edges come out jagged.
So I would like to use multisampling too, but when I turn it on, the result is not what I expected:
the edges are smooth, but I don’t want the extra garbage in the center.

I’m using glReadPixels to read my zbuffer, then using glDrawPixels to render it.


 glEnable(GL_MULTISAMPLE_ARB);   // render the scene with MSAA on
 RenderScene();
 glDisable(GL_MULTISAMPLE_ARB);

 // read the (resolved) depth buffer back to the CPU
 glReadPixels( 0,0, w,h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer );

 // in ortho mode, push the depth values back out
 glDrawPixels( w,h, GL_DEPTH_COMPONENT, GL_FLOAT, depth_buffer );

Is this the proper way to get an (anti-aliased) composite outline of the rendered scene?
glDepthFunc() is always set to GL_LEQUAL, and GL_DEPTH_TEST is enabled.

Any suggestions?

Ok, I take it from the lack of responses that it is either impossible to do or no one is willing to provide example source code.
Oh well.

Not quite. There are a few logic holes in your original post and I think most people (like me) were just confused.

First, if performance is a concern, this is a terrible way to go about what you’re trying to do. You’re rendering the scene, reading the entire depth buffer back to the CPU, then pushing it all back to the GPU. A better approach is to render into an FBO and then blit/render that to the window framebuffer, so everything stays on the GPU (see the sketch at the end of this post).

Second, if you just want a silhouette like the first image, what are you using the depth buffer for anyway?

Third, AFAIK there is no explicitly defined algorithm for the MSAA depth resolve. For instance, if you have multisampled depths of 0.5, 0.9, 0.1, and 0.0, what exactly do you expect to get back as the depth of that pixel? Unfortunately, you can’t control it.

Fourth, I cannot infer how you are getting the first image versus the second image if the same GL state and program are being used to render the shape, other than FSAA/multisample rasterization being toggled, of course.

So, what effect specifically are you trying to achieve? And what specifically puzzles you about the results you’re getting? If you state that, I bet lots of folks can help you out. Feel free to post a short test program.
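For reference, here is roughly what that keep-it-on-the-GPU path looks like. This is just a sketch assuming GL 3.0-style framebuffer objects and framebuffer blit; the names fbo/colorRb/depthRb and the sample count of 4 are placeholders:

GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorRb);
glGenRenderbuffers(1, &depthRb);

// multisampled color and depth storage
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, w, h);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT24, w, h);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

RenderScene();   // draw into the multisampled FBO

// resolve straight into the window framebuffer -- no CPU round trip
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);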

Ok, let me make an analogy.

Anybody here use 3dsmax? When you render out the scene to a bitmap, the dialog window gives you an option
to preview the RGB and alpha channels. The alpha channel is what I want, using OpenGL of course.

But OpenGL doesn’t create this alpha channel for you. That’s why I’m using the depth buffer. The technique
is called zbuffer compositing. Call it silhouetting if you want.

And my problem is that OpenGL doesn’t give you an anti-aliased ‘silhouette’. Turning on OpenGL multi-sampling doesn’t help either.

And I’m not concerned about optimizations. I would just be happy to get it to work.

OpenGL will create an alpha channel. You need to request it in your pixel format.

Once you have an alpha channel and MSAA, just make all your shaders output gl_FragColor.w = 1.0;
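For example, with a GLUT-based setup you can request the alpha bits like this (a sketch; with WGL/GLX you would instead set the alpha bits in the pixel format/visual):

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_MULTISAMPLE);
glutInitWindowSize(w, h);
glutCreateWindow("scene with destination alpha");

GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);  // verify the framebuffer really got an alpha channel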

Otherwise, if you want to handle alpha-blended triangles, I suggest doing this:
clear to red, render the scene, and keep the rendered texture; then clear to green and render the scene again. Then create a shader to compute the color and alpha channel out of the differences between the two frames (some math required). This is like an advanced color-to-alpha decomposition.

OpenGL will create an alpha channel. You need to request it in your pixel format.

Thanks, but that was just an example. I want the silhouette image, not an alpha channel. 3dsmax just happens to put
it in an alpha channel to prepare it for bitmap export.

Otherwise, if you want to handle alpha-blended triangles, I suggest doing this:
clear to red, render the scene, and keep the rendered texture; then clear to green and render the scene again. Then create a shader to compute the color and alpha channel out of the differences between the two frames (some math required). This is like an advanced color-to-alpha decomposition.

Actually, how about if I just render the entire scene in white with lighting turned off?
That’ll give me color with anti-aliasing. Problem solved!
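Something like this, reusing RenderScene() from my first post (just a sketch, fixed-function pipeline assumed):

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // black background
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_MULTISAMPLE_ARB);           // MSAA smooths the silhouette edges
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);            // draw everything flat white
RenderScene();                          // same geometry as the normal pass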

Yup, that was my first suggestion, kind of.
But it won’t handle transparent objects :stuck_out_tongue:

I don’t know how to handle transparent objects either.
Maybe need to use a raytracer?

But luckily, my scene doesn’t use transparency, so I’m not worried.

I told you how to handle the transparency :). Render with different background-colors twice, and decompose. Somewhat like how movies use green background for CGI compositing, but reversed and with higher precision.

If you could point me to a webpage explaining this color-to-alpha decomposition technique in detail, I would
be grateful.

Well, I haven’t seen any page/info about this, but it’s fairly straightforward:

(let’s assume you draw the scene once over black background, then once again over white background)



/*
 * These are just sketch notes on how I derived the function, after doing the math.
 *
 * color1 = vec3(0,0,0); // black background
 * color2 = vec3(1,1,1); // white background
 *
 * unknown = vec4(0.1, 0.2, 0.3, 0.5); // the surface color we want to recover; alpha = 0.5
 *
 * xcolor1 = (0.05, 0.1, 0.15) = unknown*alpha + vec3(0,0,0)*(1-alpha) = unknown*alpha
 * xcolor2 = (0.55, 0.6, 0.65) = unknown*alpha + vec3(1,1,1)*(1-alpha) = unknown*alpha + vec3(1-alpha)
 *
 * xcolor2 - xcolor1 = vec3(1-alpha)
 * => alpha = 1 - (xcolor2.r - xcolor1.r)
 *
 * xcolor1 = unknown*alpha
 * => unknown = xcolor1/alpha // this division might look problematic, but it isn't:
 *                            // whenever alpha is tiny, xcolor1 is tiny too
 */

vec4 decompose(vec3 xcolor1, vec3 xcolor2){
	// xcolor1 is the pixel at the current position, of the scene rendered over a BLACK background
	// xcolor2 is the pixel at the current position, of the scene rendered over a WHITE background

	// per-channel alpha estimate; all three channels should agree, so only .x is used below
	vec3 alpha = vec3(1,1,1) - xcolor2 + xcolor1;

	// the small epsilon avoids division by zero; when alpha is tiny, xcolor1 is tiny too
	vec3 unknown = xcolor1/(alpha.x+0.001);

	return vec4(unknown, alpha.x);
}

uniform sampler2D sceneBlack,sceneWhite;
varying vec2 coord;

void main(){
	vec3 xcolor1 = texture2D(sceneBlack,coord).xyz;
	vec3 xcolor2 = texture2D(sceneWhite,coord).xyz;
	
	gl_FragColor = decompose(xcolor1,xcolor2);
}
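As a sanity check, here is the same math run on the CPU with the numbers from the sketch notes above (plain C++; the surface color and alpha are the hypothetical values from the comment):

#include <cstdio>

int main(){
    // hypothetical surface from the sketch notes: color (0.1, 0.2, 0.3), alpha 0.5
    const float surf[3] = {0.1f, 0.2f, 0.3f};
    const float a = 0.5f;

    float xcolor1[3], xcolor2[3];             // renders over black and over white
    for(int i = 0; i < 3; ++i){
        xcolor1[i] = surf[i]*a + 0.0f*(1-a);  // black background contributes nothing
        xcolor2[i] = surf[i]*a + 1.0f*(1-a);  // white background contributes (1-alpha)
    }

    float alpha = 1.0f - xcolor2[0] + xcolor1[0];  // recovered alpha: 0.500
    printf("alpha = %.3f\n", alpha);
    for(int i = 0; i < 3; ++i)                     // recovered color: 0.100 0.200 0.300
        printf("unknown[%d] = %.3f\n", i, xcolor1[i]/alpha);  // epsilon omitted for the check
    return 0;
}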

I did a small test to see if 3dsmax does transparency correctly.

I tried rendering a red plane (1,0,0) in 3dsmax that was 50% transparent,
and the alpha channel came out to be gray (.5,.5,.5), while the RGB channel came out as expected, (.5,0,0).

Now, already I know, if I import this bitmap w/alpha into Photoshop, and drop in a black background,
the final color is going to reduce to half of that (.25,0,0).

So, does this mean 3dsmax exported the scene incorrectly?

Thanks, I tried out your equation.

Let’s say I rendered a red plane (1,0,0) that is 50% transparent against a black background.
You could also render a red plane (.5,0,0) that is 100% opaque against a black background.

Both would look identical.

I could have either (1,0,0,.5) or (.5,0,0,1).

Problem is, I think your equation might make something transparent when it shouldn’t be, as in the cases above.


TEST1:
red plane (1,0,0)
50% transparent

If I rendered the scene normally against a black background, the color would be (.5,0,0).
If I rendered the scene as white, the color would be (.5,.5,.5).

alpha   = (1,1,1) - (.5,.5,.5) + (.5,0,0) = (1,.5,.5)

unknown = (.5,0,0) / (1+.001) = (.499,0,0)
result  = (.499,0,0,1)

TEST2:
red plane (.5,0,0)
100% opaque

If I rendered the scene normally against a black background, the color would be (.5,0,0).
If I rendered the scene as white, the color would be (1,1,1).

alpha   = (1,1,1) - (1,1,1) + (.5,0,0) = (.5,0,0)

unknown = (.5,0,0) / (.5+.001) = (.998,0,0)
result  = (.998,0,0,.5)

That’s exactly why you need 2 renders - one on black background, and another on white background.

You just miscalculated xcolor2 in TEST1.


case 1: rendering a 50% transparent red plane:
    xcolor1 = mix(vec3(1,0,0),black,0.5) = vec3(0.5,0,0) 
    xcolor2 = mix(vec3(1,0,0),white,0.5) = vec3(1,0.5,0.5) ** this is the correct value, not vec3(0.5,0.5,0.5)
    ... in decompose():
    vec3 alpha = vec3(1,1,1) - xcolor2 + xcolor1;
    => alpha = 0.5
    => resultRGB = xcolor1/(alpha+0.001) = vec3(1,0,0);
case 2: rendering an opaque plane with color 0.5,0,0:
    xcolor1 = mix(vec3(0.5,0,0),black,0) = vec3(0.5,0,0)
    xcolor2 = mix(vec3(0.5,0,0),white,0) = vec3(0.5,0,0)
    ; notice how xcolor1==xcolor2 -- the background never shows through
    ... in decompose():
    vec3 alpha = vec3(1,1,1) - xcolor2 + xcolor1;
    => alpha = 1
    => resultRGB = xcolor1/(alpha+0.001) = vec3(0.5,0,0);
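A quick CPU check of both cases (same C++ style as the earlier sanity check; over() is a little helper I’m introducing for the alpha blend):

#include <cstdio>

// standard "over" blend: result = src*alpha + background*(1 - alpha)
static float over(float src, float bg, float alpha){ return src*alpha + bg*(1.0f - alpha); }

int main(){
    // case 1: red plane (1,0,0) at 50% transparency; case 2: plane (0.5,0,0), fully opaque
    const float red[2]   = {1.0f, 0.5f};
    const float alpha[2] = {0.5f, 1.0f};

    for(int c = 0; c < 2; ++c){
        float x1 = over(red[c], 0.0f, alpha[c]);  // red channel rendered over black
        float x2 = over(red[c], 1.0f, alpha[c]);  // red channel rendered over white
        float a  = 1.0f - x2 + x1;                // recovered alpha
        printf("case %d: alpha = %.3f, red = %.3f\n", c + 1, a, x1/a);
    }
    return 0;   // prints alpha 0.500 / red 1.000, then alpha 1.000 / red 0.500
}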

That 3DSMax rendered image is overlaid on a black background, and the image-preview window doesn’t discard the alpha channel. I bet if you export the RGBA image to Photoshop, you’ll see that the color is actually, correctly, vec4(1,0,0,0.5).


Thanks, got it working! I had to export it as 32-bit Targa; it came out correctly as full red.
I tried .png before, but something was wrong with it.