Depth comparison

Hi,

I’m experiencing trouble with depth render targets. It seems I’m not the only one, and the shadow and shadow_funcs extension specs are among the least comprehensive specs/docs I’ve ever read.

I’m not looking for a ready-to-use answer, I would just like some explanations :)

I render to an FBO with a depth texture, and with another read-only, previously computed depth texture (which is bound to the shader as a sampler2DShadow uniform).

In the shader I would like to discard the fragment if the texture depth value is less than the fragment’s z (as in depth peeling).

I’ve understood that it can’t be done directly; I have to use comparison functions. So here comes the question: how do I set up these comparison functions? Here is my code:


    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);

But I don’t understand how I should use this in the shader. What will shadow2D() return? What am I supposed to put in the R texture coordinate? Why does shadow2D() return a vec4 and not a single float?

Thanks for your explanations :)

I don’t know why shadow2D returns a vec4 instead of a float, but the R coordinate is the fragment’s z, which is compared with the z value sampled in the shadow map at position (x,y).

See the section “Depth texture comparison mode”, page 190 of the 2.1 spec:

http://opengl.org/documentation/specs/


    uniform sampler2DShadow shadowTex;

    uniform float invWidth;  // 1.0/width  (shadow texture size)
    uniform float invHeight; // 1.0/height (shadow texture size)

    uniform float offsetX; // viewport lower left corner (int)
    uniform float offsetY; // viewport lower left corner (int)

    void main()
    {
        vec3 r0;
        // Map the window-space fragment position to [0,1] texture coordinates.
        r0.x = (gl_FragCoord.x - offsetX) * invWidth;
        r0.y = (gl_FragCoord.y - offsetY) * invHeight;
        // The R coordinate holds the depth value to be compared.
        r0.z = gl_FragCoord.z;
        r0.x = shadow2D(shadowTex, r0.xyz).x; // 0 or 1 with NEAREST filtering

        if ((r0.x - 0.5) < 0.0) // safe float-to-bool test
        {
            discard;
        }
        [...]
    }
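
For completeness, here is the texture-unit state this shader assumes on the C side (a sketch; shadowTexId is a placeholder for your depth texture object):

    // Comparison setup for the texture bound to the shadowTex sampler.
    glBindTexture(GL_TEXTURE_2D, shadowTexId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
    // NEAREST filtering keeps the comparison result a strict 0 or 1.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);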

By the way, can someone explain to me why GLSL 1.3 (OpenGL 3.0) still uses shadow maps? Can’t we do the same thing by loading a depth texture, sampling it at (x,y), and comparing its value with whatever we want in a shader?

Shadow maps have some fixed-pipeline flavor to me.

Thanks for the pointer.

Is that still correct today? I have quite an old card, so I can’t really test, but I could maybe test on a Quadro 1700 this weekend.
However, I created my 2D depth texture with a GL_DEPTH_COMPONENT24_ARB internal format, and it seems to be working (partially), so I don’t quite understand why Humus says I couldn’t do that.
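
For reference, the creation and FBO attachment look roughly like this (a simplified sketch; the variable names are placeholders, and the 512 size matches the shader below):

    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    // 24-bit depth texture; no data is uploaded, since it will be rendered to.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, 512, 512, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    // Attached as the FBO depth attachment (EXT_framebuffer_object).
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depthTex, 0);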

Anyway, I still don’t manage to compare my depth properly.


    if (shadow2D(z_near_tex, vec3(gl_FragCoord.x / 512.0, gl_FragCoord.y / 512.0, gl_FragCoord.z)).r > 0.5) {
        gl_FragColor = vec4(1, 0, 0, 1);
    } else {
        gl_FragColor = vec4(0, 0, 1, 1);
    }


    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_ALWAYS);

… and replacing GL_ALWAYS with GL_NEVER doesn’t change anything: everything stays blue.

By the way, can someone explain to me why GLSL 1.3 (OpenGL 3.0) still uses shadow maps? Can’t we do the same thing by loading a depth texture, sampling it at (x,y), and comparing its value with whatever we want in a shader?

Did you happen to notice that glslang also has a built-in function to do linear interpolation? And a built-in function to reflect vectors? And so on?

The reason is two-fold.

1: It gives the compiler a better chance of optimizing it. It’s much easier to tell that someone is doing a linear interpolation when they just call the function for it than when you have to figure it out from a parse tree (see the sketch after these two points).

2: It gives the compiler a better chance of using hardware for complicated operations. On nVidia hardware (if not ATi), shadow-compare accesses are done in the texture unit, not in compiled code. How are you, the user, going to know the right sequence of texture accesses and compares that the driver will detect and magically turn into the right set of hardware state? No, it’s better all around to just have a function (or in this case, a sampler).
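
To illustrate point 1, a trivial GLSL comparison (a sketch; both lines compute the same thing):

    uniform vec4 a, b;
    uniform float t;

    void main()
    {
        vec4 viaBuiltin = mix(a, b, t);          // obviously a lerp
        vec4 byHand     = a * (1.0 - t) + b * t; // same math, must be recognized in the parse tree
        gl_FragColor = viaBuiltin - byHand;      // identical results: renders black
    }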

Also, the shadow accessors don’t do what you described. By the OpenGL specification, they are required to do percentage-closer filtering (or something reasonably close to it). That is, comparing with the 4 texels around you and returning the average on a 0-to-1 scale.

Yes, you can replicate that in code. But see #1 and #2 above.
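
A manual replication might look something like this (a sketch: it assumes the depth map is bound as a plain sampler2D with TEXTURE_COMPARE_MODE set to NONE so raw depths come back, and the names and texel-size uniform are made up):

    uniform sampler2D depthMap; // depth texture, sampled as an ordinary 2D texture
    uniform vec2 invSize;       // 1.0 / (texture width, height)
    varying vec3 shadowCoord;   // x,y = texture coords, z = depth to compare

    void main()
    {
        // Compare against the 2x2 nearest texels and average the pass/fail
        // results: a hand-rolled percentage-closer filter.
        float sum = 0.0;
        for (int i = 0; i < 2; ++i)
        {
            for (int j = 0; j < 2; ++j)
            {
                vec2 uv = shadowCoord.xy + vec2(float(i), float(j)) * invSize;
                sum += (shadowCoord.z <= texture2D(depthMap, uv).r) ? 1.0 : 0.0;
            }
        }
        gl_FragColor = vec4(vec3(sum * 0.25), 1.0); // shadow factor in {0, 0.25, 0.5, 0.75, 1}
    }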

Just because it’s fixed function doesn’t make it bad.

Thanks for the explanation Korval.

The shadow accessors do what I described if the texture mag and min filters are set to NEAREST. But thanks for pointing this out, because I actually didn’t know about the linear case with shadow maps.

I just found the relevant section in the 2.1 spec, page 191:

“If the value of TEXTURE_MAG_FILTER is not NEAREST, or the value of TEXTURE_MIN_FILTER is not NEAREST or NEAREST_MIPMAP_NEAREST, then r may be computed by comparing more than one depth texture value to the texture R coordinate. The details of this are implementation-dependent, but r should be a value in the range [0, 1] which is proportional to the number of comparison passes or failures.”

To A. Masserann:

Make sure you actually set the values of the texture mag and texture min parameters.

He didn’t say you couldn’t do that. He was talking about float textures.

… and replacing GL_ALWAYS with GL_NEVER doesn’t change anything: everything stays blue.

That could happen if your texture is incomplete or disabled. Make sure the MIN filter for the texture is set not to use mipmaps (the default, GL_NEAREST_MIPMAP_LINEAR, leaves a texture without mipmaps incomplete).
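
For instance (assuming no mipmap levels have been defined for the depth texture):

    // Without mipmaps, a mipmapping MIN filter leaves the texture incomplete,
    // and an incomplete texture behaves as if it were disabled.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);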

shadow2D() returns a vec4 to replicate the effect of setting DEPTH_TEXTURE_MODE_ARB. On hardware that actually uses vec4 ALUs and registers this might make some sense. Note that in GLSL 1.3, shadow1D|2D and texture1D|2D|3D|Cube have been deprecated in favour of a generic texture() function, which returns a float when passed a sampler2DShadow.
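
For example, the earlier lookup written against GLSL 1.30 might look like this (a sketch with made-up names):

    #version 130

    uniform sampler2DShadow shadowTex;
    in vec3 shadowCoord; // x,y = texture coords, z = reference depth
    out vec4 fragColor;

    void main()
    {
        // The generic texture() overload for shadow samplers returns a single float.
        float result = texture(shadowTex, shadowCoord);
        if (result < 0.5)
            discard;
        fragColor = vec4(result);
    }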