sampler2DShadow

I defined a pbuffer to contain depth values for a rendered scene. This pbuffer is bound as a texture, but how do I access the depth values within my fragment program?

I read something about a comparison value in the third coordinate of the vec3 that accesses the sampler, but all I want to do is read the depth value from my texture.

How do I do this? This is my attempt:

uniform sampler2DShadow depthMap;

void main()
{
	vec4 col = shadow2D(depthMap, vec3(gl_FragCoord.x, gl_FragCoord.y, 0.0));
	gl_FragColor = vec4(col.x, col.y, col.z, 1.0);
	gl_FragDepth = gl_FragCoord.z;
}

If you just want to read the depth value, it’s probably safer to use texture2D instead of shadow2D. shadow2D reads the depth value like texture2D, but then also does a depth comparison.

If you don’t have your shadow texture set up for depth comparisons, then I think shadow2D will behave just like texture2D by not doing the comparison. However, this is not a clean way to do things and I don’t know if you can expect all drivers to behave this way.
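
For the plain read, here is a minimal sketch of the texture2D route (assuming the texture's GL_TEXTURE_COMPARE_MODE is GL_NONE; screenWidth and screenHeight are hypothetical uniforms used to normalize gl_FragCoord, since texture2D expects 0..1 coordinates):

uniform sampler2D depthMap;
uniform float screenWidth;   // hypothetical uniforms: texture2D wants
uniform float screenHeight;  // normalized coords, unlike gl_FragCoord

void main()
{
	vec2 uv = gl_FragCoord.xy / vec2(screenWidth, screenHeight);
	float depth = texture2D(depthMap, uv).x; // raw 0..1 depth-buffer value
	gl_FragColor = vec4(vec3(depth), 1.0);   // visualize as greyscale
	gl_FragDepth = gl_FragCoord.z;
}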

So you mean it is OK if I use sampler2D instead of sampler2DShadow?

And how is the depth value stored? Does every color component contain the same depth value, or how does it work?

thanks

I haven’t read anything that says GL_DEPTH_COMPONENT textures must be accessed with sampler2DShadow. sampler2D should also work. There’s one good way to find out, though.

A depth texture is stored as one channel, but that same value is duplicated into all four channels when you read it into your shader. At least, that’s what I have observed. Again, just try it and see what you get.

thank you again, I will try and see

mogumbo is correct, when you read from a depth texture, the r, g, b, a (or x, y, z, w, etc) components will all be set to the same value.
The value itself will lie in the range 0 to 1, with 0 being the near clip plane, and 1 the far clip plane, but note that the value is non-linear. If you want to convert it to a true scene depth value, you’ll need to pull a few values out of the projection matrix and do some calculations.
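
For example, a minimal sketch of that conversion for a standard perspective projection (near and far are hypothetical uniforms holding your clip-plane distances, which are the values you'd pull out of the projection matrix; texture coordinates are assumed to arrive in gl_TexCoord[0]):

uniform sampler2D depthMap;
uniform float near;  // hypothetical uniforms: your near/far clip distances
uniform float far;

void main()
{
	float d = texture2D(depthMap, gl_TexCoord[0].st).x; // 0..1 buffer value
	float z_ndc = d * 2.0 - 1.0;                        // back to -1..1
	// invert the projection's non-linear z mapping; z_eye is the true
	// distance in front of the camera, in scene units
	float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
	gl_FragColor = vec4(vec3(z_eye / far), 1.0);        // scaled for display
}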

Peter

what do you mean by that?

The first sentence, or the second sentence?

Peter

:smiley: the second one, the transformations …

my idea was to use the depth value in conjunction with some point in viewport coordinates (x,y) to form (x,y,depth,1) and transform them back to world space

Originally posted by powerpad:
my idea was to use the depth value in conjunction with some point in viewport coordinates (x,y) to form (x,y,depth,1) and transform them back to world space
That’s a better (and more generic) way of putting it :slight_smile:

What I was trying to say is that if you don’t need the world x/y coordinates, you can transform just the Z, and for that you just need two values from the projection matrix.

Either way, if I remember correctly, the Z value ends up between -1 and 1 after being multiplied by the projection matrix (anything else is outside the near and far clipping planes). This value is then mapped to the range 0 - 1 before it’s stored in the depth buffer. So if you read a depth buffer value, you will need to do a

z = z * 2 - 1

before you reproject it back to world coordinates.
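
Putting it all together, a minimal sketch of powerpad's idea (invViewProj is a hypothetical uniform holding the inverse of projection * modelview, and viewportSize a hypothetical uniform with the window dimensions in pixels):

uniform sampler2D depthMap;
uniform mat4 invViewProj;   // hypothetical: inverse(projection * modelview)
uniform vec2 viewportSize;  // hypothetical: window width/height in pixels

void main()
{
	vec2 uv = gl_FragCoord.xy / viewportSize;
	float d = texture2D(depthMap, uv).x;
	// map window-space (x, y, depth) into -1..1 normalized device coords
	vec4 ndc = vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);
	// undo the projection and view transforms, then the perspective divide
	vec4 world = invViewProj * ndc;
	world /= world.w;
	gl_FragColor = vec4(world.xyz, 1.0); // world-space position of the pixel
}

The d * 2.0 - 1.0 in there is exactly the remapping described above.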

Peter

I do remember something similar (the transformation that does the viewport mapping does something like this, only in reverse, taking z from -1…1 to 0…1)
