I defined a pbuffer to contain depth values for a rendered scene. This pbuffer is bound as a texture, but how do I access the depth values within my fragment program?
I read something about a comparison value in the third coordinate of the vec3 that is passed to the sampler, but all I want to do is read the depth value from my texture.
If you just want to read the depth value, it’s probably safer to use texture2D instead of shadow2D. shadow2D reads the depth value like texture2D, but then also does a depth comparison.
If you don’t have your shadow texture set up for depth comparisons, then I think shadow2D will behave just like texture2D by not doing the comparison. However, this is not a clean way to do things and I don’t know if you can expect all drivers to behave this way.
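Reading the raw depth with texture2D might look like the following minimal sketch, assuming the depth texture has GL_TEXTURE_COMPARE_MODE left at GL_NONE; the names depthTex and texCoord are placeholders, not anything from the original post:

```glsl
uniform sampler2D depthTex;   // depth pbuffer bound as a texture, compare mode GL_NONE
varying vec2 texCoord;        // interpolated texture coordinates from the vertex shader

void main()
{
    // All channels hold the same depth value, so reading .r is enough.
    float depth = texture2D(depthTex, texCoord).r;
    gl_FragColor = vec4(vec3(depth), 1.0); // visualize the raw [0,1] depth
}
```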
I haven’t read anything that says GL_DEPTH_COMPONENT textures must be accessed with sampler2DShadow. sampler2D should also work. There’s one good way to find out, though.
A depth texture is stored as one channel, but that same value is duplicated into all four channels when you read it into your shader. At least, that’s what I have observed. Again, just try it and see what you get.
mogumbo is correct, when you read from a depth texture, the r, g, b, a (or x, y, z, w, etc) components will all be set to the same value.
The value itself will lie in the range 0 to 1, with 0 being the near clip plane, and 1 the far clip plane, but note that the value is non-linear. If you want to convert it to a true scene depth value, you’ll need to pull a few values out of the projection matrix and do some calculations.
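As a sketch of that conversion, assuming a standard perspective projection whose near and far plane distances are passed in as uniforms (zNear and zFar are placeholder names):

```glsl
uniform float zNear; // near clip plane distance
uniform float zFar;  // far clip plane distance

// Convert a [0,1] depth-buffer value to a linear eye-space distance in [zNear, zFar].
float linearizeDepth(float depth)
{
    float zNdc = depth * 2.0 - 1.0; // window [0,1] -> NDC [-1,1]
    return (2.0 * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
}
```

At depth = 0 this returns zNear and at depth = 1 it returns zFar, which is a quick sanity check for the formula.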
my idea was to use the depth value in conjunction with some point in viewport coordinates (x,y) to form (x,y,depth,1) and transform them back to world space
Originally posted by powerpad: my idea was to use the depth value in conjunction with some point in viewport coordinates (x,y) to form (x,y,depth,1) and transform them back to world space
That’s a better (and more generic) way of putting it.
What I was trying to say is that if you don’t need the world x/y coordinates, you can transform just the Z, and for that you just need two values from the projection matrix.
Either way, if I remember correctly, the Z value ends up in the range -1 to 1 after the projection matrix and perspective divide (anything else is outside the near and far clipping planes). This value is then mapped to the range 0 - 1 before it’s stored in the depth buffer. So if you read a depth buffer value, you will need to do a
z = z * 2 - 1
before you reproject it back to world coordinates.
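Putting the thread together, the full reconstruction might be sketched like this, assuming the inverse of the combined projection * view matrix is supplied as a uniform (invViewProj is a placeholder name) and that x, y, and depth are all in the [0,1] window range:

```glsl
uniform mat4 invViewProj; // inverse of projection * view, computed on the CPU

// x, y: fragment position divided by viewport size; depth: value read from the depth texture
vec3 reconstructWorldPos(float x, float y, float depth)
{
    // Remap all three coordinates from [0,1] window space to [-1,1] NDC
    // (this is the z * 2 - 1 step above, applied to x and y as well).
    vec4 ndc = vec4(vec3(x, y, depth) * 2.0 - 1.0, 1.0);
    vec4 world = invViewProj * ndc;
    return world.xyz / world.w; // undo the perspective divide
}
```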