PDA

View Full Version : Eye space position from depth texture

bobsyouruncle
12-17-2006, 04:30 PM
Hello, I have a depth buffer from which I would like to compute, in a GLSL fragment shader, the eye-space position that each depth value represents. Does anybody know of a good explanation or demo that describes how to do this?

nom
12-19-2006, 06:13 AM
Do you need to compute it inside the fragment shader, or can you store it as a variable that the shader uses?

I would compute it outside the shader and pass it in as a variable.

If you can use regular GL, you can simply read three points from the depth buffer, from the far corners, and unproject them, which will give you three points in space. I'm not sure, but I think projecting back can give you the distance from the near clipping plane (the z value). Since you then have three points in space and the distances from the eye to those three points, triangulation could do the work. This may not be fast, though, if you need it every frame.
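The unproject step described above can be sketched on the CPU; here is a minimal Python sketch of what gluUnProject does for a standard GL perspective matrix and the default [0, 1] depth range (the fovy/aspect parameters are illustrative assumptions, not from the thread):

```python
import math

def project(p_eye, fovy_deg, aspect, near, far):
    """Eye space -> (ndc x, ndc y, [0,1] window depth),
    using a standard GL perspective matrix."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    x, y, z = p_eye                       # z < 0 in front of the eye in GL
    a = (far + near) / (near - far)       # projection matrix [2][2]
    b = 2.0 * far * near / (near - far)   # projection matrix [2][3]
    w = -z                                # clip-space w
    return ((f / aspect) * x / w,
            f * y / w,
            0.5 * ((a * z + b) / w) + 0.5)

def unproject(win, fovy_deg, aspect, near, far):
    """Invert project(): recover the eye-space point."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    xn, yn, d = win
    a = (far + near) / (near - far)
    b = 2.0 * far * near / (near - far)
    zn = 2.0 * d - 1.0                    # window depth -> NDC depth
    z = -b / (zn + a)                     # eye-space z (negative)
    return (xn * -z * aspect / f, yn * -z / f, z)
```

Round-tripping a point through project() and unproject() recovers it exactly, which is a handy sanity check before moving the math into a shader.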

Jackis
12-19-2006, 07:19 AM
bobsyouruncle

the math is rather simple, if you know how the projection matrix is defined and how perspective division works. The simplest solution is to multiply back by the inverse projection matrix, together with a perspective division. But the way described here is more robust, I hope. (By the way, my shaders are Cg ones, not GLSL.)

I don't want to explain it in full detail right now, sorry. I can do that later if needed, but it is very simple.

First, you need to obtain the real depth from the normalized device depth.

// firstly, expand normalized device depth
// DepthMap - rectangular screen depth texture
// inScrPos - WPOS semantics, XY - pixel viewport coords
float storedDepth = f1texRECT(DepthMap, inScrPos).x;

// get real depth, in meters
// NearFarSettings: X - far, Y - (far-near), Z - far*near
float realDepth = NearFarSettings.z / (NearFarSettings.x - storedDepth * NearFarSettings.y);

Then, with the known pixel position and its already-converted depth, you need to reconstruct the eye-space coordinates (by the way, there is no difference if you want to reconstruct object or world coordinates instead):

// LeftDown: left-down corner (-tan_h, -tan_v, 1) (-1 is here for depth multiplication)
// ToRightUp: to right-up corner, divided by viewport sizes (2*tan_h/size_x, 2*tan_v/size_y, 0)

// calculating eye-space vector position
float3 eyeSpaceVec = (LeftDown + inScrPos * ToRightUp) * realDepth;
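The two steps above (linearize the stored depth, then scale the per-pixel corner vector) can be checked outside the shader. Here is a minimal Python sketch of the same math, assuming the default [0, 1] depth range, symmetric FOV angles, and GL eye space looking down -Z (so the corner vector's Z component is -1):

```python
import math

def eye_pos_from_depth(px, py, stored_depth, near, far,
                       fov_h_deg, fov_v_deg, size_x, size_y):
    """Reconstruct the eye-space position of pixel (px, py) from its
    [0,1] depth-buffer value, mirroring the shader math above."""
    # NearFarSettings: x = far, y = far - near, z = far * near
    nf_x, nf_y, nf_z = far, far - near, far * near
    real_depth = nf_z / (nf_x - stored_depth * nf_y)  # meters along -Z

    tan_h = math.tan(math.radians(fov_h_deg) / 2.0)
    tan_v = math.tan(math.radians(fov_v_deg) / 2.0)
    # LeftDown / ToRightUp as in the post, with Z = -1
    left_down = (-tan_h, -tan_v, -1.0)
    to_right_up = (2.0 * tan_h / size_x, 2.0 * tan_v / size_y, 0.0)

    # eyeSpaceVec = (LeftDown + inScrPos * ToRightUp) * realDepth
    return tuple((ld + p * tr) * real_depth
                 for ld, p, tr in zip(left_down, (px, py, 0.0), to_right_up))
```

For example, the center pixel of an 800x600 viewport at a linearized depth of 10 m should land on the view axis at z = -10.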

bobsyouruncle
12-19-2006, 10:37 PM
Thank you Jackis.

I still haven't figured out what was wrong with my calculations. From a cursory look, it seems equivalent, except that I was generating coordinates based on the center of the screen (which used more ops anyway).

I am now able to verify the unprojection of these points by recalculating their depths and displaying that.

The only problem now is that my lights are swimming around as I rotate left and right, but I can work out that one myself.

Thanks again Jackis.

Bob

Jackis
12-20-2006, 12:23 AM
You are welcome.

By the way, I have a little misprint there - in 'LeftDown' the Z component must be equal to -1; I forgot that the GL eye-space Z axis points backwards.
But the method is rather clear, I hope.

And I also assumed that the depth range is the default one, from 0 to 1. A different depth range would surely change the reconstruction formula.
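For completeness, a one-line sketch of that caveat: if the buffer was written under a non-default glDepthRange, map the stored value back to [0, 1] first, so the reconstruction formula above still applies (parameter names here are illustrative):

```python
def normalize_stored_depth(stored, range_near=0.0, range_far=1.0):
    """Map a depth-buffer value written under
    glDepthRange(range_near, range_far) back to the default [0, 1]
    range expected by the linearization formula."""
    return (stored - range_near) / (range_far - range_near)
```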

Maybe you could post some screenshots of your problems, so we can see what's wrong.

zed
12-27-2006, 11:11 PM
I've tried the above method and the shader doesn't like it (though it looks like it works),
but I'm seeing major precision errors.
I'm using near = 1.0 meters and far = 1000.0 m with a 24-bit depth texture.
The people are in the range of 20 - 120 meters distance.

http://www.zedzeek.com/junk/preciscion_error.jpg
btw note the optical illusion of Mach banding there (the grey bands are in fact the same color but look lighter at the bottom)

Jackis
12-28-2006, 04:46 AM
It seems you didn't specify the depth texture correctly. I mean, you have to disable any filtering and set appropriate addressing in order not to fall back to an 8 bpp path when sampling from it.

Make sure you set the minification and magnification filters to GL_NEAREST and the wrap mode to GL_CLAMP_TO_EDGE (the last one is not obligatory, but just in case :)

Banding must go away.

zed
12-28-2006, 08:29 AM
You were correct, thanks.
There should be a big table of gotchas, e.g. trying to display a depth texture with GL_COMPARE_R_TO_TEXTURE set.
I thought all the filters of the textures attached to an FBO had to be the same, but it seems to work with depth (nearest) + color (linear).

GL_LINEAR works well on a depth texture (with depth comparisons, e.g. shadow maps); it only drops quality when you render the depth texture itself. I assume this has something to do with PCF.

Jackis
12-29-2006, 12:50 AM
I think these filtering problems go back to the specifications of SGIX_depth_texture and so on, but I can't say exactly, because a quick glance through the specification didn't answer the question.
But AFAIK depth texture filtering through a simple texture lookup is quite limited, especially on FX hardware, where it is impossible as far as I know.

zed
12-29-2006, 10:37 AM
I'm pretty sure the GeForce FX does support depth filtering, at least I remember it did.
Another thing that got me was the following (as an example):

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE );

Now the first parameter can be GL_TEXTURE_2D, GL_TEXTURE_1D, GL_TEXTURE_3D or
GL_TEXTURE_RECTANGLE_ARB.
Since I was swapping between 2D and rectangle textures, I kept forgetting to change the first parameter, so the texture wouldn't display.
Personally I don't see why you need to specify it.
I believe the following should be used:
glTexParameteri( GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE );
which would operate on the current texture, whatever it is (1D, 2D, etc.)

Brolingstanz
12-31-2006, 12:54 AM
personally i dont see why u need to specify it.
I think it's because you can have all texture targets enabled at once, so there's an ambiguity of sorts if it's not specified explicitly.

zed
12-31-2006, 09:55 AM
That's another thing that should go as well: 3D -> cubemap -> texture rectangle -> 2D -> 1D.
Why would you want a 3D texture and a 2D texture bound to the same unit?
Hopefully this unnecessary complexity will be gone in GL 3.0.

Vaticanfox
01-09-2007, 06:57 AM
Hi, I have a quick question that is somewhat related to this topic. Can you access a fragment's depth value inside the fragment shader? I'm using Cg, and I see the WPOS binding semantic; would its z value be the fragment's normalized depth value?

Jackis
01-09-2007, 11:25 PM
Yes, WPOS.z is the normalized device depth.