tex matrix to eye space

so in a Cg fragment program i want to convert a point (x, y, depth), where x and y are tex coords in the range [0,1], to eye space (coords relative to the location/orientation of the camera). to do this, i make my texture matrix:
texMat = biasMatrix * cameraProjMatrix
and then i invert it.
so, when i take my point (x tex coord, y tex coord, depth) and multiply it by the inverted texture matrix, shouldn’t this give me the eye coordinates? something isn’t working here =(
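
here's a rough sketch of the unproject i have in mind (names made up; invTexMat would be inverse(biasMatrix * cameraProjMatrix), computed on the CPU since Cg has no built-in 4x4 inverse, and passed in as a uniform):

float4 unprojectToEye(float2 texCoord, float depth, float4x4 invTexMat)
{
    // build the texture space point (x, y, depth, 1)
    float4 texSpacePos = float4(texCoord, depth, 1.0);
    // back through the inverted bias * projection matrix
    float4 eyePos = mul(invTexMat, texSpacePos);
    // homogeneous divide, since w != 1 in general
    return eyePos / eyePos.w;
}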

This depends on where your coordinates are in the first place.

The matrix transforms from one space to another, so the key question is what your coords are to begin with.

Typically a texgen could be used directly to give you eye space coordinates without transforming through any matrix.

Loading the projection matrix on the stack would then divide x and y by z (after the homogeneous divide), giving you screen space coordinates, not eye space.

If you’re using ARB_fp or NV_fp, why not just use the fragment position directly? You can transform {x, y, z, 1}, where {x,y,z} = fragment.position.xyz, by the inverse of the concatenated projection_viewportdepthrange matrix.

The result will be a homogeneous vector in eye space. In general w != 1.0, so you’ll need to divide by w to get a non-homogeneous point.
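
A minimal sketch in Cg, assuming the application computes invProjWindow = inverse(viewportDepthRange * projection) on the CPU and passes it in as a uniform (the names here are made up):

float4 main(float4 windowPos : WPOS,
            uniform float4x4 invProjWindow) : COLOR
{
    // {x,y,z} = fragment window position, w forced to 1
    float4 eyePos = mul(invProjWindow, float4(windowPos.xyz, 1.0));
    // divide by w to get the non-homogeneous eye space point
    // (returned as a color only so the sketch compiles as an entry point)
    return eyePos / eyePos.w;
}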

I’ve given this same answer on a different thread. Is this a duplicate post, or is there just suddenly a lot of interest in unprojecting fragment position into eye space?

Cass

ok, so i’m rendering the scene from the point of view of the light, and i want the position of the fragments in the light’s eye space. so in the fragment program, i am using the wpos (window position) semantic to get the (x, y, depth) triplet of each fragment. then i divide the x and y values by the screen dimensions to get an (x tex coord, y tex coord, depth) triplet. next i multiply by my inverted texture matrix. the texture matrix is just: biasMatrix * lightProjMatrix.

cass,
hey, i just found the other thread… so there must just be a lot of interest =) i guess i don’t entirely understand your response. you say to multiply the fragment position by the inverse of the concatenated projection_viewportdepthrange matrix – what exactly is the viewportdepthrange matrix? you don’t mean the view transform, do you? that would take the point back to world space.

Originally posted by lost hope:
cass,
hey, i just found the other thread… so there must just be a lot of interest =) i guess i don’t entirely understand your response. you say to multiply the fragment position by the inverse of the concatenated projection_viewportdepthrange matrix – what exactly is the viewportdepthrange matrix? you don’t mean the view transform, do you? that would take the point back to world space.

the “viewportdepthrange” matrix is just the scale/bias matrix that usually converts
x: [-1,1] -> [0,w]
y: [-1,1] -> [0,h]
z: [-1,1] -> [0,1]

The actual scale/bias mappings depend on the current glViewport() and glDepthRange() state, but usually the viewport covers the whole window and the depth range is the full [0,1].
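
As a sketch in Cg (assuming glViewport(0, 0, w, h) and glDepthRange(0, 1), with row-major construction), that matrix is just:

float4x4 viewportDepthRange(float w, float h)
{
    // x: [-1,1] -> [0,w],  y: [-1,1] -> [0,h],  z: [-1,1] -> [0,1]
    return float4x4(0.5*w, 0.0,   0.0,  0.5*w,
                    0.0,   0.5*h, 0.0,  0.5*h,
                    0.0,   0.0,   0.5,  0.5,
                    0.0,   0.0,   0.0,  1.0);
}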

Does that make sense? It’s essentially the clip space to window space transform.

Thanks -
Cass

hey cass,
yeah, that makes sense. and in fact, i believe i’m already doing that. my matrices are as follows:

biasMatrix =

0.5000         0         0    0.5000
     0    0.5000         0    0.5000
     0         0    0.5000    0.5000
     0         0         0    1.0000

lightProjectionMatrix =

1.7321         0         0         0
     0    1.7321         0         0
     0         0   -1.5000 -250.0000
     0         0   -1.0000         0

texMatrix = biasMatrix * lightProjectionMatrix

texMatrix =

0.8660         0   -0.5000         0
     0    0.8660   -0.5000         0
     0         0   -1.2500 -125.0000
     0         0   -1.0000         0

invertedTexMatrix = inv(texMatrix)

invertedTexMatrix =

1.1547         0         0   -0.5774
     0    1.1547         0   -0.5774
     0         0         0   -1.0000
     0         0   -0.0080    0.0100

so the final “invertedTexMatrix” is what i’m multiplying my positions by: (x fragment screen coord / screenWidth, y fragment screen coord / screenHeight, fragment depth in the range [0,1], 1). but as you can see by looking at the invertedTexMatrix, every transformed z comes out to -1 (since the 3rd row is 0, 0, 0, -1), and even after i divide by w, i’m still getting all negative z values… which doesn’t make sense, as in the light’s eye space, all the points are in front of the light?
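
as a concrete check of those numbers, here’s what that matrix does to a made-up sample fragment at the center of the map with depth 0.5:

// sample input: texCoord = (0.5, 0.5), depth = 0.5 (made-up values)
float4 p = mul(invertedTexMatrix, float4(0.5, 0.5, 0.5, 1.0));
// p     = (0.0, 0.0, -1.0,    0.006)
// p/p.w = (0.0, 0.0, -166.67, 1.0  )
// for reference: OpenGL eye space conventionally has the camera looking
// down the -z axis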


also, just for the sake of clarity, this is the Cg code i’m using for finding the eye space depth:

//----------------------------------------------------------------
// GENERATE THE DISTANCE TO THE RECEIVER
//----------------------------------------------------------------
// Grab the stored depth value [0,1] from the shadow map
float2 texCoord = float2(windowPos.x / screenWidth, windowPos.y / screenHeight);
float4 zShdwMap = tex2D (shadowMap, texCoord);

// Determine the eye space location of the receiver by multiplying
// the texture space location by the inverted texture matrix.
float4 txtSpacePos = float4(texCoord.x, texCoord.y, zShdwMap.x, 1);
float4 eyeSpacePos = mul (invertedTexMat, txtSpacePos);

// Determine the z distance to the eye space point (guarding against w == 0).
if (eyeSpacePos.w == 0) ++error;
else receiverDepth = eyeSpacePos.z / eyeSpacePos.w;
//----------------------------------------------------------------

the receiverDepth is always coming out negative, even though the points are in front of the light.
