eye vector with ARBfp

I need to get the eye vector in my fp.

So I thought I could use fragment.position
plus an offset (e.g. {-320.0, -240.0, 0.0, 0.0}) to get the position centered.

code:

TEMP vdir, dot;
PARAM center = { -320.0, -240.0, 0.0, 0.0 };

MOV vdir,fragment.position;    # window-space fragment position
ADD vdir,vdir,center;          # re-centre around the middle of the viewport
DP3 dot,vdir,vdir;             # squared length
RSQ dot,dot.x;                 # reciprocal length
MUL vdir,vdir,dot;             # normalise

to finally get a normalised eye vector.
Is this correct, or is there another way? :slight_smile:

Ozzy, before normalizing it, you need to unproject it …

The way I do it is funny:

VERTEX PROGRAM

# Compute the unnormalized eye vector, and output the reciprocal of its norm in .w
ATTRIB inPosition = vertex.position ;   # assumed binding: object-space vertex position
PARAM inEyePosition = program.env[10] ;
OUTPUT outEyeVector = result.texcoord[3] ;
PARAM WorldTransform0 = program.env[0] ;
PARAM WorldTransform1 = program.env[1] ;
PARAM WorldTransform2 = program.env[2] ;
TEMP    tmp ;
DP3     tmp.x, WorldTransform0, inPosition ;   # world-space position (note: DP3 drops any
DP3     tmp.y, WorldTransform1, inPosition ;   # translation in the matrix rows; use DP4 if
DP3     tmp.z, WorldTransform2, inPosition ;   # WorldTransform carries a translation)
ADD     tmp, inEyePosition, -tmp ;             # vector from the surface point to the eye
DP3     tmp.w, tmp, tmp ;
RSQ     tmp.w, tmp.w ;                         # reciprocal length goes out in .w
MOV     outEyeVector, tmp ;

All the computations take place in world space

FRAGMENT PROGRAM

ATTRIB inEyeVector = fragment.texcoord[3] ;
TEMP    R1 ;
MUL     R1, inEyeVector, inEyeVector.w ;   # rescale by the interpolated reciprocal length

This way you get an interpolated and almost normalized eye vector in the fragment program. If you want it better, you can simply normalize it in the fragment program, but it’s 2 more instructions I couldn’t afford.

My own little trick, FWIW …

SeskaPeel.

Thanks for the world-space stuff, but I thought using camera space was sufficient to get a unit vector from the camera to the fragment?

You need to transform the window-space position (x,y,z,w), where x,y,z come from fragment.position, and w == 1.

The matrix you need to transform this 4-vector by is the inverse of the concatenated projection * viewport/depthrange matrix.

The result will be a homogeneous 4-vector in eye-space with w != 1.0 in general. So you’ll need to divide by w if you want a non-homogeneous position. Since you just want an eye vector, you can simply normalize the resulting (x,y,z) without worrying about w.
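Written out as plain C, the per-point math is just this (a rough sketch with illustrative names; invWP is assumed to already hold the inverse described above, stored as four rows):

#include <math.h>

/* Sketch: turn a window-space point into a unit eye-space direction.
   invWP = inverse(viewport/depthrange * projection), stored row by row. */
void window_to_eye_dir(const float invWP[4][4],
                       float xw, float yw, float zw,
                       float dir[3])
{
    const float win[4] = { xw, yw, zw, 1.0f };
    float eye[4];
    float len;
    int i, j;

    for (i = 0; i < 4; ++i) {                 /* eye = invWP * win */
        eye[i] = 0.0f;
        for (j = 0; j < 4; ++j)
            eye[i] += invWP[i][j] * win[j];
    }

    /* Homogeneous result: for a direction we can skip the divide by w
       and normalize (x, y, z) directly. */
    len = sqrtf(eye[0]*eye[0] + eye[1]*eye[1] + eye[2]*eye[2]);
    dir[0] = eye[0] / len;
    dir[1] = eye[1] / len;
    dir[2] = eye[2] / len;
}

The fragment program version is the same thing: four DP4s against the rows of the inverse, then the usual DP3/RSQ/MUL to normalize.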

Hope this helps -
Cass

Use a 16-bit-per-channel fixed-point normalisation cube map. The texture doesn’t need to be very big, thanks to filtering in 16 bits. In the VP compute texcoord = vertex_position - eye_position, then simply sample the cube map in the FP.

for nVidia: use signed HILO format
for ATi: use LUMINANCE16_ALPHA16 (+manually derive z component) or GL_RGB16 format
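For reference, filling such a normalization cube map on the CPU looks roughly like this (a sketch using the plain GL_RGB16 path mentioned above; function names are illustrative, and the cube-map enums may need glext.h or the _ARB suffixes depending on your headers):

#include <GL/gl.h>
#include <math.h>

/* Map a face index and face-local coords in [-1,1] to the direction that
   texel represents, following the standard cube map face layout. */
static void face_dir(int face, float sc, float tc, float d[3])
{
    switch (face) {
    case 0: d[0] =  1.0f; d[1] = -tc;   d[2] = -sc;   break;  /* +X */
    case 1: d[0] = -1.0f; d[1] = -tc;   d[2] =  sc;   break;  /* -X */
    case 2: d[0] =  sc;   d[1] =  1.0f; d[2] =  tc;   break;  /* +Y */
    case 3: d[0] =  sc;   d[1] = -1.0f; d[2] = -tc;   break;  /* -Y */
    case 4: d[0] =  sc;   d[1] = -tc;   d[2] =  1.0f; break;  /* +Z */
    default:d[0] = -sc;   d[1] = -tc;   d[2] = -1.0f; break;  /* -Z */
    }
}

/* Fill one size x size GL_RGB16 image per face; buf holds size*size*3 shorts.
   Each normalized direction is scale/biased from [-1,1] into [0,1]. */
void build_normalization_cubemap(int size, unsigned short *buf)
{
    int face, s, t;
    for (face = 0; face < 6; ++face) {
        for (t = 0; t < size; ++t)
            for (s = 0; s < size; ++s) {
                float sc = 2.0f * (s + 0.5f) / size - 1.0f;
                float tc = 2.0f * (t + 0.5f) / size - 1.0f;
                float d[3], len;
                unsigned short *p = buf + 3 * (t * size + s);
                face_dir(face, sc, tc, d);
                len = sqrtf(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
                p[0] = (unsigned short)((d[0] / len * 0.5f + 0.5f) * 65535.0f);
                p[1] = (unsigned short)((d[1] / len * 0.5f + 0.5f) * 65535.0f);
                p[2] = (unsigned short)((d[2] / len * 0.5f + 0.5f) * 65535.0f);
            }
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGB16,
                     size, size, 0, GL_RGB, GL_UNSIGNED_SHORT, buf);
    }
}

In the FP the lookup is then a single TEX with the CUBE target on the interpolated (unnormalized) vector, plus a MAD to expand the [0,1] values back to [-1,1].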

Thanks SeskaPeel and MZ, but I would like to compute this vector inside the fp only; I would prefer to avoid lookup tables and a vp whenever possible.

Originally posted by cass:

The matrix you need to transform this 4-vector by is the inverse of the concatenated projection * viewport/depthrange matrix.

Window-space, yep (sorry).
Can I do all of this in an ARBfp?
Maybe using the ‘Matrix Property Bindings’ described in the spec?


Ozzy, maybe you could tell us what you intend to do with your eye vector ?

Well, we are casting rays from the fragments of a bounding box into a texture3D for volume rendering. It works pretty well, but the unit eye vector we need (which gives the vdir used to step through the texture) looks crappy at the moment. ^^

you simply need to implement gluUnproject into your fp, as cass said

Originally posted by tellaman:
you simply need to implement gluUnproject into your fp, as cass said

Not that easy for me, but I will try to do it ;-/

What about the fragment.position coords: are they already normalised to [-1,1], or do I need to do it myself using the viewport info?

Originally posted by Ozzy:
What about the fragment.position coords: are they already normalised to [-1,1], or do I need to do it myself using the viewport info?

No, they’re in window coordinates, so you need to generate the matrix that does the biasing, multiply it with the projection matrix, then invert that.

Piece of cake, as long as you have a simple linear algebra helper library.

Cass

I must have missed something in your explanation, cass. :-/

I have tried two different ways:

  1. local parameters: inv(modelview * projection).
  2. matrix bindings: state.matrix.projection.inverse.

Both give different results as far as I can see, but neither gives the right vector I need for the scan inside the 3D texture.

In the code below, fragment.position is normalised, transformed by (1) or (2), normalised again, and finally scaled to the texture w,h,d (here 64x64x64).

-> code:

TEMP texCoords;
TEMP res;
TEMP empty;
TEMP hit;
TEMP realPos;
TEMP foundPos;
TEMP vdir;
TEMP vdirx;
TEMP vdiry;
TEMP vdirz;
TEMP vdirw;
TEMP vdirTex;
TEMP dot;
TEMP temp;
PARAM startCoords = { 1.0, 1.0, 1.0, 1.0};
PARAM hitDone = { -1.0, -1.0, -1.0, -1.0 };
PARAM vide = { 0.0, 0.0, 0.0, 0.0 };
PARAM viewport={0.0015625,0.00209,1.0,1.0};   # ~{1/width, 1/height, 1, 1}
PARAM two={2.0,2.0,2.0,1.0};
PARAM texSize = {0.015,0.015,0.015,0.015};
PARAM zRange = {-0.01, -0.01, 0.01, -0.01};
PARAM invProjM0= program.local[0];
PARAM invProjM1= program.local[1];
PARAM invProjM2= program.local[2];
PARAM invProjM3= program.local[3];
#PARAM invProjM0= state.matrix.projection.inverse.row[0];
#PARAM invProjM1= state.matrix.projection.inverse.row[1];
#PARAM invProjM2= state.matrix.projection.inverse.row[2];
#PARAM invProjM3= state.matrix.projection.inverse.row[3];
ATTRIB tex0 = fragment.texcoord[0];
OUTPUT out = result.color;
OUTPUT zout = result.depth;

MOV vdir,fragment.position;
#normalised window coords…
MUL vdir,vdir,two;
MUL vdir,vdir,viewport;
ADD vdir,vdir,hitDone;

#vector transform by matrix…
# (note: summing x*row0 + y*row1 + z*row2 + w*row3 multiplies by the transpose;
#  if the parameters hold matrix rows, as the state.matrix bindings do, use one DP4 per row instead)
SWZ vdirx,vdir,x,x,x,x;
SWZ vdiry,vdir,y,y,y,y;
SWZ vdirz,vdir,z,z,z,z;
MOV vdirw,startCoords;
MUL vdirx,vdirx,invProjM0;
MUL vdiry,vdiry,invProjM1;
MUL vdirz,vdirz,invProjM2;
MUL vdirw,vdirw,invProjM3;
ADD vdir,vdirx,vdiry;
ADD vdir,vdir,vdirz;
ADD vdir,vdir,vdirw;

#normalisation…
DP3 dot,vdir,vdir;
RSQ dot,dot.x;
MUL vdir,vdir,dot;
MUL vdirTex,vdir,texSize;
MUL vdir,vdir,zRange;

#init scan3d.
MOV res,vide;        # res is read by the first CMP below, so start it empty
MOV hit,hitDone;
MOV empty,vide;
MOV texCoords,tex0;

#start scan iteration0
TEX temp, texCoords, texture[0], 3D;
CMP res, hit, temp, res;
SLT hit,empty,res;
ADD hit,hitDone,hit;
ADD texCoords,vdirTex,texCoords;

etc…

/* oops modified wrong texture coords below*/
Note that the bounding box corners (s,t,r) correspond to texture3D bounds [0,1].

Sorry for not providing any shots; our website is down due to the problems on the internet right now.


It’s not inverse(modelview * projection), it’s inverse(projection * viewportdepthrange).

You’re going from window space to eye space.

Sorry cass, I don’t understand what you mean by the viewportDepthRange matrix. There is information about the viewport and the depth range separately, but I don’t know how to build a matrix from them?

Originally posted by Ozzy:
Sorry cass, I don’t understand what you mean by the viewportDepthRange matrix. There is information about the viewport and the depth range separately, but I don’t know how to build a matrix from them?

The viewportdepthrange matrix is just a matrix that maps

x = x * (w/2) + (w/2)
y = y * (h/2) + (h/2)
z = z * 1/2 + 1/2
w = w

(assuming the typical viewport and depthrange)

Should be no problem to write this in matrix form and concatenate it with the projection matrix, then invert it.
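In C, with a small matrix library, that comes down to something like this (a sketch; mat4_mul and mat4_invert stand in for whatever helpers your library provides, and matrices are kept row-major here so each row can be loaded directly as a program parameter):

#include <GL/gl.h>   /* glProgramLocalParameter4fvARB may also need glext.h */

/* Sketch: build the viewport/depthrange matrix for a w x h viewport with the
   default depth range, concatenate it with the projection matrix, invert, and
   load the rows of the inverse as fragment program local parameters.
   mat4_mul / mat4_invert are placeholder names for your own math helpers. */
void load_unproject_matrix(const float proj[4][4], float w, float h)
{
    const float vpdr[4][4] = {
        { w * 0.5f, 0.0f,     0.0f, w * 0.5f },   /* x = x*(w/2) + (w/2) */
        { 0.0f,     h * 0.5f, 0.0f, h * 0.5f },   /* y = y*(h/2) + (h/2) */
        { 0.0f,     0.0f,     0.5f, 0.5f     },   /* z = z*1/2  + 1/2    */
        { 0.0f,     0.0f,     0.0f, 1.0f     }    /* w = w               */
    };
    float winFromClip[4][4], inv[4][4];
    int i;

    mat4_mul(winFromClip, vpdr, proj);   /* clip space -> window space          */
    mat4_invert(inv, winFromClip);       /* window space -> eye space (homog.)  */

    for (i = 0; i < 4; ++i)
        glProgramLocalParameter4fvARB(GL_FRAGMENT_PROGRAM_ARB, i, inv[i]);
}

The rows of the inverse then line up with the program.local[0..3] parameters in the fragment program above, and one DP4 per row gives the transformed vector directly.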

Thanks -
Cass

Originally posted by cass:

Should be no problem to write this in matrix form and concatenate it with the projection matrix, then invert it.

Assuming the projection matrix can be inverted. (Not a given, since there’s absolutely nothing wrong with a singular projection matrix.)

But while back transforming the window coordinates to eyespace per fragment can get the job done, I can’t help but ask why do it that way?

I understand that the request is not to use a vertex program if at all possible. It’s quite possible: use the fixed-function TEXTURE_GEN_MODE of EYE_LINEAR.

ATTRIB Pshad = fragment.texcoord[1]; # TEXTURE_GEN_MODE EYE_LINEAR
TEMP vdir;
DP3 vdir.w, Pshad, Pshad;
RSQ vdir.w, vdir.w;
MUL vdir.xyz, Pshad, vdir.w;
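The CPU-side setup for that texcoord is the usual eye-plane texgen (a sketch; the texture unit and function name are illustrative, and it assumes the eye planes are specified while the modelview is identity):

#include <GL/gl.h>   /* glActiveTextureARB may also need glext.h */

/* Sketch: have fixed-function texgen deliver the eye-space vertex position in
   texcoord[1] (Pshad above). GL_EYE_PLANE coefficients are transformed by the
   inverse of the current modelview when specified, so load identity first. */
void setup_eyepos_texgen(void)
{
    static const GLfloat px[4] = { 1, 0, 0, 0 };
    static const GLfloat py[4] = { 0, 1, 0, 0 };
    static const GLfloat pz[4] = { 0, 0, 1, 0 };
    static const GLfloat pw[4] = { 0, 0, 0, 1 };

    glActiveTextureARB(GL_TEXTURE1_ARB);      /* -> fragment.texcoord[1] */

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
    glTexGenfv(GL_S, GL_EYE_PLANE, px);
    glTexGenfv(GL_T, GL_EYE_PLANE, py);
    glTexGenfv(GL_R, GL_EYE_PLANE, pz);
    glTexGenfv(GL_Q, GL_EYE_PLANE, pw);

    glPopMatrix();

    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_GEN_Q);
}

With that, Pshad arrives as the perspective-correct eye-space position of the fragment, and normalizing it gives the eye vector.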

fin?

-mr. bill

Originally posted by mrbill:
Assuming the projection matrix can be inverted. (Not a given, since there’s absolutely nothing wrong with a singular projection matrix.)

True, but they’re certainly very uncommon. Driver developers cannot rule out singular projection matrices, but application developers can easily do so.

[b]
But while back transforming the window coordinates to eyespace per fragment can get the job done, I can’t help but ask why do it that way?

I understand that the request is not to use a vertex program if at all possible. It’s quite possible: use the fixed-function TEXTURE_GEN_MODE of EYE_LINEAR.

ATTRIB Pshad = fragment.texcoord[1]; # TEXTURE_GEN_MODE EYE_LINEAR
TEMP vdir;
DP3 vdir.w, Pshad, Pshad;
RSQ vdir.w, vdir.w;
MUL vdir.xyz, Pshad, vdir.w;

fin?

-mr. bill[/b]

Agreed, this is another way to do it. On older hardware, it’s the only way.

In terms of total calculation, though, it’s not much better, and potentially worse.

Unproject to eye or world coordinates has these costs:

constant: matrix math, load program constants
per-vertex: none
per-primitive: none
per-fragment: 4 x DP4, RSQ, MUL, and consumes one matrix worth of program constants

Interpolate eye or world coordinates:

constant: none
per-vertex: 4 x DP4 (may already be available), consumes an interpolant
per-primitive: any extra setup/clipping costs
per-fragment: perspective-correct interpolation, consumes an interpolant

In general, perspective-correct interpolation will be more efficient than unprojection, but triangle size and resource consumption become factors.

The nice thing about unproject is that it requires no extra plumbing. As a shader writer, it is then convenient to rely on the ability to “transform” between some well-known spaces (like RenderMan does) within the shader without having to change the input parameters to the shader.

I agree with MrBill that making use of perspective correct parameter interpolation is almost certainly the most efficient way to effect unprojection today, but I wouldn’t be surprised to see the trend go toward the RenderMan style of transform for simplicity of shader design.

Thanks -
Cass

Originally posted by SeskaPeel:
[b]Ozzy before normalizing it, you need to unproject it …

the way I do is funny :
…

My own little trick, FWIW …

SeskaPeel.[/b]

I tried it out, but I still get a very denormalized vector.

I’m having an issue with mipmapping.
Once it’s fixed, I’ll check whether the normals have the correct norm with my method.

SeskaPeel.