OT but any ideas?

Hi - I’m using the PS2 Linux kit for a project, implementing it with ps2gl, the OpenGL-like API. However, there is no support for glTexGeni, and thus no support for GL_SPHERE_MAP - and of course that means no easy environment mapping…

I know that really I’m going to have to implement all the maths myself, although at the moment I haven’t got time, as there are other things to do first. So my question is: are there any cheap hacks to get reflection-map-like texture coordinates (ones that won’t be too much of a pain to set up)?

I found this: http://romka.demonews.com/graphics/doc/doc/envmap_evg_eng.htm

however it doesn’t seem to work…

I know I’m being lazy here - but if anyone has any suggestions, that would be greatly appreciated.

Thanks,

Caspar.

(thanks dorbie - hadn’t noticed my mistake)

[This message has been edited by Auto (edited 08-01-2002).]

There are a few ways to do this. One is to rotate the vertex’s normal into eye space and simply use the (x,y) components as your (s,t) texture coordinates.

glBegin( GL_TRIANGLES );                // or whatever primitive you're drawing
rotatedNormal = ModelView * normal;     // transform the normal into eye space (CPU side)
glNormal3fv( normal );
glTexCoord2f( rotatedNormal.x, rotatedNormal.y );
glVertex3fv( vertex );
glEnd();

It’s cheesy, but looks ok.

Nice one, thanks - although how do you multiply the normal by the modelview matrix?

Actually, you use the inverse transpose of the modelview to transform the normal. Check the red book for that. Multiplying is easy - here is one example:

typedef float vec4_t[4];

vec4_t ModelIT[3];   /* first three rows of the inverse-transpose modelview */

GetRow( MODELVIEW_INVERSE_TRANSPOSE, 0, ModelIT[0] );
GetRow( MODELVIEW_INVERSE_TRANSPOSE, 1, ModelIT[1] );
GetRow( MODELVIEW_INVERSE_TRANSPOSE, 2, ModelIT[2] );

eyeNormal.x = dot( ModelIT[0], normal );   /* dot each row with the whole normal vector */
eyeNormal.y = dot( ModelIT[1], normal );
eyeNormal.z = dot( ModelIT[2], normal );

Normalize( eyeNormal );

There, kind of a silly example but I hope you understand it.

-SirKnight

[This message has been edited by SirKnight (edited 07-30-2002).]

Can’t seem to find a hell of a lot that’s relevant in the red book - there’s something in Appendix B about using glGetFloatv() to access queryable state variables, but I think I’m probably barking up the wrong tree with that…

How do you get access to these matrices?

Are these all specific GL commands? I’m not sure how much of that ps2gl supports - maybe I’ll have to look into the matrix definitions… hmm, still a bit confused…

thanks though

Yup, you’re on the right path. You get the current modelview matrix via:

GLfloat matrix[16];

glGetFloatv( GL_MODELVIEW_MATRIX, matrix);

GLfloat normal[4];   // normal[3] is 0.0f
GLfloat rNormal[4];  // transformed normal

// GL matrices are column-major: matrix[0], matrix[4], matrix[8], matrix[12]
// start the four columns. DOT here means a 4-component dot product against
// the four floats beginning at that element (expanded in the next post).
rNormal[0] = matrix[0]  DOT normal;
rNormal[1] = matrix[4]  DOT normal;
rNormal[2] = matrix[8]  DOT normal;
rNormal[3] = matrix[12] DOT normal;

normalize(rNormal);

Give that a whirl

OK cool - so, if I’m right here, DOT means calculating the dot product of the normal vector with the corresponding part of the matrix(?) - or do you mean the maths “.” multiplication? I’m presuming the former, though I’m too tired to work out why that would work.

So this returns the normal in eye coords… and hence texture coordinates. I presume that once eye coordinates for the normal are calculated, I could in fact later (time permitting) use the maths explanation in the red book to implement my own sphere map code…

well maybe later…

Thanks for the help,

Caspar.

Yup, it’s the vector dot product. Expanded, it would look like so:

rNormal[0] = matrix[0] DOT normal;

same as

rNormal[0] = matrix[0]*normal[0] + matrix[1]*normal[1] + matrix[2]*normal[2] + matrix[3]*normal[3];

You can take out the last multiply (matrix[3]*normal[3]) and also take out the calculation of rNormal[3] since normal[3] has to be 0.0 (by definition of a normal).

Once you’ve got rNormal calculated, just use rNormal[0] and rNormal[1] for your S and T coordinates of the envmap. It’s not physically correct, but it looks pretty decent.
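Spelled out with the w terms dropped (normalize3 here is just a stand-in for whatever 3-component normalize you have around):

rNormal[0] = matrix[0]*normal[0] + matrix[1]*normal[1] + matrix[2]*normal[2];
rNormal[1] = matrix[4]*normal[0] + matrix[5]*normal[1] + matrix[6]*normal[2];
rNormal[2] = matrix[8]*normal[0] + matrix[9]*normal[1] + matrix[10]*normal[2];

normalize3( rNormal );
glTexCoord2f( rNormal[0], rNormal[1] );   /* S and T */

Strictly speaking, because GL stores matrices column-major, this dots the normal against the matrix’s columns, i.e. it multiplies by the transpose of the modelview rather than the modelview itself. For this hack that’s fine - which is rather the point.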

SirKnight said MODELVIEW_INVERSE_TRANSPOSE, because you’re NOT supposed to use the modelview matrix. Use the transpose of the inverse of the MV matrix.

Look in the red book index under “normal vectors, transforming”, and it will guide you to Appendix G, “Transforming Normals” which explains why you don’t use MV.

Gee, using the index isn’t that hard.
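If ps2gl doesn’t expose that matrix directly, you can build the upper 3x3 of the inverse transpose yourself from the modelview you already fetched with glGetFloatv(). A sketch (invTranspose3x3 is a made-up name, and it assumes the usual column-major GL layout):

/* Writes the upper-left 3x3 of transpose(inverse(m)) into out (column-major).
   Uses the fact that (M^-1)^T is the cofactor matrix divided by the determinant.
   Returns 0 if m is singular. */
int invTranspose3x3( const GLfloat m[16], GLfloat out[9] )
{
    GLfloat c00 = m[5]*m[10] - m[9]*m[6];   /* cofactors of the first row */
    GLfloat c01 = m[9]*m[2]  - m[1]*m[10];
    GLfloat c02 = m[1]*m[6]  - m[5]*m[2];
    GLfloat det = m[0]*c00 + m[4]*c01 + m[8]*c02;
    GLfloat inv;

    if( det == 0.0f )
        return 0;
    inv = 1.0f / det;

    out[0] = c00 * inv;                        /* first column of the result */
    out[1] = (m[8]*m[6]  - m[4]*m[10]) * inv;
    out[2] = (m[4]*m[9]  - m[8]*m[5])  * inv;
    out[3] = c01 * inv;                        /* second column */
    out[4] = (m[0]*m[10] - m[8]*m[2])  * inv;
    out[5] = (m[8]*m[1]  - m[0]*m[9])  * inv;
    out[6] = c02 * inv;                        /* third column */
    out[7] = (m[4]*m[2]  - m[0]*m[6])  * inv;
    out[8] = (m[0]*m[5]  - m[4]*m[1])  * inv;
    return 1;
}

Then eyeN.x = out[0]*n.x + out[3]*n.y + out[6]*n.z, similarly with out[1], out[4], out[7] for y and out[2], out[5], out[8] for z, followed by a renormalize.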

Indeed yes - well, I did say I was being lazy, and I’m just about to go to bed, so I’m not really in the frame of mind to think about getting the inverse of the transpose of the modelview matrix. Point taken, though - I checked the ps2gl docs and found that glGetFloatv() is a working function, so I’ll try some of this in the morning.

Thanks again,

Caspar.

[This message has been edited by Auto (edited 07-30-2002).]

rlskinner - I wasn’t concerned with accurate results. My method does work and it looks ok. You can use whatever matrix you want, the trick is to use a transformed normal as your texture coordinates. We’ve shipped games with that method and nobody’s ever complained about incorrect looking environment mapping. As I said, it’s cheesy but looks ok.

Actually, the red book describes this in Appendix F: Homogeneous Coordinates and Transformation Matrices - Transforming Normals, page 671 in the 3rd edition.

I would suggest giving that page a good once-over. Here are two quotes explaining it, from a conversation a year ago in which two people explained why normals should be transformed by the inverse transpose MV.

Quote by Zeno:

First a comment on transforming normals: I think you have the right idea. Normals are special in that they’re supposed to be unit length, yet are described by a single point in space (3 coordinates). This means that the set of all normalized normals describes a spherical shell around the origin. Anything in the modelview matrix that is not a plain rotation (like scaling or translation) will cause the tip of a normal to go off this shell. Taking the inverse transpose takes care of that problem.

Quote by Alexei_Z:

One more comment on transforming normals.
Actually, to transform normals from object space to eye space you need to rotate them and, in case of non-uniform scaling, scale their coords. Why can’t it be done by multiplying normals with the upper left 3x3 matrix of the modelview matrix? Because normal coordinates are scaled differently from vertex coordinates, though rotated in the same manner. That is when vertex coordinate increases, the corresponding normal coordinate at that vertex decreases. This fact makes us to use the inverse scaling part (while the regular rotation part) of the modelview matrix. Note, that in this case you still need renormalize normals after transformation. After some math, if you multiply the inverse scaling matrix by the rotation matrix, you’ll get the (upper left 3x3 of) inverse-transposed modelview matrix.

-SirKnight

I’m not sure I understand why it needs to be transposed. Is it because of the order of multiplication:

normal * matrix = transpose(matrix) * normal

You do need to take the inverse of the modelview, and the reason for this can be derived from the plane equation.

The red book also shows the math for doing GL_SPHERE_MAP, in the glTexGen part.
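That math boils down to roughly this (a sketch, not the book’s literal code; eyeU is the unit eye-space vector from the eye to the vertex and eyeN the unit eye-space normal, both assumed already computed):

#include <math.h>

/* GL_SPHERE_MAP texgen: reflect eyeU about eyeN, then map the
   reflection vector into [0,1] texture coordinates. */
void sphereMapCoords( const float eyeU[3], const float eyeN[3],
                      float *s, float *t )
{
    /* reflection vector: r = u - 2n(n.u) */
    float d  = eyeN[0]*eyeU[0] + eyeN[1]*eyeU[1] + eyeN[2]*eyeU[2];
    float rx = eyeU[0] - 2.0f*eyeN[0]*d;
    float ry = eyeU[1] - 2.0f*eyeN[1]*d;
    float rz = eyeU[2] - 2.0f*eyeN[2]*d;

    /* m = 2*sqrt(rx^2 + ry^2 + (rz+1)^2), then bias into [0,1] */
    float m = 2.0f * (float)sqrt( rx*rx + ry*ry + (rz + 1.0f)*(rz + 1.0f) );

    *s = rx/m + 0.5f;
    *t = ry/m + 0.5f;
}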

V-man

OK, well - at the moment I’m only looking for a cheap hack, so I’ll try the straight MV for now. But I guess that to compute a proper sphere map function I would need the true inverse-transpose MV matrix, and writing a matrix inverse function isn’t the first thing I fancy doing right now.

I hope they sort out the glDoAllMyMathsForMe() function in gl2.0

Thanks for the replies,

Caspar.

V-Man,

The inverse transpose of a matrix does the same rotation as the original matrix, but the scaling is inverted. It turns out that inverted scaling is what you want for normals (if an object doubles in size on the x axis, then the x component of its normals needs to be halved). The normals will still need to be rescaled afterwards (hence the need for GL_RESCALE_NORMAL), but they will point in the right direction.
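To put that in symbols: write the modelview’s upper 3x3 as M = R*S, with R a rotation and S a diagonal scale. Using transpose(A*B) = transpose(B)*transpose(A) and transpose(R) = inverse(R):

inverse(M)            = inverse(S) * inverse(R)
transpose(inverse(M)) = transpose(inverse(R)) * transpose(inverse(S))
                      = R * inverse(S)

So the rotation comes through untouched and the scale comes out inverted, exactly as described above.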

Transposing a matrix will invert the rotation function, but the scaling part will still be left intact. For instance, a clockwise rotation becomes a counter-clockwise rotation, but it will still double things in size.

Inverting a matrix will invert every aspect of the transformation: rotation and scaling both get inverted. So, to use the previous instance again, clockwise becomes counter-clockwise, but doubling in size becomes halving in size.

If you transpose then invert (as the notion of an inverse transpose matrix implies) then when you transpose you are first inverting the rotation but leaving the scaling as the original. Then by inverting you are uninverting the rotation and inverting the untouched scaling. The result is the original rotation with inverted scaling.

I have left translation out of this because even though I worked it out for myself before, I do not remember the result.

It would appear that transposing the inverse (invert then transpose) would give the same result, but since I have failed to consider translation, that may not be a valid conclusion in general.

Spheremap is done with texgen not texenv.
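For reference, on a GL that does support it (which ps2gl doesn’t, per the top of the thread), sphere mapping is switched on with:

glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP );
glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP );
glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_GEN_T );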