Environment mapping as seen in Quake 3

Now it is time for me to ask.
Can someone describe or suggest a way to achieve environment mapping as seen in Quake 3? As far as I can tell, it is done with spherical or planar texture coordinate generation.
I am not interested in mixing textures, etc. - I just want to know about the math behind it.
The problem is that generic spherical mapping depends on camera rotation (it uses eye coordinates), and it distorts badly when applied to a plane, while in Quake 3 it does not. So it seems that we need to use planar mapping and/or play with the texture matrix.
Thanks.

The way I do it is:

Take your transformation matrix and make a transposed copy of it. Multiply your rotation matrix by the transposed one. Then, with the resulting matrix, transform your vertex normals. The UV coords of each vertex can then be computed with:

u = 0.5 * transformed_normal.x + 0.5
v = 0.5 * transformed_normal.y + 0.5
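
In C that boils down to something roughly like this (just a sketch - the matrices are assumed to be column-major 4x4 float arrays like OpenGL uses, and the names rot, xform and n are placeholders, not actual code from my engine):

float xformT[16], m[16];
float nx, ny, u, v;
int i, j, k;

/* transposed copy of the transformation matrix */
for (i = 0; i < 4; ++i)
    for (j = 0; j < 4; ++j)
        xformT[4*i + j] = xform[4*j + i];

/* m = rot * xformT */
for (i = 0; i < 4; ++i)        /* column */
    for (j = 0; j < 4; ++j) {  /* row */
        float s = 0.0f;
        for (k = 0; k < 4; ++k)
            s += rot[4*k + j] * xformT[4*i + k];
        m[4*i + j] = s;
    }

/* per vertex: transform the unit-length normal n by the upper 3x3 of m,
   then remap x and y from [-1,1] to [0,1] */
nx = m[0]*n[0] + m[4]*n[1] + m[8]*n[2];
ny = m[1]*n[0] + m[5]*n[1] + m[9]*n[2];
u = 0.5f * nx + 0.5f;
v = 0.5f * ny + 0.5f;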

This seems to work great for me, and looks very nice.

Thanks, Fenris. Btw … the formula looks a lot like generic spherical mapping - though you operate with the normal vector, not the reflection vector, and it is assumed to be unit-length.

Interesting method - could you explain why all these transformations are needed?
What is the result when you multiply the two matrices? And why multiply it with the normals?

Cheers

The normal tells you which direction the face is pointing, and that direction is what determines what the reflection will look like (I mean what you will see in the reflection, not how it looks because of material properties). This is why you use the normal.

And you also need to know how it is oriented relative to the viewer (eye space rather than object space). This is why you need to transform it.

Well ok, I got that, but why do we need to transpose the transformation matrix and multiply it with the rotation matrix? It seems to me that using only the rotation matrix would be fine.
Can we just use OpenGL's modelview matrix to get the matrix we need? Or do we have to store the transformations and rotations separately for each object?

Dimi

I don’t know the actual transformation needed to create this kind of environment mapping, but using only the modelview matrix isn’t enough. However, if you do some maths and work out what the transformation matrix for the reflection would have to look like, you will find that it is the same as the modelview matrix multiplied by its own transpose.

First you need to transform the normal so it is relative to the viewpoint; this is why you multiply by the modelview matrix. But that only gives you the transformed normal, and you need to transform it once more to get the reflection. That second matrix happens to be the transposed modelview matrix.
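
If you want to try that without tracking the matrices yourself, you can read the modelview matrix back from OpenGL and build a transposed copy of it; the multiply is then the same as in the snippet further up (again, just a sketch of the idea):

GLfloat mv[16], mvT[16];
int i, j;

glGetFloatv(GL_MODELVIEW_MATRIX, mv);   /* column-major 4x4 */

for (i = 0; i < 4; ++i)
    for (j = 0; j < 4; ++j)
        mvT[4*i + j] = mv[4*j + i];     /* transposed copy */

/* multiply mv by mvT and transform your normals with the
   upper 3x3 of the result */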

But you can also let OpenGL perform the environment mapping. Dunno if it's the same as or similar to what Q3A does, but it looks quite nice anyway. Use texture coordinate generation and generate sphere-map coordinates.

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);

… and you should be ready to go. Just remember to pass vertex normals with your vertices, and enable the reflection texture.
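
For completeness, a minimal draw loop with sphere mapping enabled could look like this (texture loading and the vertex data are assumed to exist already; reflection_texture, verts and num_verts are placeholder names):

glBindTexture(GL_TEXTURE_2D, reflection_texture);
glEnable(GL_TEXTURE_2D);

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);

glBegin(GL_TRIANGLES);
for (i = 0; i < num_verts; ++i) {
    glNormal3fv(verts[i].normal);   /* the normals drive the sphere map */
    glVertex3fv(verts[i].pos);      /* no glTexCoord calls needed */
}
glEnd();

glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);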