Is there a difference between the sphere mapping in Microsoft's software implementation and the sphere mapping in NVIDIA's drivers? I see different reflections (rotated 90 degrees) in these implementations.
Are there additional parameters to adjust it, or do I have to adjust the normal vectors manually for each implementation?
Has anybody had similar experiences with other graphics accelerators?
Yup, I’ve got a similar problem. I’m doing some sphere mapping, and on a TNT2 it’s beautiful. But on a PC with nothing better than the basic MS software drivers, it’s as if the map is stretched vertically rather than horizontally.
On the subject of sphere mapping: I was wondering if anyone else has wanted to implement sphere mapping so that the reflection texture coordinates are calculated from an incident direction other than the camera’s, which is what OpenGL uses when it generates texture coordinates. I’d like to do this because I’m experimenting with having the camera chase an object (a hover car); it orients itself directly behind the object. I want the reflection map to change along with the car’s rotation when I turn the car, just as it does when I rotate the car while the camera stays in a fixed location.
Right now the map stays almost exactly the same, since the camera aligns itself perfectly behind the car (it moves slightly, but not enough to get the effect I want).
I know what the sphere mapping formula is, and I believe I can calculate the texture coordinates myself, but I figure that if I let OpenGL do it, it may be faster.
Has anyone else messed with this? Could GL_TEXTURE_MATRIX have something to do with it?