I have used cubemaps in the fixed pipeline, but have moved over to shaders. I was able to quickly implement cubemapping in GLSL, using an example from the Orange Book’s site. I like the fact that shader cubemaps don’t get all wavy and distorted.
However, the cubemap rotates with the camera. I want the cubemap to move with the camera, and not rotate, ever. Can anyone help?
If you want to use the cube map for reflections, you could simply calculate the reflection vector in world space instead of eye space. That way the map wouldn’t rotate when the camera orientation changes.
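A minimal vertex-shader sketch of that idea. The application would have to supply the model matrix and the world-space camera position itself; u_model, u_cameraPosWorld, and u_envMap are made-up uniform names, since GLSL has no built-ins for them:

```glsl
// Vertex shader sketch: reflection vector computed in world space.
uniform mat4 u_model;          // object-to-world (model) matrix, app-supplied
uniform vec3 u_cameraPosWorld; // camera position in world space, app-supplied

varying vec3 v_reflectDir;     // world-space lookup vector for the cube map

void main()
{
    vec3 worldPos = (u_model * gl_Vertex).xyz;

    // Rotation part of the model matrix; correct for the normal only when
    // the model matrix contains no non-uniform scale.
    mat3 rot = mat3(u_model[0].xyz, u_model[1].xyz, u_model[2].xyz);
    vec3 worldNormal = normalize(rot * gl_Normal);

    vec3 viewDir = normalize(worldPos - u_cameraPosWorld);
    v_reflectDir = reflect(viewDir, worldNormal);

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The fragment shader would then just sample the cube map with it: `gl_FragColor = textureCube(u_envMap, v_reflectDir);`.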
Your code makes it so the cubemap doesn’t move with the camera; it just stays completely stationary, as though it were a 2D texture.
I am guessing I just need to add the camera position to the vertex position, but I am completely new at GLSL. I don’t know if adding the camera position is what I am after, and even if it is, I am not sure how to go about doing that!
Well, this is actually one thing that sucks in OpenGL (and will hopefully get fixed in OpenGL 3.0): you need the world matrix (or, in OpenGL terms, the model matrix). However, since OpenGL only exposes the combined modelview matrix, you will need to provide the model matrix yourself.
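One common workaround (a sketch, not the only way): upload your model matrix into a texture-matrix slot on the application side with glMatrixMode(GL_TEXTURE) followed by glLoadMatrixf, then read it back in the shader, where gl_TextureMatrix[0] plays the role of the missing model-matrix built-in:

```glsl
// Assumes the application has loaded the object's model (world) matrix
// into texture matrix unit 0.
varying vec3 v_worldPos;

void main()
{
    v_worldPos  = (gl_TextureMatrix[0] * gl_Vertex).xyz; // world-space position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```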
The cubemap appears upside-down, but that is a small matter.
Now when I have a model that is rotated and moved around, the results are totally wrong because of the rotation. If I transform the camera position to the object’s coordinate system, I get semi-correct results, except that the cubemap appears at the same orientation as the object! If an object is lying on its side, the cubemap will be rotated the same way. Obviously, this is because I transform the camera position to the object’s coord system.
I need some way to figure out the vector between a vertex and the camera, in world space. First, I changed the camera routine so that when I position and rotate the camera, I am using the projection matrix instead of the modelview matrix. I have it working for non-rotated world geometry, but it still doesn’t work when I have a rotated object.
I am just passing the camera position into the shader from the main program. I am trying to get the vertex position in world space. Multiplying it by gl_ModelViewMatrix doesn’t seem to do it.
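For what it’s worth, gl_ModelViewMatrix is the model and view (camera) transforms multiplied together, so it carries the vertex all the way into eye space, right past world space. To stop at world space you need the model matrix on its own; a sketch of the difference, with a hypothetical u_model uniform standing in for a separately supplied model matrix:

```glsl
uniform mat4 u_model; // hypothetical uniform: the model matrix alone

varying vec3 v_worldPos;

void main()
{
    vec3 eyePos = (gl_ModelViewMatrix * gl_Vertex).xyz; // eye space, NOT world space
    v_worldPos  = (u_model * gl_Vertex).xyz;            // world space
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```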
Okay, now I am passing the inverted view matrix into the texture matrix, and accessing that in the shader. Here’s what is happening:
glMatrixMode(GL_PROJECTION);
// set up the projection and viewport here
glMatrixMode(GL_MODELVIEW);
// set up the camera position and rotation
In the shader, the vertex position is multiplied by the ModelView matrix, which includes the camera rotation(!). So we want to multiply the result of that operation by the inverse of the camera part of the modelview matrix.
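That multiplication, assuming the application has loaded the inverse of the camera transform into the texture matrix as described (a sketch; note that a transpose only equals the inverse when the camera matrix is a pure rotation, so a translated camera needs a full inverse):

```glsl
varying vec3 v_worldPos;

void main()
{
    vec4 eyePos   = gl_ModelViewMatrix * gl_Vertex; // includes the camera transform
    vec4 worldPos = gl_TextureMatrix[0] * eyePos;   // inverse view: back to world space

    v_worldPos  = worldPos.xyz;
    gl_Position = gl_ProjectionMatrix * eyePos;
}
```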
I’m still trying to figure out exactly what it needs. I am trying to get the modelview matrix after I rotate the camera, then pass the transpose of that into the shader, in the GL_TEXTURE matrix.
Although this is an old topic, I am also trying to get this right. I have a moving camera and a moving object. I want the surface to be reflective, using cubemapping.
I have tried many things, with various results, but there is always some rotation missing here or there. My latest attempt is something like:
The result is still not correct. This should be one of the basic tutorial examples for GLSL, yet I haven’t seen any working example for moving camera and moving objects.
Please let me know if anyone has managed to solve this problem.
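Not a tested, definitive answer, but here is a minimal sketch of the approach discussed earlier in the thread, which should handle both a moving camera and a moving object. It assumes the application loads the inverse of the camera (view) matrix into texture matrix 0 each frame, and that u_envMap is a made-up name for the cube map sampler (the two shaders below are separate compilation units):

```glsl
// ---- vertex shader ----
varying vec3 v_reflectDir;

void main()
{
    vec4 eyePos    = gl_ModelViewMatrix * gl_Vertex;
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);

    // Reflect in eye space (the object's own rotation is already handled
    // by the modelview / normal matrices) ...
    vec3 eyeReflect = reflect(normalize(eyePos.xyz), eyeNormal);

    // ... then rotate back to world space with the inverse view matrix so the
    // cube map stays fixed while the camera moves.  w = 0.0 applies only the
    // rotational part, ignoring the camera translation.
    v_reflectDir = (gl_TextureMatrix[0] * vec4(eyeReflect, 0.0)).xyz;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// ---- fragment shader ----
uniform samplerCube u_envMap; // made-up uniform name
varying vec3 v_reflectDir;

void main()
{
    gl_FragColor = textureCube(u_envMap, v_reflectDir);
}
```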