Projective texturing problem with shaders

Hi,

I’m having a problem with projective texturing.

So I decided to hold off on the projective texturing and manually project the scene using the light matrices to see if it’s right.

Right now, I send my light projection matrix, light view matrix, and camera view matrix inverse to the effect.

I retrieve the world/model space matrix like so:

ModelMatrix = MV * (V ^ -1)

where MV is the GL modelview matrix
and (V ^ -1) is the camera view inverse.

Then my light modelview is found like this:
light_modelview = ModelMatrix * lightViewMatrix;

Then finally:
light_mvp = lightProjectionMatrix * light_modelview;

outputPos = mul(light_mvp, position);
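For reference, the CPU side that grabs the light matrices can look roughly like this (a simplified sketch, not my exact code; it assumes a current GL context, and the light position/target and frustum values are placeholders):

#include <GL/gl.h>
#include <GL/glu.h>

// Build the light's view/projection on the GL matrix stacks and read them back.
void grabLightMatrices(float lightView[16], float lightProj[16])
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    gluLookAt(10.0, 10.0, 10.0,   // light position (placeholder)
               0.0,  0.0,  0.0,   // light target   (placeholder)
               0.0,  1.0,  0.0);  // up vector
    glGetFloatv(GL_MODELVIEW_MATRIX, lightView);  // column-major
    glPopMatrix();

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluPerspective(45.0, 1.0, 1.0, 100.0);        // placeholder light frustum
    glGetFloatv(GL_PROJECTION_MATRIX, lightProj); // column-major
    glPopMatrix();

    glMatrixMode(GL_MODELVIEW);
}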

<I stopped trying projective texturing because, on testing the output from light_mvp, I found it was wrong, so I decided to fix this first before moving on to projective texturing.>

The output is wrong: when I move the camera, the scene rotates weirdly, which shouldn’t happen;
it should only show the scene from the light’s view.

I’m quite sure the input matrices are correct, as when testing with the main camera (i.e. sending the default camera instead of the light camera), the scene renders perfectly.

If I put the light position where the default user camera is, the scene render isn’t completely screwed up, but it is still weird: it tilts whenever I rotate the main user camera (when the main user camera shouldn’t have anything to do with this render).

What am I doing wrong here?

Thanks in advance.

You are multiplying some matrices in the wrong order. When you combine the calculations you wrote, the equivalent is


light_mvp = lightProjectionMatrix * MV * (V ^ -1) * lightViewMatrix

which is wrong.

But if I put the lightProjectionMatrix at the end, it totally screws up.

Can you please tell me the right order?

<Also, how come this yields the right results with the default camera, or with the camera at the light position?>

Edit:

I’ve fixed the order, but it still doesn’t work with the light camera. If I keep the light camera in the same position/view as the default cam, then it works.

Here’s the corrected order:

modelMatrix = viewInverseMatrix * modelViewMatrix

lightModelViewMatrix = lightViewMatrix * modelMatrix

lightModelViewProj = lightProjectionMatrix * lightModelViewMatrix

finalposition = lightModelViewProj * position;
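In CPU-side terms, the chain I’m now building is something like this (a sketch with column-major, OpenGL-style matrices and column vectors; the mat4 helpers are just for illustration, not my actual code):

// Multiply two 4x4 matrices stored column-major (OpenGL convention): r = a * b.
void mat4Mul(const float a[16], const float b[16], float r[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + row] * b[col * 4 + k];
            r[col * 4 + row] = sum;
        }
}

// Transform a column vector: out = m * v.
void mat4MulVec(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0 * 4 + row] * v[0] + m[1 * 4 + row] * v[1] +
                   m[2 * 4 + row] * v[2] + m[3 * 4 + row] * v[3];
}

// model    = viewInverse * modelView
// lightMV  = lightView   * model
// lightMVP = lightProj   * lightMV
void buildLightMvp(const float viewInverse[16], const float modelView[16],
                   const float lightView[16], const float lightProj[16],
                   float lightMvp[16])
{
    float model[16];
    float lightModelView[16];
    mat4Mul(viewInverse, modelView, model);
    mat4Mul(lightView, model, lightModelView);
    mat4Mul(lightProj, lightModelView, lightMvp);
}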

But all in vain; there is still some problem which makes it incorrect. Am I doing something wrong when getting the model matrix?

I would suspect that something is wrong with the lightViewMatrix and viewInverseMatrix matrices. Maybe they are transposed? If the light camera is in the same position as the view camera, they will cancel each other out, so that would not matter. In any other case the result will be wrong.

You can try to transform some nice vector through the individual transformations and see where things start to look wrong.
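For example, something along these lines (a rough sketch; the matrices and the test point are placeholders):

#include <cstdio>

// Apply a column-major 4x4 matrix to a column vector: out = m * v.
static void transformPoint(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0 * 4 + row] * v[0] + m[1 * 4 + row] * v[1] +
                   m[2 * 4 + row] * v[2] + m[3 * 4 + row] * v[3];
}

// Push one known point through each stage and print the intermediate results,
// so you can see at which matrix the numbers stop making sense.
void debugLightChain(const float modelView[16], const float viewInverse[16],
                     const float lightView[16], const float lightProj[16])
{
    float p[4] = { 1.0f, 2.0f, 3.0f, 1.0f }; // any easily recognizable point
    float eye[4], world[4], lightEye[4], lightClip[4];

    transformPoint(modelView, p, eye);              // object -> camera eye space
    transformPoint(viewInverse, eye, world);        // camera eye -> world space
    transformPoint(lightView, world, lightEye);     // world -> light eye space
    transformPoint(lightProj, lightEye, lightClip); // light eye -> light clip space

    std::printf("eye:        %f %f %f %f\n", eye[0], eye[1], eye[2], eye[3]);
    std::printf("world:      %f %f %f %f\n", world[0], world[1], world[2], world[3]);
    std::printf("light eye:  %f %f %f %f\n", lightEye[0], lightEye[1], lightEye[2], lightEye[3]);
    std::printf("light clip: %f %f %f %f\n", lightClip[0], lightClip[1], lightClip[2], lightClip[3]);
}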

Hi Komat,

Thanks a ton!!! I was using the standard GLU tools, like gluLookAt etc., and retrieving the matrices using glGetFloatv, but somehow it seems that I need to transpose all my matrices.

I transposed cameraViewInverse, lightView, and lightProjection, and it’s working perfectly. My question is: how come I had to transpose these matrices if I retrieved them from OpenGL itself? <I am using the Cg shader language, could that be the problem?> I hope this post helps other people who face the same problem.

Thanks again, and if you know why I had to transpose these matrices, please tell; it may be useful for other people as well. <Note: if I use the GL state matrices directly, this problem isn’t there; it only appears with uniforms.>

The likely reason is a mismatch between the row-major and column-major memory interpretations of the matrix as you move it around (i.e. one API stores it one way while the next part of the program assumes it is stored the opposite way).
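For example, with the Cg runtime the difference is only in how the same 16 floats are interpreted when you upload them (the uniform name here is just an example):

#include <Cg/cg.h>
#include <GL/gl.h>

// glGetFloatv hands the matrix back in column-major order.  Which upload call
// you use decides how those 16 floats are interpreted on the shader side.
void uploadLightView(CGprogram program)
{
    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m); // column-major, straight from GL

    CGparameter param = cgGetNamedParameter(program, "lightViewMatrix");

    // Treats m as column-major -- matches what glGetFloatv returned.
    cgSetMatrixParameterfc(param, m);

    // Treats m as row-major -- equivalent to uploading the transpose.
    // cgSetMatrixParameterfr(param, m);
}

Whichever upload matches the convention your shader’s mul() expects will look correct; the other one behaves like a transpose.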

Yeah, I thought it was the same problem (row/column-major storage), but weirdly enough, all my other transformations seem to be working without the transpose. Oh well.

Thanks again.