OpenGL/GLSL ShadowMapping Texture Matrix Issue

Hello,

I am currently implementing GLSL shadows as described in this tutorial:
http://fabiensanglard.net/shadowmapping/index.php

It works quite well with the default matrix transformations, but as soon as I set up my camera like this:


glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fieldOfView, aspectRatio, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(posCoord[0], posCoord[1], posCoord[2], eyeCoord[0], eyeCoord[1], eyeCoord[2], upVec[0], upVec[1], upVec[2]);

glScalef(1, -1, 1);
glTranslatef(0, -h, 0);

everything breaks. I do the scaling and translating so that 0,0 is the top left instead of the bottom left, which is OpenGL's default. I am pretty new to matrix calculations and I tried everything I could think of on the texture matrix to make it work with this transformation, but I failed. Does anybody have an idea how to fix that? It's probably something simple I just don't see.

I set up my texture matrix like this:


void setTextureMatrix()
{
	static double modelView[16];
	static double projection[16];
	
	// This matrix transforms every coordinate x, y, z:
	// x = x* 0.5 + 0.5 
	// y = y* 0.5 + 0.5 
	// z = z* 0.5 + 0.5 
	// Moving from unit cube [-1,1] to [0,1]  
	const GLdouble bias[16] = {	
		0.5, 0.0, 0.0, 0.0, 
		0.0, 0.5, 0.0, 0.0,
		0.0, 0.0, 0.5, 0.0,
		0.5, 0.5, 0.5, 1.0};
	
	// Grab modelview and projection matrices
	
	glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
	glGetDoublev(GL_PROJECTION_MATRIX, projection);
	
	
	glMatrixMode(GL_TEXTURE);
	glActiveTextureARB(GL_TEXTURE7);
	
	glLoadIdentity();	
	
	glLoadMatrixd(bias);
	
	// Concatenate all matrices into one.
	glMultMatrixd (projection);
	glMultMatrixd (modelView);
	
	// Go back to normal matrix mode
	glMatrixMode(GL_MODELVIEW);
}
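
Side note, in case it helps anyone reading along: as far as I can tell, the bias load above is equivalent to building the same matrix with the helper calls, which makes the *0.5 + 0.5 mapping explicit. This is just another way of writing it, not something from the tutorial:

glMatrixMode(GL_TEXTURE);
glActiveTextureARB(GL_TEXTURE7);
glLoadIdentity();
glTranslated(0.5, 0.5, 0.5);  // the +0.5 offset
glScaled(0.5, 0.5, 0.5);      // the *0.5 scale
// then glMultMatrixd(projection) and glMultMatrixd(modelView) as above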

Thank you!

The scale and translation you set up affect the modelview matrix, so the drawn geometry is scaled (turned upside down in your case) and translated.

To move the screen origin you have to modify the projection matrix, not the modelview one.

Hmm, I know what I am doing; that's the only solution I had in mind to make 0,0 the top left. How would that look on the projection matrix? Or do you mean that to fix the bug I have to do the same thing on the projection matrix?

What I tried is this

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fieldOfView, aspectRatio, zNear, zFar);
glScalef(1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(posCoord[0], posCoord[1], posCoord[2], eyeCoord[0], eyeCoord[1], eyeCoord[2], upVec[0], upVec[1], upVec[2]);

That seems to have the same effect (it shifts the origin to the top left), but the shadow map still does not work. Any ideas? Thanks!

No, you can't do it this way. It is normal that you obtain the same effect, because vertex transformation is done in this order:

screen_position = Window_transformation * Projection * Modelview * vertex_pos

What you want to do is move the window coordinates, so you have to do it after the perspective transformation, i.e. before the gluPerspective call.
If you do:


glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1, -1, 1);
gluPerspective(fieldOfView, aspectRatio, zNear, zFar);

the rendering will be turned upside down.
You can use glFrustum instead of gluPerspective for better control of the left, right, bottom, and top values.
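
For example, something like this (an untested sketch; fieldOfView is assumed to be the vertical field of view in degrees, as gluPerspective expects, and tan() needs <math.h>):

// Same frustum as gluPerspective(fieldOfView, aspectRatio, zNear, zFar),
// but built with glFrustum. Passing the top value as the "bottom" argument
// and vice versa mirrors the Y axis, with the same effect as the
// glScalef(1, -1, 1) on the projection stack.
double top    = tan(fieldOfView * 3.14159265358979 / 360.0) * zNear;
double bottom = -top;
double right  = top * aspectRatio;
double left   = -right;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(left, right, top, bottom, zNear, zFar);  // note: top and bottom swapped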

No other way? I don't really like using glFrustum. So if I understand you correctly, with the order I use right now there is no way to get the shadow map working?

Because no matter where I put the glScalef(1, -1, 1), the texture matrix is still broken.

Thanks for your patience.

Edit: Another thing I noticed is that the shadows only break when the scaling is negative, if that helps in any way.

As far as I understand, I see no other way, but may I ask why you want the screen coordinate origin to be the upper left corner?
What I can't explain right now is why this transformation corrupts your shadows… I can't say anything more without the entire code. Are you sure it is taken into account when you transform the fragment position into light space?

In addition, I have not looked at the tutorial code yet, but with shaders you should not have to pass the transformation matrix from the camera point of view (the one passed through the texture matrix stack). Since vertex coordinates are given in object space, it is possible to transform them into light clip space, assuming you pass the light matrices to your shader.
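
Something like this, for example (untested; shadowShaderId and the uniform names lightProjection/lightModelView are only placeholders for whatever your program actually uses):

// While the light's matrices are still the current ones, i.e. at the
// point where setTextureMatrix() is called today:
float lightProj[16], lightMV[16];
glGetFloatv(GL_PROJECTION_MATRIX, lightProj);
glGetFloatv(GL_MODELVIEW_MATRIX, lightMV);

// Later, with the shadow shader bound:
glUseProgramObjectARB(shadowShaderId);
glUniformMatrix4fvARB(glGetUniformLocationARB(shadowShaderId, "lightProjection"), 1, GL_FALSE, lightProj);
glUniformMatrix4fvARB(glGetUniformLocationARB(shadowShaderId, "lightModelView"), 1, GL_FALSE, lightMV);
// The vertex shader can then compute lightProjection * lightModelView * gl_Vertex
// and apply the 0.5 bias itself, instead of reading gl_TextureMatrix[7].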

Actually, it's just because I am used to having the origin at the top left and I would like to stick with that for now.
I am surprised that it is so difficult in this case.

I will think about it a little more tomorrow. Thanks for your help so far! In the vertex shader I do nothing more than compute the shadow coordinate like this with the texture matrix:
ShadowCoord = gl_TextureMatrix[7] * gl_Vertex;

The full shader code is posted at the tutorial site!

Glad to read my tutorial helped someone :smiley: !

The shadowmapping works in 3 steps:

1/ Generate the shadowmap. Save the modelview/projection matrices and add the x * 0.5 + 0.5 bias.
2/ Render the scene normally
3/ While rendering use the saved matrix to lookup the shadowmap
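
In rough outline it looks like this (the names shadowFboId, setupMatricesForLight, setupMatricesForCamera and drawObjects below are just placeholders for your own code, not the exact names from the tutorial):

// 1/ Render the scene into the shadowmap from the light's point of view
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, shadowFboId);
glClear(GL_DEPTH_BUFFER_BIT);
setupMatricesForLight();          // light projection + modelview
drawObjects();
setTextureMatrix();               // save bias * lightProjection * lightModelview into unit 7
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// 2/ + 3/ Render normally from the camera; the shader looks up the
// shadowmap through the matrix saved in gl_TextureMatrix[7]
setupMatricesForCamera();
drawObjects();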

I'm not sure where you've added your scale and translate, but probably only in step 2.

What you need to fix, then, is the matrix you save in step 1: you need to apply the inverse transformation:

glScalef(1, -1, 1);
glTranslatef(0, h, 0);

so just after the line glMultMatrixd(modelView) in the function setTextureMatrix, add:

const GLdouble invSca[16] = {
1.0, 0.0, 0.0, 0.0,
0.0,-1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, 0.0, 0.0, 1.0};

glMultMatrixd (invSca);

const GLdouble invTrans[16] = {
1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
0.0, h, 0.0, 1.0};

glMultMatrixd (invTrans);
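
Or, if you prefer the helper calls over explicit arrays, I believe this is equivalent (untested; h is the same height value you use in your glTranslatef):

glScaled(1.0, -1.0, 1.0);   // same as invSca above
glTranslated(0.0, h, 0.0);  // same as invTrans above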

Ah, I fixed it, thank you so much for your help! For some reason it was not only that. The scaling also flipped the order of the faces, so to fix it completely I had to turn off face culling, and to minimize the moiré pattern I used this instead of offsetting the DistanceFromLight in the shader (because it also works without face culling):

  
glPolygonOffset(5.0f, 0.0f);
glEnable(GL_POLYGON_OFFSET_FILL);
drawStuff();
setTextureMatrix();
glDisable(GL_POLYGON_OFFSET_FILL);

Thanks again!

Edit:
I just checked a few more things and it's pretty weird. The problem seemed to be due only to the face culling. I didn't even need the extra transforms on the texture matrix… I am confused.

Negative scales are a source of many problems when done on the modelview matrix, as you seem to still do. Since the modelview matrix is used to transform normals (more precisely, the modelview inverse transpose), all normals are flipped and you have to change the cull face order accordingly.

If you were transforming the coordinates in window space, as I said, you would not have these problems.

Okay, thank you. I will try to figure out a way to do that. Right now I do the scaling on the projection matrix, and it works as long as I keep in mind that the back faces become the front faces (if that makes any sense).

I don't see any way of making increasing y go down other than a negative y scale on the projection or modelview matrix. If you have any other idea that does not involve a negative scale, please let me know :slight_smile:

Is it possible to see what you finally do in your code on the projection and modelview matrix stacks? I am pretty sure you are doing something wrong, because normals should not be flipped.

This is what I do:

I set up the light's and the camera's PoV matrices like this:


glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1, -1, 1);

gluPerspective(fieldOfView, aspectRatio, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(posCoord[0], posCoord[1], posCoord[2], eyeCoord[0], eyeCoord[1], eyeCoord[2], upVec[0], upVec[1], upVec[2]);


After setting up the light's PoV I save the matrix like this:


void setTextureMatrix()
{
	static double modelView[16];
	static double projection[16];
	
	// This matrix transforms every coordinate x, y, z:
	// x = x* 0.5 + 0.5 
	// y = y* 0.5 + 0.5 
	// z = z* 0.5 + 0.5 
	// Moving from unit cube [-1,1] to [0,1]  
	const GLdouble bias[16] = {	
		0.5, 0.0, 0.0, 0.0, 
		0.0, 0.5, 0.0, 0.0,
		0.0, 0.0, 0.5, 0.0,
		0.5, 0.5, 0.5, 1.0};
	
	// Grab modelview and projection matrices
	
	glGetDoublev(GL_MODELVIEW_MATRIX, modelView);

	glGetDoublev(GL_PROJECTION_MATRIX, projection);
	
	glMatrixMode(GL_TEXTURE);
	glActiveTextureARB(GL_TEXTURE7);
	
	glLoadIdentity();	
	
	glLoadMatrixd(bias);
	
	// Concatenate all matrices into one.
	glMultMatrixd (projection);
	
	glMultMatrixd (modelView);
	
	// Go back to normal matrix mode
	glMatrixMode(GL_MODELVIEW);
}

That works just fine as long as I don't use face culling in the same way as before (because it seems that GL_FRONT faces become the back faces and GL_BACK faces the front ones).

So the negative scale somehow mixes up the order of the vertices.

By the way, I am only rendering glutSpheres and glutBoxes.

Ah, I just checked, and face culling is actually based on the window-coordinate winding… so it happens after projection. I do not know why, but that explains why the negative scale on the projection matrix reverses the vertex order.
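
If you want to keep back-face culling enabled, one option (untested on my side) is to tell OpenGL that front faces are wound clockwise while the flipped projection is active, instead of disabling culling:

// Compensate for the winding reversal caused by glScalef(1, -1, 1)
// on the projection matrix
glFrontFace(GL_CW);        // default is GL_CCW
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
drawStuff();               // your drawing code
glFrontFace(GL_CCW);       // restore the default afterwards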