Getting the Vertices of a Planar Shadow Polygon

Hi,

I’ve implemented planar shadows and I’d like to calculate the four vertices of the shadow polygon.

I’ve created a shadowMatrix that contains the values that transform object vertices into points on the ground plane. The visual (OpenGL) part of this works as it should.

I need to calculate the shadow’s vertices myself. I thought I could do this by multiplying each of the object’s vertices by the shadowMatrix (isn’t this what OpenGL does?). Unfortunately, I’m not getting the correct result, especially for points on the ground plane, which should be coincident with the vertices that cast them. Here’s what I’ve tried:

gShadowPolygon[0] = (shadowMatrix[0] * gWallVertices[0]) + (shadowMatrix[4] * gWallVertices[1]) + (shadowMatrix[8] * gWallVertices[2]) + shadowMatrix[12]; // x
gShadowPolygon[1] = (shadowMatrix[1] * gWallVertices[0]) + (shadowMatrix[5] * gWallVertices[1]) + (shadowMatrix[9] * gWallVertices[2]) + shadowMatrix[13]; // y
gShadowPolygon[2] = (shadowMatrix[2] * gWallVertices[0]) + (shadowMatrix[6] * gWallVertices[1]) + (shadowMatrix[10] * gWallVertices[2]) + shadowMatrix[14]; // z

Rather than being coincident, the shadow’s vertex is being offset in the -x direction. What am I doing wrong here?

Cheers,

Chris

Your calculations are only right if there’s no scaling at all. You should divide all your vertex components by shadowMatrix[3] + shadowMatrix[7] + shadowMatrix[11] + shadowMatrix[15].

Thanks for your reply.

What I don’t get is where the scaling is happening.

When I load the shadow matrix, doesn’t it become the current matrix, which is then multiplied by the vertex?

I’m confused.

Cheers,

Chris

Here’s a demo of orthographic shadow projection:
http://www.sgi.com/products/software/opengl/examples/glut/advanced/source/projshadow.c

If you’re using a perspective projection (for point lights), it’s not much extra work to come up with a sensible projection matrix for that too.

Cheers

Hi,

I tried what jide suggested:

divisor = shadowMatrix[3] + shadowMatrix[7] + shadowMatrix[11] + shadowMatrix[15];

// Every vertex on the polygon casting the shadow must be multiplied by the shadowMatrix.
gShadowPolygon[0] = (shadowMatrix[0] * (gWallVertices[0] / divisor)) + (shadowMatrix[4] * ((gWallVertices[1] + 0.01f) / divisor)) + (shadowMatrix[8] * (gWallVertices[2] / divisor));

However, this actually makes the problem worse and offsets the vertex even further (in the -x direction).

Any suggestions?

Cheers,

Chris

Your divisor is wrong. It should be the dot product of the 4th row of the shadow matrix with the vertex position (Vx, Vy, Vz, 1).
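
For example, with the column-major float[16] layout OpenGL uses, that 4th row is elements 3, 7, 11 and 15, so the divisor would be computed roughly like this (Vx, Vy, Vz here just stand in for your vertex components):

/* W as the dot product of the matrix's 4th row with (Vx, Vy, Vz, 1). */
float w = (shadowMatrix[3] * Vx) + (shadowMatrix[7] * Vy) + (shadowMatrix[11] * Vz) + (shadowMatrix[15] * 1.0f);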

Cheers

You want to multiply each vertex by the shadow matrix, then divide the resulting vertex through by its W.

Note that if you’re using a matrix that was originally destined for OpenGL, you need to transpose it before multiplying vertexes on the right. You could equivalently multiply your vertexes on the left.
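
(“Multiplying on the right” means treating the vertex as a column vector in v’ = M · v; multiplying a row vector on the left, v’ = v · M, gives the same result as using the transposed matrix.) Here’s a minimal sketch of the whole operation, assuming shadowMatrix is the column-major float[16] that OpenGL stores; the function and variable names are just illustrative:

/* Project an object-space vertex (x, y, z, 1) through a column-major
   float[16] shadow matrix, then divide through by the resulting W. */
void projectVertex(const float m[16], const float v[3], float out[3])
{
    float x = (m[0] * v[0]) + (m[4] * v[1]) + (m[8]  * v[2]) + m[12];
    float y = (m[1] * v[0]) + (m[5] * v[1]) + (m[9]  * v[2]) + m[13];
    float z = (m[2] * v[0]) + (m[6] * v[1]) + (m[10] * v[2]) + m[14];
    float w = (m[3] * v[0]) + (m[7] * v[1]) + (m[11] * v[2]) + m[15];

    /* Homogeneous divide: without this, projective (point-light)
       shadow matrices produce offset results like the one you describe. */
    out[0] = x / w;
    out[1] = y / w;
    out[2] = z / w;
}

Note that the translation column (elements 12, 13, 14) is added in before the divide, not dropped.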

Cheers

Here’s a little song and dance on the subject, if you’d like some details:

http://www.hotlandingzone.com/projectiveshadow.html

Thanks guys, I’ll try implementing what people have suggested and see how I go.

Cheers,

Chris

Hi,

I’ve tried what was suggested.

Now two of the points (those in the ground plane) are in the correct locations; however, the other two are on the wrong side of the object casting the shadow.

It was mentioned that “if you’re using a matrix that was originally destined for OpenGL, you need to transpose it before multiplying vertexes on the right. You could equivalently multiply your vertexes on the left.”

I’m not sure what is meant by vertexes “on the right”.

Here’s what I’ve tried:

divisor = (shadowMatrix[3] * gWallVertices[6]) + (shadowMatrix[7] * gWallVertices[7]) + (shadowMatrix[11] * gWallVertices[8]) + (shadowMatrix[15] * 1);
gShadowPolygon[6] = (shadowMatrix[0] * (gWallVertices[6] / divisor)) + (shadowMatrix[4] * ((gWallVertices[7] + 0.01f) / divisor)) + (shadowMatrix[8] * (gWallVertices[8] / divisor));
gShadowPolygon[7] = (shadowMatrix[1] * (gWallVertices[6] / divisor)) + (shadowMatrix[5] * ((gWallVertices[7] + 0.01f) / divisor)) + (shadowMatrix[9] * (gWallVertices[8] / divisor));
gShadowPolygon[8] = (shadowMatrix[2] * (gWallVertices[6] / divisor)) + (shadowMatrix[6] * ((gWallVertices[7] + 0.01f) / divisor)) + (shadowMatrix[10] * (gWallVertices[8] / divisor));

How do I need to change the code to get the last two points in the correct position?

Cheers,

Chris

I suggest you familiarize yourself with basic matrix operations. It will make your life (indeed everyone else’s) a lot easier. I hear the math & algos forum is good for questions along these lines. Better still, get yourself a decent book on the subject (google for introductory matrix/linear algebra books) :wink:

This stuff is fundamental and without it you’re pretty much up the creek for any non-trivial graphics coding, and you’ll end up relying on the sympathy of others to plod through this remedial math with you.

It’s one thing if the math is incidental to the subject and new ground is covered, but this is basic matrix multiplication, and without it you don’t have a leg to stand on.

Cheers,

:slight_smile:

Thanks,

I’m already doing the math revision :slight_smile:

I’m confused about two things though:

  1. Why can’t I use the matrix that gets passed to OpenGL? After all, the shadow is being drawn correctly; it’s only when I try to do the same operation myself that I get the wrong result. Is this explained anywhere (Red Book?)?

  2. I’m still not sure what is meant by vertexes “on the right”. On the right of what?

Thanks for your help.

Cheers,

Chris