Yep, but not the light modelview, just the light viewing transform (we want world-to-lt-eye, not object-to-lt-eye).
And also, just to be perfectly clear, the order in which you listed the matrices is the reverse of the application order relative to the vector. You want matrix (operator) application order to be the following:
[ol][li]camera inverse viewing transform (M1) – takes you from camera eye to world[/li][li]light-space viewing transform (M2) – takes you from world to light eye[/li][li]light-space projection transform (M3) – takes you from light eye to light clip[/li][/ol]Since OpenGL follows the column-major operator-on-the-left convention, that means in OpenGL notation you want (M3*M2*M1)*v1 = v2, where v1 is a camera eye-space vector and v2 is a light clip-space vector. So the matrix you want to pass into your shader is M = M3*M2*M1.
But I think you were compensating for this apparent order reversal due to operator order convention in your reply, which is why I said you had it right, except for the modelview thing.
Still, I see no reason to “enable depth comparisons (TEXTURE_COMPARE_MODE == COMPARE_R_TO_TEXTURE)”
I mean how does this affect my shader?
It determines whether you use built-in hardware on the GPU to do the depth comparison “outside” your shader (and optionally multiple lookups with filtering “outside” your shader – aka PCF), OR you have to fetch raw depth values in your shader and do the comparisons/filtering yourself.
The former is faster and sufficient if all you need is a single binary depth compare or basic PCF shadow map lookups. The reason is that (on NVidia hardware at least) there is dedicated logic on the GPU to do these depth comparisons (and filtering) if you want it.
The way it affects your shader is that if you enable depth comparisons, the result you get back from your texture lookup is the result of the depth comparison, NOT the raw depth value from the depth texture.
If you “do” want hardware depth comparisons: use a Shadow sampler (e.g. sampler2DShadow), enable depth comparisons on the texture, and use a shadow texture sampling function (shadow2D*) in your shader if you’re on GLSL 1.2 or earlier; in GLSL 1.3+ just use texture*.
If you “do not” want hardware depth comparisons: use a non-Shadow sampler (e.g. sampler2D), disable hardware depth comparisons, and use a non-shadow texture sampling function (texture2D*) if you’re on GLSL 1.2 or earlier; again, in GLSL 1.3+ just use texture*.
Note that you can use a depth texture in either case. And for the latter case, you can use pretty much any other texture format you want as well.
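For the hardware-compare case, a minimal fragment shader sketch (GLSL 1.2) might look like the following. It assumes the application has set TEXTURE_COMPARE_MODE = COMPARE_R_TO_TEXTURE on the depth texture; the uniform and varying names are placeholders:

```glsl
// GLSL 1.2 fragment shader sketch: hardware depth-compare path.
uniform sampler2DShadow shadowMap;  // depth texture, compare mode ENABLED
varying vec4 lightClipPos;          // light clip-space position from vertex shader

void main()
{
    // The hardware divides by .w, compares the resulting depth against the
    // fetched texel, and returns the comparison result (0.0 or 1.0, or a
    // filtered in-between value when LINEAR filtering is enabled, i.e. PCF).
    float lit = shadow2DProj(shadowMap, lightClipPos).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```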
I just want to know how to compare the 2 depths in the shader.
In the fragment shader of the spot light I have SPos which is the position in camera space of the fragment. I imagine its Z component should be the depth. How do I calculate the depth that I have to compare it to?
Ok, so you want to do your own depth comparisons. So use a sampler2D, not a sampler2DShadow. Also, don’t enable depth comparisons on that texture. Then, when you do a texture lookup, you’ll get the light clip-space depth value associated with that position.
Now you need to get your fragment position in light clip-space in the fragment shader so you can do that texture lookup and depth comparison. There are lots of ways to do that. One is to pass in a varying from the vertex shader which is your light clip-space vertex position interpolated across the polygon. To get it, in the vertex shader, you first compute the vertex position in camera eye-space (gl_ModelViewMatrix * gl_Vertex). Then you multiply it by the “M” we discussed above which you passed in (i.e. M = M3*M2*M1) to transform that camera eye-space position to a light clip-space position. Then you let the GPU interpolate that position across the polygon.
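A vertex shader sketch (GLSL 1.2) of those steps; “cameraEyeToLightClip” is just a placeholder name for the uniform holding M = M3*M2*M1:

```glsl
// GLSL 1.2 vertex shader sketch: compute light clip-space position as a varying.
uniform mat4 cameraEyeToLightClip;  // M = M3*M2*M1, uploaded by the application
varying vec4 lightClipPos;

void main()
{
    vec4 eyePos  = gl_ModelViewMatrix * gl_Vertex;  // camera eye-space position
    lightClipPos = cameraEyeToLightClip * eyePos;   // light clip-space position,
                                                    // interpolated across the polygon
    gl_Position  = ftransform();
}
```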
I mean I know I have to use shadow2D or something similar but just can’t get it.
No, if you truly want to manually do your own depth comparison in your fragment shader (e.g. mydepth < shadowmapdepth test, or something else more slick like VSMs), then you don’t use shadow2D. You’d use texture2D (or more likely texture2Dproj, since you’re doing a shadow map from a positional light source, which uses a perspective projection, and thus requires a perspective divide).
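A minimal sketch of that manual comparison (GLSL 1.2). It assumes the depth texture is bound to a sampler2D with compare mode disabled, and that lightClipPos has already been scaled/biased into [0,1] texture space (typically by folding a 0.5 scale/bias matrix into M on the application side); names are placeholders:

```glsl
// GLSL 1.2 fragment shader sketch: manual depth-compare path.
uniform sampler2D shadowMap;  // depth texture, compare mode DISABLED
varying vec4 lightClipPos;    // light clip-space position from vertex shader

void main()
{
    // texture2DProj does the perspective divide (.xy / .w) before the lookup,
    // returning the raw depth stored in the shadow map.
    float shadowMapDepth = texture2DProj(shadowMap, lightClipPos).r;

    // Our fragment's depth in the light's space, after the same divide.
    float myDepth = lightClipPos.z / lightClipPos.w;

    // The manual comparison: lit if we're no farther than the stored occluder.
    float lit = (myDepth <= shadowMapDepth) ? 1.0 : 0.0;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```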
Note that only prior to GLSL 1.3 were there separate functions for depth-compare vs. non-depth-compare lookups (e.g. shadow2D vs. texture2D). In GLSL 1.3, they realized that this and other things were causing a needless explosion in the number of texture sampling function names, so they dropped the type information from the names, and both of the above mapped to a simply-named “texture” function.
So for instance shadow2DProj and texture2DProj both mapped to textureProj in GLSL 1.3.
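So in GLSL 1.30 the same two lookups just overload one name, selected by the sampler type (a sketch; the variable names are placeholders):

```glsl
// GLSL 1.30 sketch: one overloaded name, behavior selected by sampler type.
#version 130
uniform sampler2D       depthTex;   // raw depth lookup (compare mode off)
uniform sampler2DShadow shadowTex;  // hardware depth-compare lookup (compare mode on)
in  vec4 lightClipPos;
out vec4 fragColor;

void main()
{
    float rawDepth = textureProj(depthTex,  lightClipPos).r; // was texture2DProj
    float lit      = textureProj(shadowTex, lightClipPos);   // was shadow2DProj
    fragColor = vec4(vec3(lit), rawDepth);  // placeholder use of both results
}
```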
And there was a tutorial at ziggyware but the website is now dead and I can’t even find its archive on www.archive.org
Try these. They aren’t perfect, but they’re a good place to start, and they do use GLSL:
The latest Orange Book also has some good stuff too.