OK, so I'm trying to implement a basic depth bounds test to offload my pixel shaders a bit; however, I'm stuck trying to compute the required znear and zfar arguments.
As far as I know, the default perspective projection matrix takes the z coordinate in eye space and transforms it into clip space according to the specified znear and zfar clip planes. After the perspective divide (by w), we end up with a device-space z coordinate in the interval [-1, 1], which is then scaled and biased to fit into the [0, 1] interval (the actual z-buffer value is then obtained by multiplying by 2^precision_bits - 1).
That's what gluProject should do (I checked the Mesa implementation). So, if I haven't totally misunderstood the above: if a point in eye space is between the znear and zfar planes, the obtained device z coordinate should be in the interval [0, 1]; anything in front of znear should yield z < 0, and anything behind zfar should yield z > 1.
What amazes me, however, is that this apparently isn't true. I have tested it a dozen times. What I'm doing is: I calculate two bounding points for a given light (a "near" and a "far" one) by subtracting/adding a scaled "forward" vector from/to the light position. The forward vector is obtained either from the modelview matrix (elements 2, 6 and 10) or by my own routines, with the same results, and it's scaled by the light radius.
So far so good, but I get totally strange results. The device-space z coordinate of a point *in front of* the znear plane is greater than 1, while points between znear and zfar end up just below 1. I am *sure* there isn't a bug in my code (i.e. the world-space "near" and "far" points are correct).
Any help is appreciated, thanks!