Nvidia depth textures & fragment program problems

I am currently trying to implement order-independent transparency as described in the paper by Cass Everitt. However, rather than enabling the ARB_shadow comparison, I planned to do the comparison manually in an ARB_fragment_program, by reading from a depth texture and comparing the result with fragment.position.z.
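The manual test I had in mind looks roughly like this (a sketch only; texture unit 0 is assumed to hold the depth texture from the previous peeling pass, and texcoord 0 the lookup coordinates):

```
!!ARBfp1.0
# Sketch: manual second depth test against the previously peeled layer.
TEMP stored, diff;
TEX stored, fragment.texcoord[0], texture[0], 2D;  # read stored depth
SUB diff.x, fragment.position.z, stored.x;         # z - stored depth
KIL diff.x;                                        # discard if z < stored
MOV result.color, fragment.color;                  # pass colour through
END
```

This is where the undefined sample values bite: with garbage in `stored`, the SUB/KIL pair discards more or less at random.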

This is where my problems began. When TEXTURE_COMPARE_MODE_ARB is NONE, the value retrieved by sampling a depth texture in a fragment program seems to be undefined (GeForce FX 5200, 45.23 Detonators). So doing the comparison manually will not work. The same undefined values are also retrieved when calling glGetTexImage on a depth texture.
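For reference, the readback I tried looks like this (a sketch; a current GL context is assumed and the identifiers are mine):

```c
/* Sketch: reading a depth texture back to the CPU. Requires a live GL
   context; depthTex, width and height are assumed names. */
GLfloat *depths = malloc(width * height * sizeof(GLfloat));
glBindTexture(GL_TEXTURE_2D, depthTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depths);
/* On the 45.23 drivers the values that come back here are garbage. */
```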

The reason I wanted to do the comparison manually is because, as stated in the ARB_fragment_program spec:

The texture comparison introduced by ARB_shadow can be expressed in 
terms of a fragment program, and in fact use the same internal 
resources on some implementations.  Therefore, if fragment program 
mode is enabled, the GL behaves as if TEXTURE_COMPARE_MODE_ARB is 
NONE.

Tom Nuydens' demo of this technique on www.delphi3d.net sets TEXTURE_COMPARE_MODE_ARB to COMPARE_R_TO_TEXTURE_ARB with a compare function of GL_GREATER, and uses the result of the comparison in the fragment program. This demo works on nvidia hardware, so the drivers do not comply with the specification.

The NV_fragment_program spec, on the other hand, says that the comparison result should be used within the fragment program. The drivers appear to apply the NV_fragment_program behaviour even when an ARB_fragment_program is bound.

Tom's demo will not run on ATI hardware. This is probably because ATI's drivers follow the spec and disable the comparison when fragment program mode is enabled.