Float Equality

Hi,

I recently implemented depth peeling in my program, and I seem to be running into problems with floating-point equality in my fragment shader.
First, I know you should not compare floats for equality directly but should use an error margin, because of floating-point inaccuracy. However, in the article about dual depth peeling that I read, a depth value fetched from a 32-bit float texture is compared directly with == against every incoming fragment’s depth value. The author says this is safe because both values are 32-bit floats (and if the value fetched from the texture was previously computed from the same fragment, the two should compare exactly equal).
My first question is: In the described case, can I rely on these two floats to compare exactly equal on every OpenGL implementation?
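
For reference, the comparison I mean looks roughly like this (a sketch only; the sampler name and the surrounding logic are mine, not taken from the article):

#version 130
// The previous pass wrote gl_FragCoord.z into a 32-bit float texture.
// This pass fetches that value and compares it to the incoming fragment’s
// depth with ==, which the article treats as safe because both sides are
// 32-bit floats.
uniform sampler2D prevDepthTex;
out vec4 fragColor;

void main()
{
    float prevDepth = texelFetch(prevDepthTex, ivec2(gl_FragCoord.xy), 0).r;
    if (prevDepth == gl_FragCoord.z)
        discard;               // this fragment was already peeled in an earlier pass
    fragColor = vec4(1.0);     // placeholder shading
}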

Another problem is that I want to port my depth peeling implementation to OpenGL versions that don’t support floating-point depth textures (or float textures in general). I want to do this by using GL_DEPTH_COMPONENT32 or similar fixed-point texture formats. In that case I assume equality testing is not reliable, because the depth value is computed as a float by the fixed-function part of the pipeline and then converted to a normalized integer, which changes the value slightly.
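
To make that concrete, the conversion I am worried about is roughly the following (my own illustration, not code from the article; I’m using 24 bits as an example):

// A 24-bit normalized depth buffer effectively stores round(z * (2^24 - 1))
// and hands back that value divided by (2^24 - 1) when it is read again.
float quantize24(float z)
{
    const float maxVal = 16777215.0;          // 2^24 - 1
    return floor(z * maxVal + 0.5) / maxVal;  // round to the nearest representable step
}
// quantize24(gl_FragCoord.z) is generally not bit-identical to gl_FragCoord.z,
// which is why I would expect == against such a texture to fail.
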
However, I tried equality testing anyway (even with GL_DEPTH_COMPONENT24 and 16), and on most implementations I tested it worked perfectly. The only exception is the Gallium3D Radeon driver on X11/Linux, where the check only works with a small error margin. It might just be luck that it works on so many implementations (is it?), but what really confuses me is that when I add the following statement to my fragment shader:

gl_FragDepth = gl_FragCoord.z;

the comparison magically starts to work (to be exact, I use <= instead of ==, and it is the <= condition that starts to succeed). If I understand correctly, writing gl_FragCoord.z to gl_FragDepth is exactly what OpenGL does anyway when this statement is missing, so why does it “work” only when I add it explicitly?
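
For completeness, here is a simplified sketch of what my peeling shader does (the sampler name and the epsilon value are placeholders, and I’ve left out the actual shading):

#version 130
uniform sampler2D prevDepthTex;      // depth of the previously peeled layer
out vec4 fragColor;

const float EPSILON = 1e-7;          // placeholder for the small error margin

void main()
{
    float prevDepth = texelFetch(prevDepthTex, ivec2(gl_FragCoord.xy), 0).r;

    // Peel away everything at or in front of the previously peeled layer.
    if (gl_FragCoord.z <= prevDepth)                // exact comparison
    //if (gl_FragCoord.z <= prevDepth + EPSILON)    // variant I need on the Gallium3D driver
        discard;

    gl_FragDepth = gl_FragCoord.z;   // the statement that makes the exact test work for me
    fragColor = vec4(1.0);           // placeholder shading
}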

Thanks in advance!