High precision OpenGL

Hello. I have read that the new graphics cards compute internally at better precision than 8-bit integer (they can even do 32-bit floating-point operations). I have also found that DirectX allows setting the surface format to 32-bit float precision (D3DFMT_A32B32G32R32F).

My question is how to get this in OpenGL, and whether it is possible to run vertex or pixel shader programs at this precision AND retrieve the result at the SAME precision.

Thanks.

Look at the floating-point buffer extensions (NV_float_buffer or ATI_texture_float, if I remember right).
You can then retrieve the texture data via glGetTexImage as floats.
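A minimal sketch of that round trip (assuming your headers expose the GL_RGBA_FLOAT32_ATI token from ATI_texture_float; the texture size and the test values are just made up for illustration):

#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_RGBA_FLOAT32_ATI */

void roundtrip_float_texture(void)
{
    const int w = 256, h = 256;
    GLfloat *src  = (GLfloat *)malloc(w * h * 4 * sizeof(GLfloat));
    GLfloat *back = (GLfloat *)malloc(w * h * 4 * sizeof(GLfloat));
    GLuint tex;

    /* fill the source with values well outside [0, 1] to show nothing gets clamped */
    for (int i = 0; i < w * h * 4; ++i)
        src[i] = (GLfloat)i * 1000.0f;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* ask for 32-bit floats per channel as the internal format */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                 w, h, 0, GL_RGBA, GL_FLOAT, src);

    /* read the texels back as floats -- no 8-bit quantisation on the way out */
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, back);

    free(src);
    free(back);
}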

ARB_fragment_program has specific floating-point requirements. It doesn't go so far as to mandate an IEEE 32-bit float, but it does require support for values outside the range [-1, 1], and those values can be reasonably large.
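For what it's worth, here is a tiny fragment program that deliberately produces values far outside [-1, 1] (the 1000.0 scale factor is arbitrary, purely for illustration); whether those values reach the framebuffer unclamped still depends on rendering into one of the float buffer formats mentioned above:

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_fragment_program tokens; entry points come from your extension loader */

static const char *fp_src =
    "!!ARBfp1.0\n"
    "TEMP big;\n"
    /* scale the interpolated texcoord well past 1.0 */
    "MUL big, fragment.texcoord[0], {1000.0, 1000.0, 1000.0, 1000.0};\n"
    "MOV result.color, big;\n"
    "END\n";

void load_fragment_program(void)
{
    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}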