Hello. I have read that the new graphics cards compute internally at higher precision than 8-bit integer (some can even do 32-bit floating-point operations). I have also found that Direct3D allows setting the render-target format to 32-bit float precision (D3DFMT_A32B32G32R32F).
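For reference, this is roughly what I mean on the Direct3D 9 side (a simplified sketch; device creation and error checking omitted):

    // Create a 128-bit (4 x 32-bit float) render-target texture.
    IDirect3DTexture9 *rt = NULL;
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A32B32G32R32F, D3DPOOL_DEFAULT,
                          &rt, NULL);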
My question is: how do I get the same thing in OpenGL, and is it possible to run vertex or pixel shader programs at this precision AND read the result back at the SAME precision?
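In other words, something along these lines is what I am hoping exists; the internal-format token here is just my guess at the OpenGL equivalent (it may be a vendor or extension token instead):

    // Guesswork: a float texture as the render target...
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    // ... bind it as the render target, run the shader ...
    // ... then read the results back still as 32-bit floats:
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, results);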
Thanks.