Internal data representation in OpenGL

10-05-2004, 08:41 AM

Is there a default data type that (all?) OpenGL implementations use for the internal representation and calculation of coordinates?
This is important to me because if the internal type is a 32-bit float, rounding errors will occur with large numbers. If I know the internal representation, I can calculate how large my coordinates may get before the rounding error becomes critical.
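
To make the concern concrete, here is a minimal, OpenGL-independent C sketch of the failure mode: a 32-bit IEEE float has a 24-bit significand, so above 2^24 (16,777,216) it can no longer represent every integer, and adding 1.0 is silently lost.

/* Demonstrates 32-bit float rounding at large magnitudes. */
#include <stdio.h>

int main(void)
{
    float big    = 16777216.0f;   /* 2^24, the last exactly-representable run of integers */
    float bigger = big + 1.0f;    /* 2^24 + 1 is not representable; rounds back to 2^24 */

    printf("%.1f + 1.0 = %.1f\n", big, bigger);  /* prints: 16777216.0 + 1.0 = 16777216.0 */
    return 0;
}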

10-06-2004, 01:28 AM
Hi !

The OpenGL spec does not specify any internal format, but as far as I know, most OpenGL implementations use 32-bit floats for the internal representation of vertices, colors, and normals.
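
If that is the case, you can estimate the worst-case rounding error at a given coordinate magnitude from the spacing between adjacent 32-bit floats. A minimal sketch (assuming the internal type really is an IEEE single-precision float, which, as noted, the spec does not guarantee):

#include <stdio.h>
#include <math.h>

/* Spacing between x and the next representable float above it:
   this is the worst-case quantization error at magnitude x. */
static float ulp_at(float x)
{
    return nextafterf(x, INFINITY) - x;
}

int main(void)
{
    float mags[] = { 1.0f, 1000.0f, 100000.0f, 10000000.0f };
    for (int i = 0; i < 4; ++i)
        printf("magnitude %10.0f -> spacing %g\n", mags[i], ulp_at(mags[i]));
    return 0;
}

At a magnitude of ten million the spacing between representable floats is already 1.0, so anything finer than whole units is lost there. (Compile with -lm on most Unix systems.)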


10-06-2004, 04:19 AM
I think colors are mostly stored at 8 bits per color component (apart from some very new texture formats).
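
Rather than guessing, you can query the per-channel bit depth of your framebuffer at runtime. A small sketch, assuming a current OpenGL context already exists (created via GLUT or similar):

#include <stdio.h>
#include <GL/gl.h>

/* Must be called with a current OpenGL context. */
void print_color_depth(void)
{
    GLint r, g, b, a;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    glGetIntegerv(GL_ALPHA_BITS, &a);
    printf("color depth: R%d G%d B%d A%d\n", r, g, b, a);
}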

10-06-2004, 05:20 AM
I guess you could check the gl.h header to see what the types are. They could differ between implementations (though I doubt it).
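
Note that gl.h only tells you the types used at the API boundary, not what the driver does internally. The typedefs look roughly like this in most gl.h headers (exact definitions may vary slightly per platform):

typedef float         GLfloat;   /* single precision, 32 bits */
typedef double        GLdouble;  /* double precision, 64 bits */
typedef int           GLint;     /* 32-bit signed integer */
typedef unsigned char GLubyte;   /* 8-bit unsigned, e.g. for colors */

These only constrain what you pass in; the implementation is still free to convert internally, and coordinates passed as GLdouble (e.g. via glVertex3d) are commonly converted down to single precision by the driver.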