Is there a default data type that all OpenGL implementations use for the internal representation and calculation of coordinates?
This matters to me because if the internal data type is a 32-bit float, rounding errors will occur with large numbers. Knowing the internal representation, I can calculate how large my coordinates may get before rounding precision becomes critical.
Thanks
The OpenGL specification does not mandate any internal format, but as far as I know most OpenGL implementations use 32-bit floats for the internal representation of vertices, colors, and normals.