What was the original reason for creating the OpenGL typedefs such as GLint and GLfloat?
Is GLint always assumed to correspond to int, or is it really supposed to be an int32_t? Is it so that GPU primitive types can stay fixed even if CPU primitive types change across architectures (32- vs 64-bit)?
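For context, the OpenGL spec pins GLint at exactly 32 bits, while C and C++ only guarantee that plain int is at least 16 bits. A minimal compile-time check of that size requirement might look like this (the typedef is a stand-in I invented so the sketch compiles without any GL headers):

```cpp
#include <climits>
#include <cstdint>

// Per the OpenGL spec, GLint must be exactly 32 bits, whereas the C/C++
// standards only guarantee that plain int is at least 16 bits. A stand-in
// typedef is used here so this sketch compiles without any GL headers.
typedef std::int32_t GLint_standin;

static_assert(sizeof(GLint_standin) * CHAR_BIT == 32,
              "GLint is required to be exactly 32 bits");
```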
The reason I ask is that when designing a library that uses OpenGL internally, I'd like to write classes that wrap OpenGL objects such as shaders. If a class contains, say, a GLint to hold a shader object, then anyone using my header also has to #include the OpenGL headers just to get the typedef. This is problematic if I'm using GLEW or some other extension-loading mechanism, since it can cause name conflicts: GLEW chokes if you #include gl.h before it. It also slows down compilation, because the GL headers are quite large and all you really need are a few typedefs.
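To make the leak concrete, here is a hypothetical public header of the kind I mean (the `my::Shader` name is invented for illustration). In the real header the typedef would come from glew.h or gl.h, which is exactly the dependency being discussed; a stand-in typedef is declared here so the sketch stands on its own:

```cpp
// Shader.hpp -- hypothetical public header of the kind described.
// In a real library this typedef would be pulled in via <GL/glew.h> or
// gl.h; that include is the implementation detail that leaks to every
// consumer of the header. A stand-in is used so this sketch compiles.
typedef unsigned int GLuint;   // normally supplied by the GL headers

namespace my {
class Shader {
public:
    explicit Shader(GLuint handle) : handle_(handle) {}
    GLuint handle() const { return handle_; }
private:
    GLuint handle_;  // the GL object name; this member type forces the dependency
};
} // namespace my
```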
So here are a couple possible solutions.
1) #include glew.h, gl.h, or whatever extension-loading header I'm using in all of my public library headers, exposing that implementation detail to library users. I don't like this at all, because if I later switch from GLEW to glLoadGen or something else, library users will be affected. Users should not care which extension-loading mechanism is used.
2) Assume GLint is always int (and likewise for the other types) and just use native types in my classes.
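If option 2 is chosen, the assumption can at least be verified at compile time in the one implementation file that does include the real GL headers; a sketch of the guard (the typedef here stands in for the one gl.h would provide):

```cpp
#include <type_traits>

// Stand-in for the typedef normally provided by gl.h / glew.h, so this
// sketch is self-contained; in a real .cpp this line would be replaced
// by including the actual loader header.
typedef int GLint;

// Placed in exactly one .cpp of the library, next to the real GL include:
// if a platform ever defines GLint as something other than int, the build
// fails loudly instead of the public API silently using the wrong type.
static_assert(std::is_same<GLint, int>::value,
              "public API assumes GLint == int");
```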
3) Create a minimal GL header, distributed with the library, that defines only these typedefs.
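Option 3 would amount to shipping something like the Khronos khrplatform.h approach in miniature: a forward header containing only the fixed-width typedefs the public API needs. A sketch, assuming the sizes the GL spec mandates; note that C++ permits an identical redeclaration of a typedef, so this can coexist with the real GL headers provided the underlying types match on the target platform (they do on mainstream desktop targets, where std::int32_t is int, but that is the main risk of this approach):

```cpp
// gl_types_fwd.hpp -- hypothetical forward header shipped with the library.
// Only the typedefs the public API needs, mirroring the widths the OpenGL
// spec mandates. If the real GL headers are later included in the same
// translation unit, the redeclarations are legal in C++ as long as the
// underlying types are identical.
#include <cstdint>

typedef std::int32_t  GLint;     // spec: exactly 32-bit signed
typedef std::uint32_t GLuint;    // spec: exactly 32-bit unsigned
typedef float         GLfloat;   // spec: 32-bit IEEE float
typedef std::uint8_t  GLboolean; // spec: 1-byte boolean
```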
How do you like to handle this issue?