The OpenGL typedefs

What was the original reason for creating the OpenGL typedefs such as GLint and GLfloat?
Is GLint always assumed to correspond to int, or is it really supposed to be an int32_t? Is it so that GPU primitive types can stay fixed even if CPU primitive types change across architectures (32-bit vs 64-bit)?

The reason I ask is that when designing a library that uses OpenGL internally, I’d like to design classes that wrap OpenGL objects such as shaders. If a class contains, say, a GLint to hold a shader object, then anyone using my header has to #include the OpenGL headers as well just to get the typedef. This can be problematic if I’m using GLEW or some other extension-loading mechanism, since it can cause name conflicts; GLEW chokes if you #include gl.h before it. It also slows down compilation, because the GL headers are pretty large and all you really want are a few typedefs.
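
To make the problem concrete, a public header in such a library might look roughly like this (the file and class names are made up for illustration):

```cpp
// shader.h -- the problematic version of a public header (file and class
// names are made up): holding a GLint member forces every user of this
// header to pull in the GL headers as well.
#pragma once
#include <GL/gl.h>   // or glew.h -- either way, the choice leaks to users

class Shader {
public:
    GLint handle() const { return handle_; }
private:
    GLint handle_ = 0;
};
```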

So here are a couple possible solutions.

  1. #include GLEW or gl.h or whatever extension-loading mechanism I’m using in all of my public library headers, exposing that implementation detail to library users. I don’t like this at all, because if I later switch from GLEW to glLoadGen or something else, library users will be affected. Users should not have to care what extension-loading mechanism is used.
  2. Assume GLint is always int (likewise for other types) and just use native types in my classes.
  3. Create a minimal GL header, distributed with the library, that defines only these typedefs (a rough sketch of this is shown below).
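
For option 3, a minimal typedef-only header might look roughly like this sketch. The header name and the exact underlying types are assumptions on my part; they would have to match the platform’s gl.h exactly, since a typedef may only be repeated for the same type:

```cpp
// my_gl_types.h -- hypothetical typedef-only header shipped with the library.
// The widths below match the usual desktop gl.h definitions; a static_assert
// in one .cpp file that *does* include the real GL header can keep them in sync.
#pragma once
#include <cstdint>

typedef std::int32_t  GLint;
typedef std::uint32_t GLuint;
typedef std::uint32_t GLenum;
typedef std::int32_t  GLsizei;
typedef std::uint8_t  GLboolean;
typedef float         GLfloat;
```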

How do you like to handle this issue?

Strictly speaking, GLint etc. are distinct types and should always be used; in practice their typedefs have been unchanged for a long time and are unlikely to change. A wrapper class can always protect you if you are concerned, since you can, for example, pass an “int” and convert it to GLint inside the wrapper. While the two are the same this is a no-op, and if they ever change, your code will still work, unless in the very unlikely event that the number of bits in GLint is reduced.
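
As a rough sketch of that wrapper idea (the class name, file layout and the GLEW include are illustrative assumptions, not from the original post):

```cpp
// uniform.h -- public header: plain int in the interface, no GL headers needed.
class Uniform {
public:
    explicit Uniform(int location) : location_(location) {}
    int location() const { return location_; }
private:
    int location_;
};

// uniform.cpp -- implementation file; the GL/loader headers stay private here.
#include <GL/glew.h>

void setInt(const Uniform& u, int value)
{
    // While GLint is int this cast is a no-op; if the typedef ever changed,
    // the wrapper would still compile and convert the value correctly.
    glUniform1i(static_cast<GLint>(u.location()), static_cast<GLint>(value));
}
```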

The reason people use typedefs is so the same code can compile on other platforms. For example, int on a 64-bit platform could be 64 bits. Instead of changing every instance of int in your code to a 32-bit version, the typedef takes care of it.
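
For illustration, the width decision then lives in a single typedef (a minimal sketch; the setUniform declaration is made up):

```cpp
// The width decision lives in one place, so a change there propagates to
// every use site. (setUniform is a hypothetical declaration.)
typedef int GLint;                              // as in a typical desktop gl.h

void setUniform(GLint location, GLint value);   // picks up any change to GLint
```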

No matter what kind of application you write, it isn’t a good idea to use int, long and such, especially if you want your data to be exactly a 16-bit integer or exactly a 32-bit integer and so on.
The sizes of int and unsigned int can differ depending on what platform you are on, and this can be a problem, especially if you are using OpenGL.
It is best to either use the GL types or define your own types.

In the case of float and double, these are not likely to change; however, I still define my own types.
Oh yeah, there was the good old 80-bit long double on my old Borland C++ compiler. I guess that one died off.
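
As a small illustration of using exact-width types for data shared with the GPU (the struct and field names are made up for this sketch):

```cpp
#include <cstdint>

// Vertex data uploaded to a GL buffer: the layout should not depend on
// whatever width int or long happens to be on the current platform.
struct Vertex {
    float         position[3];   // GLfloat is plain float on desktop gl.h
    std::uint32_t packedColor;   // exactly 32 bits on every platform
    std::int16_t  texcoord[2];   // exactly 16 bits, e.g. for GL_SHORT attributes
};
```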

Regal.h can be used with REGAL_NO_ENUM=1, REGAL_NO_TYPEDEF=1 and REGAL_NO_DECLARATION=1 to limit things to the typedefs.
It’s a big header, but it avoids pulling in windows.h and so on.
And it ought to remain interchangeable with gl.h, GLEW, etc.

https://github.com/p3/regal

  • Nigel