OpenGL.org (part of the Khronos Group)

Thread: The opengl typedefs

  1. #1
    Junior Member Newbie · Join Date: Apr 2011 · Location: Ma · Posts: 7

    The opengl typedefs

    What was the original reason for creating the OpenGL typedefs such as GLint and GLfloat?
    Is GLint always assumed to correspond to int, or is it really supposed to be an int32_t? Is it so that GL primitive types can remain fixed even if CPU primitive types change across architectures (32-bit vs 64-bit)?

    The reason I ask is that when designing a library that uses OpenGL internally, I'd like to write classes that wrap OpenGL objects such as shaders. If a class contains, say, a GLint to hold a shader object, then anyone using my header has to #include the OpenGL headers as well just to get the typedef. This is problematic if I'm using GLEW or some other extension mechanism, since it can cause name conflicts: GLEW chokes if you #include gl.h before it. It also slows down compilation, because the GL headers are pretty large and all you really want are a few typedefs.

    So here are a few possible solutions.
    1) #include GLEW, gl.h, or whatever extension-loading mechanism I'm using in all of my public library headers, exposing that implementation detail to library users. I don't like this at all, because if I later switch from GLEW to glLoadGen or something else, library users will be affected. Users should not care which extension-loading mechanism is used.
    2) Assume GLint is always int (and likewise for the other types) and just use native types in my classes.
    3) Ship a minimal GL header with the library that defines only these typedefs.

    How do you like to handle this issue?
    Last edited by fmatthew5876; 12-28-2012 at 10:58 AM.

  2. #2
    Senior Member OpenGL Pro · Join Date: Jan 2012 · Location: Australia · Posts: 1,106
    Strictly speaking, GLint etc. are distinct types and should always be used; in practice their typedefs have been unchanged for a long time and are unlikely to change. A wrapper class can always protect you if you are concerned: for example, accept an "int" at the public interface and convert it to GLint inside the wrapper. While the two types are the same this is a no-op, and if they ever diverge your code will still work, barring the very unlikely event that the number of bits in GLint is reduced.

  3. #3
    Junior Member Regular Contributor · Join Date: Dec 2007 · Posts: 249
    The reason people use typedefs is so the same code compiles correctly on other platforms. For example, int could be 64 bits on a 64-bit platform. Instead of changing every instance of int in your code to a 32-bit type, the typedef takes care of it in one place.

  4. #4
    Super Moderator OpenGL Guru · Join Date: Feb 2000 · Location: Montreal, Canada · Posts: 4,264
    No matter what kind of application you write, it isn't a good idea to use int, long, and the like, especially if you want your data to be exactly a 16-bit integer, exactly a 32-bit integer, and so on.
    int and unsigned int in particular tend to vary across platforms, and that can be a problem, especially when you are using OpenGL.
    It is best either to use the GL types or to define your own types.

    In the case of float and double, these are not likely to change; however, I still define my own types.
    Oh yeah, there was the good old 80-bit long double on my old Borland C++ compiler. I guess that one died off.
    ------------------------------
    Sig: http://glhlib.sourceforge.net
    an open source GLU replacement library. Much more modern than GLU.
    float matrix[16], inverse_matrix[16];
    glhLoadIdentityf2(matrix);
    glhTranslatef2(matrix, 0.0, 0.0, 5.0);
    glhRotateAboutXf2(matrix, angleInRadians);
    glhScalef2(matrix, 1.0, 1.0, -1.0);
    glhQuickInvertMatrixf2(matrix, inverse_matrix);
    glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
    glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

  5. #5
    Intern Contributor (nigels) · Join Date: Apr 2000 · Location: Texas, USA · Posts: 87
    Regal.h can be used with REGAL_NO_ENUM=1, REGAL_NO_TYPEDEF=1, and REGAL_NO_DECLARATION=1 defined to strip it down to just the parts you need.
    It's a big header, but it avoids pulling in windows.h and so on.
    And it ought to remain interchangeable with gl.h, GLEW, etc.

    https://github.com/p3/regal

    - Nigel
    ---
    Regal - as OpenGL ought to be
