Why should I use GLfloat instead of float?

Hi,

OpenGL has several types, such as GLfloat, GLint, …

Is there a good reason to use them instead of float, int, … ?

Why?

From the spec (Table 2.2):

GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.

ptrbits is the number of bits required to represent a pointer type; in other words, types intptr and sizeiptr must be sufficiently large as to store any address.
So you should stick with GL types.

The reason OpenGL defines those types is to tell implementors of the specification the minimum number of bits needed to represent each value (see Table 2.2 in the spec). If you care about type safety, you should use the OpenGL types consistently. It also means that if you ever have to compile your program on another platform, you know the type will still have the appropriate width and will not have silently changed.
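For illustration, on a common 32-bit desktop implementation the mappings in gl.h often look something like this (these exact typedefs are an assumption, not something the spec requires; only the minimum widths are guaranteed):

```c
/* Illustrative only: how a typical desktop gl.h might map GL types.
 * The spec only guarantees minimum bit widths, so another platform's
 * header is free to pick different underlying C types. */
typedef float          GLfloat;   /* >= 32-bit floating point    */
typedef double         GLdouble;  /* >= 64-bit floating point    */
typedef int            GLint;     /* >= 32-bit signed integer    */
typedef unsigned int   GLuint;    /* >= 32-bit unsigned integer  */
typedef short          GLshort;   /* >= 16-bit signed integer    */
typedef unsigned char  GLubyte;   /* >= 8-bit unsigned integer   */
```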

For example, what happens if you write your program with plain ints? On a typical PC that means 32 bits, but if you port to another platform whose C compiler treats int as 16 bits, you have a problem.

I think it is wise to use them mainly to ensure that, no matter what platform or compiler you use, you are working with the right types, especially since you may be handing OpenGL large chunks of memory filled with this data, and you don't want errors caused by mismatched sizes.

Be that as it may, if you don't care about cross-platform issues and you know that GLint in your implementation is just a "typedef int GLint;", then I would say you're most likely OK.
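If you do want to rely on that, a cheap compile-time check (a sketch using the old negative-array-size trick; adjust it to whatever your code actually assumes) will catch a platform where the assumption breaks:

```c
#include <GL/gl.h>          /* <OpenGL/gl.h> on Mac OS X */

/* The array size is -1 (and the build fails) on any platform where
 * GLint is not the same width as a plain int. */
typedef char assert_glint_matches_int[(sizeof(GLint) == sizeof(int)) ? 1 : -1];
```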

–microwerx

On Mac OS X, GLint/GLuint are typedefs of long and unsigned long, so lots of code written for Windows/Linux, where they're typedefs of int/unsigned int, doesn't actually compile.
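As a hypothetical example of the kind of breakage meant here, code like the following tends to compile cleanly where GLint is a typedef of int but fail (incompatible pointer types) where it is a typedef of long:

```c
#include <GL/gl.h>          /* <OpenGL/gl.h> on Mac OS X */

void query_viewport(void)
{
    int vp[4];              /* plain int instead of GLint */

    /* glGetIntegerv expects a GLint*.  Where GLint is long, passing
     * an int* is an incompatible pointer type and the build breaks. */
    glGetIntegerv(GL_VIEWPORT, vp);
}
```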

Bottom line: don't assume that the type with the GL prefix and the type without are the same.

Ok. Thanks.

Your compiler might consider int to be 32-bit while long is 64-bit. With some compilers, long is 32-bit.
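A minimal sketch for checking what your own compiler does (the gl.h include path is platform-dependent):

```c
#include <stdio.h>
#include <GL/gl.h>          /* <OpenGL/gl.h> on Mac OS X */

int main(void)
{
    /* Report the sizes the plain C types and the GL types actually
     * have with this compiler/platform. */
    printf("int     : %lu bytes\n", (unsigned long)sizeof(int));
    printf("long    : %lu bytes\n", (unsigned long)sizeof(long));
    printf("GLint   : %lu bytes\n", (unsigned long)sizeof(GLint));
    printf("GLuint  : %lu bytes\n", (unsigned long)sizeof(GLuint));
    printf("GLfloat : %lu bytes\n", (unsigned long)sizeof(GLfloat));
    return 0;
}
```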

I think that on both 32-bit and 64-bit Windows, GLuint and GLint are 32-bit. 64-bit integers are rather useless for GL anyway, because GPUs don't yet handle 64-bit integers. I'm sure someone will chime in with a counterexample now :slight_smile:

On Mac OS X a long is 64 bits, so I think OneSadCookie is saying that their GL implementation is using them.
Reference:
http://developer.apple.com/macosx/64bit.html