int, float VS. GLint, GLfloat

I wonder why I should use GLint and GLfloat instead of int and float?
Is there any advantage?
Can someone enlighten me?

Thanks a lot!


Evil-Dog
Sleep is a waste of time

Originally posted by Evil-Dog:
I wonder why I should use GLint and GLfloat instead of int and float?

On the off chance int and float aren’t the same thing as GLint and GLfloat. Your compiler might use 16 or 64 bit ints, for example. But no matter what compiler you use, GLint and GLfloat will always be the right size.

Use GLint and GLfloat to avoid surprises down the road.
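
For example, here's a quick sanity check you can compile anywhere (just a sketch; it only prints whatever sizes your compiler and GL headers happen to use, and on Windows you may need windows.h before GL/gl.h):

#include <stdio.h>
#include <GL/gl.h>

int main(void)
{
    /* the C standard only guarantees that int is at least 16 bits,
       while GLint promises at least 32 bits */
    printf("sizeof(int)     = %u\n", (unsigned)sizeof(int));
    printf("sizeof(GLint)   = %u\n", (unsigned)sizeof(GLint));
    printf("sizeof(float)   = %u\n", (unsigned)sizeof(float));
    printf("sizeof(GLfloat) = %u\n", (unsigned)sizeof(GLfloat));
    return 0;
}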

from GL/gl.h
typedef unsigned int GLenum;
typedef unsigned char GLboolean;
typedef unsigned int GLbitfield;
typedef signed char GLbyte;
typedef short GLshort;
typedef int GLint;
typedef int GLsizei;
typedef unsigned char GLubyte;
typedef unsigned short GLushort;
typedef unsigned int GLuint;
typedef float GLfloat;
typedef float GLclampf;
typedef double GLdouble;
typedef double GLclampd;
typedef void GLvoid;

it’s the same, they’re just typedefs

It’s the same on YOUR system TODAY. The point is that on other systems, or perhaps down the road (if you use a 64-bit compiler and an Itanium, for example), the code will still work and you’ll send the correctly sized types to OpenGL.

C does not define the size of things like ints or doubles etc. sizeof(int) can change from system to system.

The GL-specific types do not guarantee any fixed size. They guarantee a minimum size. A GLint must be at least 32 bits. In other words, a GLint can be 64 bits long, but not 16 bits. So even sizeof(GLint) can change from system to system.

So if you have a 64-bit compiler for a 64-bit processor, where an int is 64 bits, a “typedef int GLint” is perfectly valid.
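
If you want the build to break when that assumption is violated, here's a minimal sketch of an old-school compile-time check (the typedef name is made up; it also assumes 8-bit bytes, so 4 bytes == 32 bits):

#include <GL/gl.h>

/* a negative array size is a compile error, so this fails to build
   if GLint is smaller than 32 bits */
typedef char GLint_is_at_least_32_bits[(sizeof(GLint) >= 4) ? 1 : -1];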

Thanks for the replies.

So if I understand correctly, it's better to use GLint or GLfloat, etc., to be sure that it works with OpenGL when you're on a different system?


Evil-Dog
Sleep is a waste of time

Personally, I don't like the whole GLint, GLfloat, etc. idea.
They are rather confusing, and even when you do use them it's better to know their exact size anyway.

I generally use them when I know they'll be directly used by OpenGL commands. Otherwise, if they are also used by non-OpenGL functions, I use C++ types; C++ will make the type conversion when it's needed (on assignment or as parameters in functions).
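
For instance, something like this (a minimal sketch in plain C; draw_point and the coordinates are made-up names, and it assumes a current GL context):

#include <GL/gl.h>

/* the application code passes around plain float;
   the switch to GLfloat only happens at the GL call */
void draw_point(float x, float y)
{
    GLfloat gx = x;
    GLfloat gy = y;
    glBegin(GL_POINTS);
    glVertex2f(gx, gy);   /* glVertex2f takes GLfloat parameters */
    glEnd();
}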

Under Visual C++ 6.0 and 7.0 for x86-32 and IA-64, int/float/GLint/etc. all have the same size.
I think MS won't change the sizes and will rather define new types if needed (e.g. __int64), so their new compilers don't break old apps. I'm not sure what the position of other vendors is on that problem.

IMO, type sizes are more of a problem from C's past.

Also, things get crazier when you use MMX/SSE(2)/3DNow!/etc., because those have hardware-fixed sizes that will absolutely never change.

Changing the type sizes would be disastrous for apps using those: the current C++ type sizes would no longer match the size of those extra registers.

example:

int MMXArray[2];
/* the app thinks it's dealing with two 32-bit integers */

__asm {
movq mm0, MMXArray
/* Game over! The compiler uses a 64-bit int, so only one integer was loaded into the MMX register, not two… the results will be wrong. */
}

>>>int MMXArray[2];
/* the app thinks it's dealing with two 32-bit integers */

__asm {
movq mm0, MMXArray
/* Game over! The compiler uses a 64-bit int, so only one integer was loaded into the MMX register, not two… the results will be wrong. */
}
<<<

Not in my experience. Not with VC++.

Do you know what movq does?

V-man

I was giving an example of code written under VC++ 6.0/7.0 that would then be compiled with a compiler that changes the type size (in that example, using 64-bit ints instead of 32-bit).

To explain why changing type sizes, whether C++ ones or OpenGL ones, isn't such a good idea.

Sorry about that, but what you said was confusing:

“only one integer was loaded into the MMX register”

So you meant “one integer” is a 64-bit int and not a 32-bit int.

As long as one knows how to code in inline assembly, this is not a problem. Go ahead and declare an
int array[32]
and use it as a single 64-bit integer.

V-man

There would be a problem if a binary file contains 300 32-bit integers. Suppose you now have a new compiler on a 64-bit machine with the source code from the original program. When you read in the so-called 300 32-bit integers from the original binary file with the newly compiled code on the new system, you will be 150 integers too short, since the integers are now 64 bits long.
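
A minimal sketch of that failure mode (the file name and the count of 300 are made up; it assumes the file was written as 300 4-byte integers):

#include <stdio.h>

int main(void)
{
    int values[300];                      /* were 4-byte ints on the old system */
    FILE *f = fopen("data.bin", "rb");    /* hypothetical 1200-byte binary file */
    if (f != NULL)
    {
        /* if sizeof(int) is now 8, this asks for 2400 bytes from a 1200-byte file:
           fread returns only 150 items, and the values are scrambled anyway */
        size_t n = fread(values, sizeof(int), 300, f);
        printf("read %u of 300 integers\n", (unsigned)n);
        fclose(f);
    }
    return 0;
}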

Wouldn't you think that would cause a HUGE problem if that program were running a life support system?

Whoever said size does not matter did not think about the nature of PCs.

V-Man

A 64-bit int is not 32 integers. It is only 8 bytes, not 64 bytes.
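
To put numbers on it, a minimal sketch (assuming a typical system where int is 32 bits):

#include <stdio.h>

int main(void)
{
    int pair[2];   /* two 32-bit ints: exactly the 64 bits (8 bytes) that movq moves */
    printf("sizeof(int)  = %u bytes\n", (unsigned)sizeof(int));    /* 4 here    */
    printf("sizeof(pair) = %u bytes\n", (unsigned)sizeof(pair));   /* 8, not 64 */
    return 0;
}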