GLboolean

Why is GLboolean an unsigned char?

I thought booleans were supposed to be faster, since logically they take less memory. Anyway, I talked to a friend and he said the issue with them on x86 processors could be due to how the FPU (the processor unit that accelerates integer and floating-point calculations) works. But are there any official answers on this?

Hou la la.
Indeed, it is possible to store a boolean in a single bit.

However, individual bits are not directly addressable on any past or present computer system I am aware of. Memory is addressed in bytes (the size of an unsigned char).
So working with bits would actually be slower, even though it takes less memory: every access needs extra shift and mask operations on top of the load.
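For illustration, here is roughly what a program has to do for a bit-packed flag compared to a byte-sized one (a plain C sketch; the function names are made up for the example):

```c
#include <stddef.h>
#include <stdint.h>

/* One flag per byte: a single indexed load, directly addressable. */
static int get_flag_byte(const uint8_t *flags, size_t i)
{
    return flags[i];
}

/* One flag per bit: one eighth of the memory, but every access needs
 * an extra shift and mask on top of the load. */
static int get_flag_bit(const uint8_t *bits, size_t i)
{
    return (bits[i / 8] >> (i % 8)) & 1;
}
```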

Nothing to do with x86 in particular.
Absolutely nothing to do with fpu.

OpenGL does not need to store massive numbers of booleans.
When that is the case, yes, they are packed tightly to take less space. Search for the phrase “stored as a bit vector” here:
http://www.fabiensanglard.net/quakeSource/quakeSourceRendition.php
In that particular case, RLE compression was even needed to reduce the dataset from several megabytes down to a few dozen kilobytes (at a time when having lots of RAM meant 16 MB, not 8 GB like today).
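To give a rough idea of the kind of run-length encoding involved (a simplified zero-run scheme for illustration, not Quake’s actual on-disk format):

```c
#include <stddef.h>
#include <stdint.h>

/* Copy non-zero bytes of a bit vector as-is, and replace each run of
 * zero bytes with a 0 marker followed by the run length. Visibility
 * data is mostly zeros, so this shrinks it dramatically. Returns the
 * number of bytes written to 'out' (assumed large enough). */
static size_t rle_compress(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t w = 0;
    for (size_t i = 0; i < n; ) {
        if (in[i] != 0) {
            out[w++] = in[i++];
        } else {
            uint8_t run = 0;
            while (i < n && in[i] == 0 && run < 255) {
                run++;
                i++;
            }
            out[w++] = 0;    /* marker: a run of zeros follows */
            out[w++] = run;  /* run length, 1..255 */
        }
    }
    return w;
}
```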

Memory saving does not always equate to more speed; that’s just part of the “bloatware” myth. Consider how you would multiply a 24-bit number by 17. Then consider how you would multiply a 32-bit number by 17. Now consider how you would multiply a million such numbers by 17, and which would be faster. Then consider a database that only holds 4 KB of data in memory at any one time compared to one that holds 512 MB. Which is faster?
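To make the 24-bit example concrete, here is a sketch (the packing convention is invented for the example): the packed version saves a byte per value, but each access needs several byte loads and shifts before the multiply can even start, while the 32-bit version is a single aligned load.

```c
#include <stddef.h>
#include <stdint.h>

/* 32-bit array: one aligned load per element, then the multiply. */
static uint32_t mul17_32(const uint32_t *a, size_t i)
{
    return a[i] * 17u;
}

/* 24-bit values packed into 3 bytes each (low byte first): three byte
 * loads plus shifts just to reassemble the value before multiplying. */
static uint32_t mul17_24(const uint8_t *packed, size_t i)
{
    const uint8_t *p = packed + i * 3;
    uint32_t v = (uint32_t)p[0]
               | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16);
    return v * 17u;
}
```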

You see this all over the place: memory allocations rounded up to cache lines, vertex structures padded to a certain size, memory allocations made larger than they need to be to provide emergency headroom.
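A sketch of the vertex-padding case (the 32-byte target and the field layout are just a common choice picked for the example, not a rule):

```c
#include <stdint.h>

/* 12 + 8 + 4 = 24 bytes of real data, padded up to 32 so that two
 * vertices fill a 64-byte cache line exactly and the array stride is
 * a power of two (indexing becomes a shift instead of a multiply). */
typedef struct Vertex {
    float    position[3];
    float    texcoord[2];
    uint32_t color;
    uint8_t  pad[8];
} Vertex;

_Static_assert(sizeof(Vertex) == 32, "Vertex is expected to stay 32 bytes");
```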

The moral of the story is: always consider what the trade-offs might be when it comes to memory savings.

Zbuffer, can you recommend a valid academic reference for why OpenGL uses unsigned chars? (That is, if you know of any official OpenGL papers.)

I don’t think you need an academic reference. GLboolean is just a typedef, the same as any other OpenGL type. Because OpenGL is intended to be portable across multiple programming languages and platforms, it needs to use types that are actually available in those languages and on those platforms.

Is there a native C data type that treats booleans as bit vectors? C didn’t even have a boolean data type before C99, and even C99’s _Bool is at least a byte. A native Java data type? A native Python data type?

So OpenGL needs to use one of the native data types, and it just happens to be unsigned char on most platforms. Note that the representation of GLboolean as unsigned char is not necessarily mandated by OpenGL itself: the OpenGL spec only specifies a minimum number of bits for each data type and clearly states that OpenGL types are not C types. (Working from the 2.1 spec.)
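For reference, the relevant lines in a typical desktop gl.h look roughly like this (the exact definitions are left to the platform’s headers, which is exactly the point):

```c
/* Typical desktop definitions; the spec only requires that GLboolean
 * can hold at least 1 bit, not that it be an unsigned char. */
typedef unsigned char  GLboolean;
typedef unsigned char  GLubyte;
typedef int            GLint;
typedef unsigned int   GLuint;
typedef float          GLfloat;
```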

So if a hypothetical platform that supported OpenGL had no data types smaller than 128 bits, GLboolean would be typedef’ed as a 128-bit type on that platform. If another hypothetical platform actually did support 1-bit types, GLboolean could be a 1-bit type on that platform.

That’s why you frequently see recommendations to use the GL* types rather than C (or whatever) types in OpenGL code.
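For example, when querying state, declare the variable with the GL type and let the header decide what it actually is underneath (glIsEnabled and GL_DEPTH_TEST are standard; the wrapper function is just for the example):

```c
#include <GL/gl.h>

/* GLboolean, not bool or int: the platform's gl.h decides the real
 * underlying type. Compare against GL_TRUE/GL_FALSE. */
static int depth_test_enabled(void)
{
    GLboolean enabled = glIsEnabled(GL_DEPTH_TEST);
    return enabled == GL_TRUE;
}
```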