Difference between GLfloat and float?

What’s the difference between GLfloat and float?

I also have a problem understanding the following line of code:

GLubyte yellow[] = {255, 255, 0, 255};

What is this line of code for, and why is yellow an array?

typedef float GLfloat;
typedef something GLsomething;
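
For illustration, the typedefs in a typical desktop gl.h look roughly like this (a sketch only; the actual underlying types are the platform's choice):

/* Illustrative only -- your platform's gl.h makes the actual choices. */
typedef unsigned char GLubyte;  /* at least 8 bits  */
typedef int           GLint;    /* at least 32 bits */
typedef float         GLfloat;  /* at least 32 bits */
typedef double        GLdouble; /* at least 64 bits */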

yellow = rgb(255, 255, 0)

The trailing 255 means "full color, non-translucent" (fully opaque).

It is 32-bit color. It is an array because each channel is one byte (8 bits), so you need 4 bytes to store the whole color:

red   green   blue   alpha
255   255     0      255
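
In code, those channels are just the elements of the array:

GLubyte yellow[] = {255, 255, 0, 255};
// yellow[0] = red, yellow[1] = green, yellow[2] = blue, yellow[3] = alpha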

GLfloat is for cross-platform portability (to prevent conflicts between platforms).
On Windows, Unix, etc., they are exactly the same.

Because of differences between systems, a float could be 16, 32, or 64 bits.
So when gl.h defines GLfloat, it defines it to a set size that does not change per system. This helps keep OpenGL cross-platform compliant.
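
If you want to see what your own system uses, a quick sizeof check works (a minimal sketch, assuming a platform where GL/gl.h is available):

#include <stdio.h>
#include <GL/gl.h>

int main(void)
{
    // On typical desktop systems GLfloat is a typedef for float,
    // so both of these usually print 4.
    printf("sizeof(float)   = %u\n", (unsigned)sizeof(float));
    printf("sizeof(GLfloat) = %u\n", (unsigned)sizeof(GLfloat));
    return 0;
}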

OpenGL uses RGB values to define a color (Red, Green, Blue). You can also add a transparency (alpha) value to a color, which is the fourth value.

GLubyte yellow[] = {255, 255, 0, 255}; // The RGBA values for the color yellow, with no transparency.

GLubyte white[] = {255, 255, 255, 255};

GLubyte red[] = {255, 0, 0, 255};

The reason we do this is to cut down on code and to avoid having to remember strings of numbers.

Both of the following do the same thing: set the current color to draw with.

glColor4ub(255, 255, 0, 255);
or
glColor4ubv(yellow);

It's just easier to remember yellow than a string of numbers, and it's less typing.
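
For instance, here is a sketch of the named color in use (it assumes a current GL context has been set up elsewhere; the triangle is just filler geometry):

#include <GL/gl.h>

GLubyte yellow[] = {255, 255, 0, 255};

void drawSomething(void)  // assumes a current GL context
{
    glColor4ubv(yellow);  // same effect as glColor4ub(255, 255, 0, 255)
    glBegin(GL_TRIANGLES);
        glVertex2f( 0.0f,  1.0f);
        glVertex2f(-1.0f, -1.0f);
        glVertex2f( 1.0f, -1.0f);
    glEnd();
}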

Originally posted by sam02:

What’s the difference between GLfloat and float?

I also have a problem understanding the following line of code:

GLubyte yellow[] = {255, 255, 0, 255};

What is this line of code for, and why is yellow an array?

I would add that using GLfloat (or GLwhatever) can make your code easier to read, because the GL prefix tells you how the variable is used… a GL variable for a GL usage…

So when gl.h defines GLfloat, it defines it a set size that does not change per system.

As with most, if not all, features in OpenGL, the spec defines a lower limit, but not an upper one. The same goes for the GL data types. Table 2.2 in the OpenGL 1.3 specification states the minimum size of GLfloat as 32 bits, but also says that an implementation may use more bits.

So GL data types can make portability easier, yes. But fully portable? Don’t be so sure about that.
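
If your code assumes GLfloat is exactly 32 bits, you can at least make the build fail when it doesn't meet that minimum. A minimal compile-time sketch, assuming gl.h is available (the typedef name is made up):

#include <limits.h>
#include <GL/gl.h>

// The array size is negative -- a compile error -- if GLfloat is
// narrower than 32 bits; otherwise this typedef compiles harmlessly.
typedef char glfloat_meets_spec[(sizeof(GLfloat) * CHAR_BIT >= 32) ? 1 : -1];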

You can avoid the byte size changing from platform to platform by simply specifying it manually.

float a:4;
the :4 represents four bytes…

Originally posted by mancha:

You can avoid the byte size changing from platform to platform by simply specifying it manually.

float a:4;
the :4 represents four bytes…

Huh? What language are you working with?

Test.cpp:

void main()
{
    float a:4;
}

Cygwin:
$ g++ -o test test.cpp
test.cpp: In function `int main(...)':
test.cpp:3: parse error before `:'

VC++:
Compiling…
test.cpp
f:\cygwin\home\administrator\test.cpp(3) : error C2601: ‘a’ : local function definitions are illegal
f:\cygwin\home\administrator\test.cpp(3) : error C2063: ‘a’ : not a function
f:\cygwin\home\administrator\test.cpp(3) : error C2969: syntax error : ‘;’ : expected member function definition to end with ‘}’
Error executing cl.exe.

test.exe - 3 error(s), 0 warning(s)

That’s legal syntax for bit fields, but AFAIK you can only use ‘unsigned’ or ‘int’ types (not float) with them, and the number specifies the number of BITS, not the number of BYTES.

-Mezz
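
To make that concrete, a minimal sketch (the struct and field names are made up; the widths count bits, and the fields have to live inside a struct, class, or union):

#include <stdio.h>

struct PixelFlags
{
    unsigned int visible : 1;  // 1 bit
    unsigned int layer   : 4;  // 4 bits, holds 0..15
    unsigned int mode    : 3;  // 3 bits, holds 0..7
};

int main(void)
{
    struct PixelFlags f;
    f.visible = 1;
    f.layer   = 9;
    f.mode    = 5;

    // All eight bits usually pack into a single int,
    // so this typically prints 4.
    printf("sizeof(struct PixelFlags) = %u\n",
           (unsigned)sizeof(struct PixelFlags));
    return 0;
}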

Originally posted by Mezz:

That’s legal syntax for bit fields, but AFAIK you can only use ‘unsigned’ or ‘int’ types (not float) with them, and the number specifies the number of BITS, not the number of BYTES.

-Mezz

Ummm… yeah… I changed float a:4; to unsigned int a:32;. Same errors. Not sure where you guys are reading this stuff, but I’ve never heard of that in any ANSI standard material I’ve ever seen. Maybe Delphi allows that, but for C/C++ most compilers usually have types like __int32, etc. That’s not even ANSI standard, though.

Originally posted by Mezz:

That’s legal syntax for bit fields, but AFAIK you can only use ‘unsigned’ or ‘int’ types (not float) with them, and the number specifies the number of BITS, not the number of BYTES.

-Mezz

That is right, you do it with ints. You also do the declaration in structs, classes…
I don’t remember well, because I have not used it in a LOOONG time… But it goes by bytes, not bits; I might be wrong though… So you would declare it as int a:4; in your struct or class…

There are functions to manipulate your EPSILON, but I forgot.

[This message has been edited by mancha (edited 07-25-2002).]

Ahhh… yes… I seem to vaguely recall the bitmask thingy. It’s not really meant for making your own custom-size types, though. A quick test shows that different compilers handle it differently too.

// Make sure the struct is tightly packed
#pragma pack(1)
struct Blah
{
    int a:8;
};

sizeof(Blah); // 4 with VC++, 1 with Cygwin.

If I recall, that is mainly meant for something like this…

#pragma pack(1)
struct Blah
{
    int highWord:16;
    int lowWord:16;
};

sizeof(Blah); // VC++ & Cygwin both == 4

A quick search on Google revealed that the ANSI standard doesn’t actually specify how the memory has to be used, so there is the possibility of padding, as seen in the first example of struct Blah.

Anyway, I thought that was interesting. Always fun to learn new things about my favorite language just as I think I know it all…

I’ll stop rambling now.

Edit:
A couple other quick notes.

  1. If you try to do

struct Blah
{
    int a:33;
};

Both Cygwin and VC++ give an error saying that the type is too small.

  2. Just to get this back to the original topic. It was my understanding that the GL header would define GLfloat to be a 32-bit float based on whatever OS that header was meant for. So say, hypothetically, that you are on some platform where a float is 64 bits, but the compiler also has a __float32 type that can be used. In that case the GL.h for that particular platform would contain:

typedef __float32 GLfloat;

Ok… NOW I’ll stop rambling.

[This message has been edited by Deiussum (edited 07-25-2002).]

Yeah, I did look in my C book before I posted that; I forgot they had to be in a struct, though.

Of course, I don’t think I’ve ever actually had cause to use them anywhere ever, but I did see them in “Game Programming Gems” once…

-Mezz

I have to add that I’ve written modules using `float’ and then made the assumption that those variables are GLfloat when they hit the rendering pipeline. This is BAD.

See my post on templatization; it makes things slightly easier.

It was my understanding that the GL header would define GLfloat to be a 32-bit float based on whatever OS that header was meant for. So say hypothetically that you are on some platform where a float is 64 bits, but the compiler also has a __float32 type that can be used. In that case the GL.h for that particular platform would be:

typedef __float32 GLfloat;

I wouldn’t be so sure about that. float would be just as valid a way to represent GLfloat as __float32. Both float and __float32 meet the requirement of the spec: 32 bits or longer.

Hi everybody,

Visual C++ gives 4 (while Cygwin gives 1) because “#pragma pack(1)” means nothing to Visual C++; I don’t know its equivalent…
And it can’t be bigger than 32 because you chose int (int is 32 bits; you can’t exceed its max size)…

struct s1
{
    char ch;
    int n1;
    int n2;
};

#pragma pack(1)
// You can also use #pragma pack(push, 1)
// and then #pragma pack(pop) at the end of the struct.
// The push/pop doesn’t work for GNU compilers, though.
struct s2
{
    char ch;
    int n1;
    int n2;
};
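
To see the difference, a quick sizeof check (a sketch; the exact numbers are compiler-dependent, but with 4-byte ints s1 commonly comes out as 12 because of padding after ch, and s2 as 9 when the pragma is honored):

#include <stdio.h>

struct s1 { char ch; int n1; int n2; };  // default alignment

#pragma pack(1)
struct s2 { char ch; int n1; int n2; };  // packed to 1-byte boundaries
#pragma pack()

int main(void)
{
    printf("sizeof(s1) = %u\n", (unsigned)sizeof(struct s1));
    printf("sizeof(s2) = %u\n", (unsigned)sizeof(struct s2));
    return 0;
}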

Because sizeof returned 4 for VC++, that shows that VC++ was using the full size of the int, not just a single bit. Also, I know what the reason was for the 32-bit error.

I was merely pointing out that using the bitfield thing doesn’t allow you to set fields to whatever size you want, as was previously implied.

[This message has been edited by Deiussum (edited 07-31-2002).]

Packed bit fields only work on unsigned data types. Generally they are quite slow compared to using bitmasks with the & and | operators (see the bitmask sketch after the code below). They will always use up memory to the next byte boundary, i.e., specifying 9 bits would give you a 2-byte data element. I’ve only ever found a use for them when reading/writing compressed binary files. You can do some fairly pointless things with them and unions, though:

struct bit
{
    bit()
    {
        b = 0x01;  // start with only the lowest bit (b0) set
    }
    union  // the byte b overlays the eight 1-bit fields below
    {
        struct  // anonymous struct: a widespread compiler extension
        {
            unsigned char b0:1;
            unsigned char b1:1;
            unsigned char b2:1;
            unsigned char b3:1;
            unsigned char b4:1;
            unsigned char b5:1;
            unsigned char b6:1;
            unsigned char b7:1;
        };
        unsigned char b;
    };
};

Fairly pointless, I think you’ll agree.
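
For reference, here is the bitmask style mentioned above, using & and | on a plain byte (the flag names are made up for the example):

#define FLAG_VISIBLE 0x01  // hypothetical flags, illustration only
#define FLAG_DIRTY   0x02
#define FLAG_LOCKED  0x04

int main(void)
{
    unsigned char flags = 0;

    flags |= FLAG_VISIBLE | FLAG_LOCKED;  // set bits
    flags &= ~FLAG_LOCKED;                // clear a bit

    return (flags & FLAG_VISIBLE) ? 0 : 1;  // test a bit
}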