Scared of ARB's glext.h

I was just looking through the OpenGL updates on OS X Lion, when I
found something that now has me scared to use the glext.h found at
http://www.opengl.org/registry/

So, here’s the bug. Lion’s OpenGL.framework has a glext.h which has
the following definition:

typedef void *GLhandleARB;

But the glext.h from the OpenGL registry has the following instead:

typedef unsigned int GLhandleARB;

Now, the trouble is that when building for x86_64 on Lion we have
sizeof(void*)==8, but sizeof(unsigned int)==4. So what do you trust?
Lion’s header? Or the OpenGL registry’s header? Well, of course you
trust the system headers, because apparently they claim to know that
the ABI on 64-bit Lion has a 64-bit GLhandleARB type.
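
Here’s a quick sanity check I put together (just a sketch; the include paths and which glext.h your build actually picks up depend on your setup) that prints which definition you ended up with:

#include <stdio.h>
#ifdef __APPLE__
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>    /* Lion's header: typedef void *GLhandleARB; */
#else
#include <GL/gl.h>
#include <GL/glext.h>        /* e.g. the registry header: typedef unsigned int GLhandleARB; */
#endif

int main(void)
{
    /* On x86_64 Lion the system header makes the first line print 8;
       the registry header of the same era prints 4. */
    printf("sizeof(GLhandleARB)  = %u\n", (unsigned)sizeof(GLhandleARB));
    printf("sizeof(void *)       = %u\n", (unsigned)sizeof(void *));
    printf("sizeof(unsigned int) = %u\n", (unsigned)sizeof(unsigned int));
    return 0;
}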

Now, this raises a few issues in my mind about various platforms:

(1) If you MUST use Apple’s glext.h, but Apple’s glext.h doesn’t provide access to anything later than OpenGL 2.1, then how do you get at 3.0+ features on newer cards?

(2) Is it unsafe to use the OpenGL registry’s glext.h on Linux? Or must you use the system’s glext.h there as well? In that case, question #1 applies here as well.

(3) How the heck do you handle things on Windows, where there is never a glext.h on the system? You clearly can’t use a driver vendor’s glext.h, because different vendors may disagree on the sizes of various types. (Or is that not true?) What’s the deal here?

I didn’t know that int is 4 bytes on 64-bit Mac OS. This looks a bit weird, as if they had stayed with their old big-endian CPUs :slight_smile:

Just use the OpenGL types. Don’t worry more about this, it will “always” remain safe as long as you don’t request more than the type size can handle.

Use the headers for your platform!

That doesn’t answer even one of my 3 questions.

So what do you trust?

I trust the people not using GLhandleARB. If you’re still using the extension form of GLSL (the only code that uses GLhandleARB), then your code deserves to be broken. Or to put it another way, there’s no reason you should ever write the words “GLhandleARB” in your program. So there’s no problem.
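
To be concrete, the core GLSL object API is entirely GLuint-based. A minimal sketch (assuming a current GL 2.0+ context and that the entry points are declared or loaded by whatever means your platform uses) never touches GLhandleARB:

#ifdef __APPLE__
#include <OpenGL/gl.h>
#else
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#endif

static GLuint build_trivial_program(void)
{
    const GLchar *src = "void main() { gl_Position = vec4(0.0); }";

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);   /* GLuint, not GLhandleARB */
    glShaderSource(vs, 1, &src, NULL);
    glCompileShader(vs);

    GLuint prog = glCreateProgram();                /* GLuint again */
    glAttachShader(prog, vs);
    glLinkProgram(prog);
    glDeleteShader(vs);                             /* the program keeps its own reference */
    return prog;
}

The legacy entry points (glCreateShaderObjectARB, glAttachObjectARB, and friends) are the only ones that traffic in GLhandleARB; avoid them and the size question never comes up.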

If you MUST use Apple’s glext.h, but Apple’s glext.h doesn’t provide access to anything later than OpenGL 2.1, then how do you get at 3.0+ features on newer cards?

… huh? Doesn’t Apple have documentation on how to access GL 3.2?

Is it unsafe to use the OpenGL registry’s glext.h on Linux? Or must you use the system’s glext.h there as well? In that case, question #1 applies here as well.

I haven’t used any glext.h in ages. GLEW provides its own, as does any other extension loader.
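
Roughly, using GLEW looks like this (a sketch; error reporting trimmed, and glewInit() has to run after a context is current):

#include <GL/glew.h>     /* replaces the gl.h/glext.h includes */

int init_gl(void)
{
    GLenum err = glewInit();     /* resolves every entry point the driver exposes */
    if (err != GLEW_OK)
        return 0;
    if (!GLEW_VERSION_3_0)       /* then you just ask what you actually got */
        return 0;
    return 1;
}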

How the heck do you handle things on Windows, where there is never a glext.h on the system? You clearly can’t use a driver vendor’s glext.h, because different vendors may disagree on the sizes of various types. (Or is that not true?) What’s the deal here?

See above.

The different types are defined by the .spec files, which are assembled into headers like glext.h by various means. Those headers have #defines and typedefs for different platforms. Everyone implementing OpenGL on that platform agrees to follow the types defined in the .spec files.
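
In practice that ends up as platform-conditional typedefs in the header itself; for the GLhandleARB case it would look something like this (a sketch of the idea, not a quote from any particular header):

#ifdef __APPLE__
typedef void *GLhandleARB;          /* Apple's ABI: a pointer-sized handle */
#else
typedef unsigned int GLhandleARB;   /* everyone else: a 32-bit handle      */
#endif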

I didn’t know that int is 4 bytes on Mac OS 64 bits.

It’s 4 bytes under Visual Studio too, even when compiling for 64-bit targets. It turns out that a lot of code was written assuming that sizeof(int) == 4.
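
If you want to see your compiler’s data model directly, a trivial check (nothing GL-specific) does it:

#include <stdio.h>

int main(void)
{
    /* LP64 (Linux/OS X x86_64) and LLP64 (64-bit Windows) both keep int at
       32 bits; only long and/or pointers grow to 64. */
    printf("int:    %u bytes\n", (unsigned)sizeof(int));
    printf("long:   %u bytes\n", (unsigned)sizeof(long));
    printf("void *: %u bytes\n", (unsigned)sizeof(void *));
    return 0;
}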

Ah, well; I wasn’t thinking of using GLhandleARB. I’m still new enough to OpenGL to hardly know what it is (I still have a very pre-shader mindset). I was just using that GLhandleARB example to illustrate the ABI glitch I noticed, assuming that such glitches would be present in other cases as well.

I take all your other points, but I have an issue with one of them. Your suggestion, if I read it correctly, is to “just use GLEW, or some other loader.” But I don’t want to just blindly do that. If I myself can’t in principle figure out how to write a loader, then I’m not going to trust someone else to do it properly for me.

So I’m really intent on figuring out how all this works, and so my question about how things go with glext.h on Windows remains unanswered.

I suppose that on OS X one can say “just use the system headers,” because at least they always come from the same vendor, Apple. But this still leaves the state of things on Linux fuzzy and confusing. And even more so on Windows.

Still in need of real answers,
-Patrick

But I don’t want to just blindly do that. If I myself can’t in principle figure out how to write a loader, then I’m not going to trust someone else to do it properly for me.

That’s… not a reasonable position to take.

You trust your CPU vendor to provide you with a functioning CPU, even though you aren’t a semiconductor engineer, yes? You’re likely not an OS engineer, yet the fact that you’re able to type a response says that you’re relying on an OS. Yet you draw the line at this?

The details are irrelevant to anyone who is not actually writing an OpenGL loading library. Leave special things up to the specialists, and be grateful that you even have OpenGL loading libraries today.

Still in need of real answers,

Disregarding an answer doesn’t make it not “real”.

As I said, “The different types are defined by the .spec files, which are assembled into headers like glext.h by various means. Those headers have #defines and typedefs for different platforms. Everyone implementing OpenGL on that platform agrees to follow the types defined in the .spec files.”

That is where it comes from. If you want the details, look in the .spec files. It doesn’t get much more “real” than that.

Oops. My mistake. I don’t know why I thought 4 bytes is 16 bits… Yes, int is 32 bits on most platforms, even on most 64-bit architectures.

@Alfonse

My mistake, somehow my eyes just didn’t scan that line of yours that said: “Everyone implementing OpenGL on that platform agrees to follow the types defined in the .spec files.”

That is indeed a real answer. It seems that Apple broke from that mold when they went off and defined their types differently in Lion’s gltypes.h, but I suppose that’s forgivable since they control the platform pretty thoroughly anyway.

And I disagree on that other matter, mine is a reasonable position to take. It’s a very reasonable position, and here’s why:

I’m not a semiconductor engineer, but I could in principle learn the details and become one.

I’m not an OS engineer, but I could in principle learn the details and become one.

Yet when I try to dig into the details of OpenGL loaders, the lack of clear documentation, along with comments from people in the OpenGL community, keeps suggesting the idea that “This is black magic, don’t go into this, YOU CAN’T HOPE TO DO IT RIGHT EVEN IN PRINCIPLE; AND WE DID IT RIGHT FOR YOU, BUT WE DON’T CARE TO TELL YOU HOW, EVER.” And that’s what really bothers me, and it’s a perfectly reasonable thing to be bothered by.

I looked into the GLEW and GLee source code a bit, and as far as I can tell from my quick glance they don’t really explain how or why they do what they’re doing; they just sort of do it. I don’t see any design documentation in the source tree, and the code lacks good comments.

If someone (be it a person or a community) is that reluctant to spill the beans on how something is designed and implemented, then I’m going to start to suspect that they perhaps don’t really know what they’re doing to begin with. That’s not to imply that I do know what I’m doing, but it’s a good reason to want to figure it out.

-Patrick

Ask the makers of GLEW or GLee. It is their job to document their code.

As for Windows, I download glext.h from the registry. I assume it works on Linux as well but I haven’t done much on Linux.
I use GLEW. I suppose I no longer need glext.h.
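
Under the hood that combination isn’t mysterious: on Windows a loader (GLEW included) essentially pairs the typedefs from glext.h with wglGetProcAddress once a context is current. A bare-bones sketch of the idea (error handling omitted, only two entry points shown):

#include <windows.h>
#include <GL/gl.h>
#include "glext.h"      /* the registry header: enums plus the PFN... typedefs */

static PFNGLCREATESHADERPROC  pglCreateShader;
static PFNGLCOMPILESHADERPROC pglCompileShader;

static void load_gl_entry_points(void)
{
    /* wglGetProcAddress only works while a GL context is current. */
    pglCreateShader  = (PFNGLCREATESHADERPROC)wglGetProcAddress("glCreateShader");
    pglCompileShader = (PFNGLCOMPILESHADERPROC)wglGetProcAddress("glCompileShader");
}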

As for objects in OpenGL (textures, FBOs, shader objects), they all tend to be unsigned 32-bit integers. Why would you want a 64-bit integer? But the bottom line is that it is Apple’s decision. If they want to define their GLuint as 64-bit or their GLhandleARB as 64-bit, then I suppose someone’s code is going to get messed up if they are using “uint” or “int”.
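
For example, the usual object names are all GLuint on every platform (a sketch, assuming a current context and loaded entry points):

GLuint tex, fbo, shader;
glGenTextures(1, &tex);                        /* texture name: GLuint */
glGenFramebuffers(1, &fbo);                    /* FBO name: GLuint */
shader = glCreateShader(GL_FRAGMENT_SHADER);   /* shader object: GLuint */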