Extension library comparisons

Hi,

is anyone aware of a decent review of the various OpenGL extension libraries around (e.g. GLEW, GLee, eXtparse, gluX, OglExt, GLLoader)? The point is, I have very specific requirements for an extension library that seem not to be fulfilled by any of the above. Most of the feature requests arise from the need to use the extension library with existing OpenGL code, not newly written code:

  • It should be able to parse the extension definition .TXT files on SGI’s, NVIDIA’s and ATI’s sites (not the header files on SGI’s site) and generate C code from them.

  • There should be an easy to use option to mask unused extensions, so only the code to initialize used extensions gets compiled in.

  • If an old OpenGL core version is reported, and an extension is present that was promoted to the core in a later version, both the extension-mangled name and the core-promoted name for the extension function should be available. This avoids rewriting large amounts of (badly) written code that relies on a particular extension being available in the core.

  • It should be possible to set a custom callback that is invoked when a NULL / uninitialized extension function pointer is called (a sketch follows below).

If no one is aware of an extension library with these features, I might start a project on SourceForge.
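
To illustrate the last point, here is a minimal sketch, assuming a C library where every entry point initially points at a generated stub; all names here (extSetErrorCallback, glFooEXT) are invented for illustration, not taken from any existing library:

```c
/* Hypothetical sketch: every extension function pointer starts out
   pointing at a generated stub that forwards to a user-settable error
   callback instead of crashing on a NULL call. */
#include <stdio.h>

typedef void (*ExtErrorCallback)(const char *name);

static void defaultErrorCallback(const char *name)
{
    fprintf(stderr, "call to uninitialized extension function %s\n", name);
}

static ExtErrorCallback errorCallback = defaultErrorCallback;

void extSetErrorCallback(ExtErrorCallback cb)
{
    errorCallback = cb ? cb : defaultErrorCallback;
}

/* The generator would emit one stub per entry point, matching its
   signature, and use it as the pointer's initial value. */
static void glFooEXT_stub(void)
{
    errorCallback("glFooEXT");
}

void (*glFooEXT)(void) = glFooEXT_stub;
```

Calling glFooEXT() before initialization then reports the function name instead of dereferencing a NULL pointer.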

  • It should be able to parse the extension definition .TXT files on SGI’s, NVIDIA’s and ATI’s sites (not the header files on SGI’s site) and generate C code from them
    Why this feature? I think the TXT files are not well formatted for “parsing and C code generating”.
    Header files (glext.h, wglext.h) are easy to parse.
  • There should be an easy to use option to mask unused extensions, so only the code to initialize used extensions gets compiled in.

Nice feature, but I think it’s useless. Do you expect any speedup at startup?

  • If an old OpenGL core version is reported, and an extension is present that was promoted to the core in a later version, both the extension-mangled name and the core-promoted name for the extension function should be available. This avoids rewriting large amounts of (badly) written code that rely on a particular extension being available in the core.
    I didn’t understand this. Even if the driver reports a newer OpenGL version, on Windows you are still stuck with GL 1.1, i.e. you must map all entry points yourself. What’s more, some drivers report GL2 even if the underlying hardware is not capable of GL2 features.
  • A custom callback when calling NULL / uninitialized extension function pointers should be setable.
    This is a good suggestion… it will help with hunting down bugs.

I have my own extension loading library. I wrote a small utility that parses glext.h and wglext.h and generates C code. The generated code tries to map all known entry points and sets a flag for each available extension from a list of known extensions. Checking for the presence of an extension and only then mapping its entry points is a bad approach, because some function entry points can remain uninitialised.
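
A minimal sketch of what such generated code might look like on Windows, assuming wglGetProcAddress and a current GL context; the names are invented for illustration, not the actual generated ones:

```c
#include <windows.h>
#include <GL/gl.h>
#include <string.h>

typedef void (APIENTRY *PFNGLTEXIMAGE3DEXTPROC)(GLenum, GLint, GLenum,
    GLsizei, GLsizei, GLsizei, GLint, GLenum, GLenum, const GLvoid *);

PFNGLTEXIMAGE3DEXTPROC glTexImage3DEXT = NULL;
int has_GL_EXT_texture3D = 0;

void extInit(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    /* Map every known entry point unconditionally; pointers the
       driver does not export simply stay NULL. */
    glTexImage3DEXT = (PFNGLTEXIMAGE3DEXTPROC)
        wglGetProcAddress("glTexImage3DEXT");

    /* Set a flag per extension found in the extension string. A real
       implementation should match whole tokens, not substrings. */
    has_GL_EXT_texture3D =
        exts != NULL && strstr(exts, "GL_EXT_texture3D") != NULL;
}
```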

yooyo

Why this feature? I think the TXT files are not well formatted for “parsing and C code generating”.
Header files (glext.h, wglext.h) are easy to parse.
These header files are not as up-to-date as extension specifications. How long did it take between the first implementations of FBOs and the first extension header (official ones from an IHV, not something someone put together on the Net)?

Granted, these days the rate of extension creation has slowed to the point where parsing the raw spec is no longer necessary to gain access to an extension in drivers (unless you absolutely need it right now).

If an old OpenGL core version is reported, and an extension is present that was promoted to the core in a later version, both the extension-mangled name and the core-promoted name for the extension function should be available. This avoids rewriting large amounts of (badly) written code that relies on a particular extension being available in the core.
This is a bad idea. Many extensions change (in the case of glslang, drastically) when promoted into the core or to ARB status. It’s important that the pre-core and the core versions are kept separate. You wouldn’t want to think you’re calling the core version expecting one kind of behavior when actually you’re calling the pre-core version and getting different behavior.

Why this feature? I think the TXT files are not well formatted for “parsing and C code generating”.
See Korval’s explanation. Of course the TXT files are not as easy to parse as the header files, but at least they are sort of standardized in their structure.

Nice feature, but I think it’s useless. Do you expect any speedup at startup?
It’s not about speed, but about code size. I sometimes want to develop OpenGL applications that are e.g. less than 64 kb in size :)

I didn’t understand this. Even if the driver reports a newer OpenGL version, on Windows you are still stuck with GL 1.1.
Imagine the reported OpenGL version is 1.1, and the 3D texturing extension is present. 3D texturing was not promoted to the core until OpenGL 1.2. This means you will have entry points named glTexImage3DEXT(), not glTexImage3D(). But I want to have glTexImage3D() even if 3D texturing is only available as an extension.

However, I see Korval’s point as to why this might be a bad idea for some extensions.
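
A sketch of the aliasing I have in mind, assuming Windows and a simplified version check; glTexImage3D here is a plain function pointer declared by the loader, not the prototype from a GL 1.2 header:

```c
#include <windows.h>
#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

typedef void (APIENTRY *PFNGLTEXIMAGE3DPROC)(GLenum, GLint, GLint,
    GLsizei, GLsizei, GLsizei, GLint, GLenum, GLenum, const GLvoid *);

PFNGLTEXIMAGE3DPROC glTexImage3D = NULL;

void extInitTexture3D(void)
{
    const char *version = (const char *)glGetString(GL_VERSION);
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);

    /* Simplified version parse: "1.2.xxxx vendor info" -> 1.2 */
    if (version != NULL && atof(version) >= 1.2)
        glTexImage3D = (PFNGLTEXIMAGE3DPROC)
            wglGetProcAddress("glTexImage3D");

    /* Fall back to the extension-mangled entry point, so old code can
       always call glTexImage3D() regardless of the reported version. */
    if (glTexImage3D == NULL &&
        exts != NULL && strstr(exts, "GL_EXT_texture3D") != NULL)
        glTexImage3D = (PFNGLTEXIMAGE3DPROC)
            wglGetProcAddress("glTexImage3DEXT");
}
```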

Originally posted by eyebex:
Imagine the reported OpenGL version is 1.1, and the 3D texturing extension is present. 3D texturing was not promoted to the core until OpenGL 1.2. This means you will have entry points named glTexImage3DEXT(), not glTexImage3D(). But I want to have glTexImage3D() even if 3D texturing is only available as an extension.
Perhaps I misunderstand the feature promotion mechanism in OpenGL, but are there any guarantees that glTexImage3DEXT will behave in the same way as glTexImage3D? Or even that the API remains consistent throughout promotion?

This is an oddity I’ve been curious about for a while, but I haven’t read anything solid on the matter.

There are no general guarantees. In fact it’s not so uncommon that a core feature differs from the original extension.

But some extensions, one example being 3D textures, were promoted without change. This can’t be detected automatically by an extension loader, though. For example, there is no API difference between the EXT and the ARB rectangle extensions, but there’s a semantic difference. Someone would have to manually provide a table that identifies identical extensions and core features (see the sketch below).
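
A sketch of the kind of hand-maintained table this would need; the entries are illustrative examples, not an authoritative list:

```c
/* Maps an extension entry point to its core name, for extensions that
   were promoted without any API or semantic change. */
struct ExtAlias {
    const char *extName;   /* extension function name       */
    const char *coreName;  /* equivalent core function name */
    int major, minor;      /* core version that absorbed it */
};

static const struct ExtAlias aliasTable[] = {
    /* EXT_texture3D went into OpenGL 1.2 unchanged. */
    { "glTexImage3DEXT", "glTexImage3D", 1, 2 },
    /* Extensions that changed on promotion are simply not listed. */
};
```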

Originally posted by Overmind:
For example there is no API difference between the EXT and the ARB rectangle extensions, but there’s a semantic difference.
What would that be?

ignorant

Personally, I find that adding an extension to my library is really quick and really easy to do by hand, as and when I find the need to use that extension. It’s approximately 5 minutes of copy/pasting (except in the case of the vertex program/shader language extensions, but they were very unusual in their size).
But if you want to play around with file parsing and what-have-you, that’s your decision - personally, I prefer writing graphics code.

For example there is no API difference between the EXT and the ARB rectangle extensions, but there’s a semantic difference.
What would that be?
The EXT extension requires texture coordinates to be specified in non-normalized [0…w]x[0…h] range, whereas the ARB extension uses normalized [0…1]x[0…1] coordinates like “normal” power-of-two (POT) textures.

EDIT: The above is not correct; all EXT/NV/ARB texture extensions with “rectangle” in their name use non-normalized texture coordinates. I guess Overmind was referring to the semantic difference between EXT_texture_rectangle and ARB_texture_non_power_of_two.

EXT_texture_rectangle is the same (i.e. semantically equal) as NV_texture_rectangle, by the way. Does anyone know why the former is not listed in SGI’s OpenGL Extension Registry, whereas the latter is?

Moreover, all three extensions above (NV_texture_rectangle, EXT_texture_rectangle and ARB_texture_rectangle) introduce new texture targets, i.e. a rectangle texture is not just a 2D texture with NPOT dimensions, but a whole different texture type with different tokens.
Then there is also ARB_texture_non_power_of_two, which on the other hand extends the definition of the existing texture types to allow NPOT dimensions, too, and does not introduce any new types / tokens.

So in total, we have four extensions dealing with NPOT textures.
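
To make the coordinate difference concrete, here is an illustrative sketch (immediate mode; texture setup is omitted, and texture objects 1 and 2 are assumed to be created elsewhere):

```c
#include <GL/gl.h>
#include <GL/glext.h>

void drawNpotVariants(void)
{
    /* The rectangle extensions add a new target with unnormalized
       coordinates in [0..w] x [0..h] (here for a 640x480 texture). */
    glEnable(GL_TEXTURE_RECTANGLE_ARB);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 1);
    glBegin(GL_QUADS);
        glTexCoord2f(  0.0f,   0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(640.0f,   0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(640.0f, 480.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(  0.0f, 480.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_RECTANGLE_ARB);

    /* ARB_texture_non_power_of_two keeps the existing GL_TEXTURE_2D
       target and the usual normalized [0..1] x [0..1] coordinates;
       the bound texture may simply have NPOT dimensions. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 2);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}
```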

>>The EXT extension requires texture coordinates to be specified in non-normalized [0…w]x[0…h] range, whereas the ARB extension uses normalized [0…1]x[0…1] coordinates like “normal” power-of-two (POT) textures.<<

No way!
The texture targets are the same enum. All texture rectangle extensions use [0…w]x[0…h] unnormalized texture coordinates.
The ARB_texture_non_power_of_two extension, and likewise the OpenGL 2.0 core NPOT textures, use normalized coordinates, since they extend the GL_TEXTURE_2D/3D targets.

Sorry, Relic is right, I’ve edited my post.