Extension loading mechanism

I’m just curious how to solve this problem better. So far I’ve been parsing the extension string, getting pointer addresses & then checking them against NULL. But as far as I know some extensions in some driver revisions are absent from the ext string, even though the function pointers returned via wglGetProcAddress are valid and fully functional.
I have a few apps that I have to hand over & I’m thinking about a different ext loading routine.
I could just call wglGetProcAddress for all the function pointers & check them against NULL without parsing the EXT string. If the ext pointer != NULL go ahead, else swear or crash with flames… :rolleyes:
And you can always blame the user for using old drivers :very evil:

That is an acceptable approach. If you get a pointer you can call the function; just make sure that when you call it, the context you intend to use the function with is active. It is absolutely critical that you have the valid context active.
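A minimal sketch of that pointer-checking pattern, assuming Windows-style loading through wglGetProcAddress (the helper itself is pure C so it can be exercised without a driver; the entry-point names in the usage comment are just illustrative):

```c
#include <stddef.h>

/* Treat an extension as usable only if every one of its entry points
   resolved; a single NULL means the extension can't be trusted. */
int allEntryPointsPresent(void *ptrs[], int count)
{
    int i;
    for (i = 0; i < count; ++i)
        if (ptrs[i] == NULL)
            return 0;
    return 1;
}

/* Hypothetical usage with the intended GL context current (Windows):
 *
 *     void *p[2];
 *     p[0] = (void *) wglGetProcAddress("glActiveTextureARB");
 *     p[1] = (void *) wglGetProcAddress("glMultiTexCoord2fARB");
 *     int multitexUsable = allEntryPointsPresent(p, 2);
 */
```

Checking every entry point of the extension (not just the first) matters: a driver in transition may export some but not all of them.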

I remember reading that in at least one case NVIDIA won’t publish/advertise extensions even though they are implemented (I took this to mean excluding them from the extension string). The example I have in mind is ARB_texture_env_crossbar, but the reason offered for this seemed trivial: in this particular case, a difference in behavior with bad texture parameters (white vs. undefined results). I may have been wrong in my interpretation of the phrasing used in the note I read, but it seemed that extensions might be omitted from the string, despite being ostensibly supported, if the functionality wasn’t entirely conformant with the spec.

I would be cautious doing this. Nvidia have been known to implement entry points for “work in progress” extensions.

BTW: I think that ARB_texture_env_crossbar issue was resolved when it was integrated into the core. (Although the extension is still not displayed.)

NO NO NO NO!!! If it’s not in the extension string or “implied” by the version string, do not use the functionality. Period. While just relying on wglGetProcAddress returning non-NULL may happen to work on some drivers, it’s not guaranteed to always work. Not only that, it will not port to other platforms. For example, glXGetProcAddress never returns NULL on Linux. So, you could do ‘glXGetProcAddress(“glFooBarStinkyPants”)’ and get a value, but you’d better not call that pointer…

There is a published, supported way to test for extension support. Is it really worth potential pain in the future to be lazy and hack around it now? App developers doing pointless, lazy crap like this makes users think OpenGL is garbage.

Please read a good description of how to properly use the OpenGL extension mechanism, or please consider using one of the many freely available extension loading libraries. If you don’t like any of those, I’m sure you can easily find some more…
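For the “implied by the version string” half of that check, a hedged sketch (the function name is mine, not from any library; in real code you’d pass in (const char *) glGetString(GL_VERSION)):

```c
#include <stdio.h>

/* A feature is usable if it's listed in the extension string *or* implied by
   the core version (e.g. crossbar functionality in GL 1.4+). Pure string
   parsing, so it can be tested without a GL context. */
int coreVersionAtLeast(const char *version, int major, int minor)
{
    int vmaj = 0, vmin = 0;

    if (version == NULL || sscanf(version, "%d.%d", &vmaj, &vmin) != 2)
        return 0;
    return vmaj > major || (vmaj == major && vmin >= minor);
}
```

Note the version string may carry vendor text after the number ("1.4.0 NVIDIA 53.36"), which is why a simple numeric compare of the parsed major/minor is safer than comparing whole strings.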

It just seems to be overkill to do a double check. What’s even more important, your app could skip an extension if something like ext string trimming is on (in the NV control panel, for example). Linux is not in question for the given app, though it’s a good point you mentioned.

When do we get that glIsExtensionSupported() stuff, seems to be easy to implement even on Diamond or S3 Virge???

[PS]
As far as I know (via the OpenGL ext viewer) env_crossbar has been in core since 1.4. And it’s absent from a few of the latest drivers for the FX5900, though I remember that it was in the ext string some time ago. Is it there under X?

The NV_texture_env_combine4 extension provides nearly identical functionality to that provided by the ARB_texture_env_crossbar extension.
Unfortunately, ARB_texture_env_crossbar’s semantics for what happens when a texture environment stage references a disabled texture do not match NVIDIA’s NV_texture_env_combine4 behavior.
Due to the differing semantics, and in order to maintain backward application compatibility and compatibility with the NV_texture_env_combine4 specification, NVIDIA will never advertise the ARB_texture_env_crossbar extension.
The ARB_texture_env_crossbar semantic is:
Texture blending should be disabled on the texture unit that is referencing the invalid or disabled texture.
The NV_texture_env_combine4 semantic is:
If the <n>th texture unit is disabled, the value of each component is 1.
Fortunately, this semantic is not particularly relevant for most applications because applications typically avoid sourcing a disabled, inconsistent, or invalid texture unit.
NVIDIA recommends that if your application sources other texture units using the GL_COMBINE_ARB texture environment mode, you first determine that either ARB_texture_env_crossbar or NV_texture_env_combine4 is supported. Then do not assume a particular behavior when sourcing other texture units with the GL_COMBINE_ARB environment that are disabled or invalid.
OpenGL 1.4 codifies this practice by integrating the ARB_texture_env_crossbar functionality into the core OpenGL standard.
The OpenGL 1.4 standard says: “If the texture environment for a given enabled texture unit references a disabled texture unit, or an invalid or incomplete texture that is bound to another unit, then the results of texture blending are undefined.”

Originally posted by M/\dm/:
When do we get that glIsExtensionSupported() stuff, seems to be easy to implement even on Diamond or S3 Virge???
I would also like to know. Not that I’m interested for my own sake - my app does not use a fixed buffer and I don’t see a good reason to switch to such an extension (unless the extension string gets ‘deprecated’, but I guess this won’t happen) - but I wonder why it’s still not present in the drivers.

Maybe the fact that you have to check the extension string to see if this is supported holds implementors back.
In fact this severely limits extension usefulness and usability.

Originally posted by Obli:
In fact this severely limits extension usefulness and usability.
glIsExtensionSupported()? Oh puh-lease! We’re talking about 25 lines of code here :rolleyes:

– Tom

The IsExtensionSupported stuff was considered for 2.0, but was dropped. It was dropped because it won’t help existing apps that are already broken (and there are lots of them), and new apps can just use one of the numerous extension loading libraries, look at how GLUT does it, etc. The decision was to help people not write buggy code instead of putting a band-aid on the API. That’s what I’m trying to do right now. :slight_smile:

Madman: If you consider writing applications that actually conform to the APIs you are using “overkill”, then I really feel sorry for the people using your apps. That may sound harsh, but if you’re too lazy to check the frickin’ extension string…sheesh!

It’s not that I am that lazy, but it’s just so annoying to add all the frickin’ ext loading stuff to every project I make. It gets especially annoying in the case of short demo programs. I don’t like the way NVIDIA demonstrates cool stuff, because all of the key elements are in some shared/util/stuff files/dirs & it takes quite some time to get to the idea. Best example - the pbuffer demo in the SDK - if you open the main file you’ll notice that all the pbuffer initialization & stuff is somewhere in /shared/utils or something.
Well, I have written one function that doesn’t use a fixed buffer & is easy to call ( checkExt(“glStuff”); ), but it is annoying to copy it to every freakin’ demo. Even more so when I can bypass this $hit & it has worked fine so far.
From my point of view dropping IsExtensionSupported is a really dumb idea. It could have saved some copy/paste time & a lot of extra explanation to people who are looking at the main idea of your code. But nope, we’ll just keep explaining to beginners not to use this and that, and to reserve space for that because that is so. The IsExt approach could even have saved the extra WGLext stuff.
Concerning libraries, they are usually bloated & I prefer using clean GL, where I am responsible for all the stuff going on; moreover, I don’t want to go through the whole licensing process in the case of a commercial app.
Concerning backward compatibility, I just don’t see why it should be any issue. Apps like Doom3 can easily call glIsExtSupp, because all the hw that supports shaders is supported by drivers new enough to have this function. S3 wouldn’t run that stuff anyway. On the other hand, if you are writing frickin’ balls with lighting, then it’s up to you to remember that someone with the Microsoft generic renderer could run the code (and I guess you are aiming at that audience there too), though if you are writing on new hardware, ball tessellation will kill performance on old HW fast enough, so you will have to test on a piece o’ crap anyway to spot all the errors & check FPS there. And the string doesn’t need to be removed, so old apps should be fine.
All we need is one extra comment: “if you intend to use this f/n, remember that a piece o’ crap might not support it”

It’s not that much work to check this stuff every time, but it is annoying for me & I think glIsExtSupported could have made it better. And after 2 years it would be the ideal way to check stuff.

That is my point of view.

This situation is slightly ridiculous (crossbar on NVIDIA anyone?), geeze.

P.S. and idr it’s not about laziness, you think he doesn’t check strings now? You think it takes any real work to do this, even if it did there’s scores of free examples you could cut 'n paste.

Some engineers care about their code and instinctively hate redundancy & code like this. Objections have nothing to do with the actual labor involved. Even if you find the string you have to also check the pointer to be safe even if it should be there according to the string.

Concerning libraries, they are usually bloated & I prefer using clean GL, where I am responsible for all the stuff going on
So, let me make sure I understand this fully. You don’t want to use external libraries, but you’re willing to use OpenGL (which, btw, is a library). Why do you choose some libraries and not others? And, more importantly, why are you complaining about a problem that others have already solved, simply because you don’t like their solution?

Oh, and the whole “IsExtensionSupported” nonsense is an extension, so you’d still have to query it. It is, also, a very simple piece of code that can be written in 5 minutes by any competent programmer, and there are a number of libraries out there with said code in them.

Complaining about a solved problem just because you choose not to use the solution is silly.

moreover, I don’t want to go through the whole licensing process in the case of a commercial app.
Then pick a library that doesn’t have a license of any significance.

The “isextensionsupported” query is unquestionably redundant; I can’t believe this was seriously proposed by anyone, especially considering it’d take a query to use it. It’s 3 lines of code and I doubt it would take 5 minutes if you knew about strstr. There is a serious point about under-the-table functions to be made here.

Portability is a secondary issue; the Windows spec says that wglGetProcAddress should return NULL when it does not succeed. Moreover, functionality deliberately exists in some implementations that is not advertised in the strings, so you can bash madman all you like, but he has a point.

I can call wglGetProcAddress and an implementation can return a valid function pointer (instead of NULL, which is the correct return for failure), and the folks who implement this and explicitly exported that entry point say “don’t call that function, it’s lazy sloppy code”. Well, it might be lazy & sloppy code, but it’s not the developer that’s being lazy and sloppy, especially when they know darned well that some of them work and are exported for a reason. 'Nuff said.

Several of the popular extension loaders use a BSD-like license which allows royalty-free use in commercial apps.

Given the ready availability of extension string testing code, if a person can’t be bothered to write:

GLfloat version = (GLfloat) atof( (const char *) glGetString( GL_VERSION ) );

if ( version >= 1.4f ||
     glutExtensionSupported( "GL_ARB_texture_env_crossbar" ) )
{
    /* crossbar functionality is available */
}

Then what do you call it if not lazy? :confused: After all, some extensions don’t add any functions, so you need that code around anyway! How else do you determine if you can use GL_ARB_texture_non_power_of_two, GL_NV_texture_rectangle, GL_ATI_texture_env_combine3, GL_ARB_texture_env_dot3, etc., etc., etc…

Or just use this:

char *isExtensionSupported(const char *extstring)
{
    static const char *str = NULL;
    if (!str) str = (const char *) glGetString(GL_EXTENSIONS);
    return strstr(str, extstring);
}

How is it lazy? Look the code is there they can cut 'n paste it, it ain’t laziness and they don’t have to link to any lib.

Calling an objection lazy misses the point; the real issue is the confusion caused by wglGetProcAddress calls returning non-NULL values for functions that glGetString says aren’t supported. THAT is the heart of the problem, and you’ll see it clearly if you read his very first post.

How about wglAllocateMemoryNV? This function doesn’t seem to belong to any extension!!! Actually it belongs to WGL_NV_vertex_array_range, but this extension is not listed in the extension string!!! It might be a driver bug. Using this function an app can allocate GPU memory and use it for old-fashioned transfers (with fences). If I use the standard approach (check for the ext and then map the entry points (if any!?)), I’ll never get the wglAllocateMemoryNV entry point.

Because of that I have made a small app that parses glext.h and wglext.h and generates cpp code to map ALL known entry points. Then I check whether each known extension is supported and store the result in one big structure with bool variables for each known extension. The app just needs to check whether some variable is true or false before using some extension feature.

Every few weeks I check for new, updated glext.h and wglext.h files on the OpenGL registry page and regenerate the cpp ext mapping file.

If someone likes this approach, I can send the app for free.

yooyo

err… that’s a WGL extension. It shouldn’t be listed in the OpenGL extension string (or maybe it should for compatibility, but that’s probably limited to older stuff; the one you mention is relatively recent).

Try wglGetExtensionsStringEXT, although you probably have to query for that (in the GL string?)… sigh.

Originally posted by dorbie:
[b]Or just use this:

char *isExtensionSupported(const char *extstring)
{
    static const char *str = NULL;
    if (!str) str = (const char *) glGetString(GL_EXTENSIONS);
    return strstr(str, extstring);
}

How is it lazy? Look the code is there they can cut 'n paste it, it ain’t laziness and they don’t have to link to any lib.[/b]
A simple strstr lookup isn’t sufficient for a reliable lookup because several extension names are substrings of other extension names. Take for example the following extensions:

GL_EXT_texture_env_add
GL_EXT_texture_env_combine
GL_EXT_texture_env_dot3
GL_EXT_texture_env

A strstr lookup for “GL_EXT_texture_env” would find the first extension in the list. Adding a space at the end of the name would work unless the extension is at the end of the list and there is no trailing space. The lookup mechanism is a bit more complicated considering the fact that the extension list is vendor supplied and there are no guarantees about what’s supported and what’s not.

A quick scan of the registry shows that the following extension names are substrings of other, longer (and probably newer) extension names:

GL_ARB_fragment_program
GL_ARB_shadow
GL_EXT_pixel_transform
GL_EXT_texture
GL_EXT_vertex_array
GL_NV_fragment_program
GL_NV_register_combiners
GL_NV_texture_shader
GL_NV_vertex_array_range
GL_NV_vertex_program
GL_SGIX_async
GL_SGIX_pixel_texture

My motto is to go the extra mile and make sure your code is going to work even when the vendor’s driver doesn’t follow the specs.

tranders…
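A substring-safe lookup along the lines tranders describes might look like this (a sketch, not from any particular library; it matches only whole, space-delimited names, and being pure string code it can be tested without a GL context — pass in the result of glGetString(GL_EXTENSIONS)):

```c
#include <string.h>

/* Returns 1 only if 'name' appears in 'list' as a complete, space-delimited
   token, so "GL_EXT_texture_env" will NOT match "GL_EXT_texture_env_add". */
int extensionInList(const char *list, const char *name)
{
    size_t len = strlen(name);
    const char *p = list;

    if (list == NULL || len == 0 || strchr(name, ' ') != NULL)
        return 0;    /* reject malformed queries outright */

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == list) || (p[-1] == ' ');        /* token start */
        int ends_ok   = (p[len] == ' ') || (p[len] == '\0');  /* token end   */
        if (starts_ok && ends_ok)
            return 1;
        p += 1;      /* partial match: keep scanning from the next char */
    }
    return 0;
}
```

This also handles the trailing-space problem tranders mentions: a name at the very end of the list is accepted because the terminating NUL counts as a valid token end.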

Originally posted by idr:
The decision was to help people not write buggy code instead of putting a band-aid on the API. That’s what I’m trying to do right now. :slight_smile:
IMO, if a function were provided that performed a reliable check for the existence of a fully supported extension then everyone’s parsing task would be greatly reduced and buggy code could easily be eliminated. Of course you would still have to query for the IsExtensionSupported function :frowning: Parsing the extension string is reasonably easy, but definitely not without its pitfalls.

tranders

How about wglAllocateMemoryNV? This function doesn’t belong to any extension!!!
Yes, it does. It belongs to NV_vertex_array_range. Check the VAR spec.