glGetString(GL_EXTENSIONS_1_5);

When you query the extensions string with glGetString(GL_EXTENSIONS), you get all the supported extensions. Some of them are obsolete (compiled vertex array, EXT_texture_env_add and all the ones promoted to ARB extensions…) and some of them are part of the core of the latest OpenGL revisions (BGRA, draw range elements, …).

All those extensions have to be exposed because an application written back in OpenGL 1.1 times (for example) may still query for one of them.

But it is also confusing for the newbie to find them. I have a friend learning OpenGL, and he found those EXT_xxx and ARB_xxx extensions with the same name, and he asked me about them: which one should he use, which one is better, …

What I think would be good is a mechanism to query the existing extensions depending on the OpenGL revision you are writing for. For example, if I know that I’m writing an application that requires OpenGL 1.4, then I can query for extensions for the 1.4 revision (and not for obsolete ones or features that are in the core). It could be something like glGetString(GL_EXTENSIONS_1_4). That way, glGetString(GL_EXTENSIONS) would still exist and old applications would remain compatible.

Hope this helps.

That’s a very interesting idea. There are orthogonal components, though, like the optional imaging subset, and it could possibly cause an explosion of queryables.

glGetString(GL_EXTENSIONS_1_2);
glGetString(GL_EXTENSIONS_1_2_1);
glGetString(GL_EXTENSIONS_1_3);
glGetString(GL_EXTENSIONS_1_4);
glGetString(GL_EXTENSIONS_1_5);
glGetString(GL_EXTENSIONS_1_2_IMAGING);
glGetString(GL_EXTENSIONS_1_2_1_IMAGING);
glGetString(GL_EXTENSIONS_1_3_IMAGING);
glGetString(GL_EXTENSIONS_1_4_IMAGING);
glGetString(GL_EXTENSIONS_1_5_IMAGING);

Now if, somewhere along the line, someone adds a few more optional subsets, the number of enumerants grows exponentially.

Maybe add an extension for querying “modern” extensions. We could pass in an array of GL_VERSION_* and GL_OPTIONAL_* enumerants.

glGetExtensionsvARB(array_of_enumerants);

It’s not a bad idea. This forum section has seen worse.

I just want to add that one motivation for this may (!) be buggy applications that try to copy the extension string into a fixed-size buffer. That is bad design, so it deserves eternal punishment. It’s pointless to copy the extension string at all.

I’m not trying to dismiss the idea though. It just shouldn’t be used as a workaround for broken application code.

And if you do need to copy the extension string for some reason, you can just use strdup().

Since this is a forward-looking forum, imagine it’s 2015: I’m on the ISS Freedom (International Space Station) writing Doom 7: Hell In Space. OpenGL is now at version 2.3.1. And they haven’t changed the extension retrieval mechanism (because I’m making a point by contradiction).

OpenGL has 3 optional imaging subsets for statistical analysis. The medical community has created 5 optional imaging subsets for medical imaging. NASA has 2 optional imaging subsets for non-visible spectrum analysis.

(For convenience) there are 4 major graphics-card companies, which have released about 30 combined extensions a year.

If I finish my proposals, by then I’ll have ten or so extensions with really long names: GL_MYREALNAME0123_USER_DEFINED_CLIP_MESHS, GL_MYREALNAME0123_BUFFER_PROGRAM, and others.

Each version of OpenGL’s core functionality has evolved from previous extensions. So by this point, each version of OpenGL has contributed 3 extensions on average.

The average extension name is 22 characters (the mean over the glext header in my test folder).

Name sizes (22-character average names):
10 × 22 for optional imaging extensions
30 × 11 × 22 for vendor extensions (30 a year over 11 years)
10 × 40 for my personal approved extensions
(3 + 5) × 3 × 22 for core extensions
Just over 8K for the extension string, including alignment material.

glGetString(GL_EXTENSIONS) returns 8K of data covering hundreds of extensions.

glGetString(GL_EXTENSIONS_VERSION_OPTIONALS)
has 2^10 optional combinations and 13 versions, for 13 × 1024 ≈ 13K enumerants. 13K possible constant values must be enumerated in the GL header. A naive driver would implement this directly in a table of 13K × 4K, for 52M of data required in static tables. A less naive driver could implement this in 4MB. And a space-conscious driver, in a hash of dynamically allocated tables (with the lifespan of the owning context).

glGetExtensionsvARB(array of extensions and optionals) has 23 enumerants (13 versions plus 10 optionals), but 13K logical function calls (and millions of illogical ones). Again, a naive driver would have to implement the table directly, for 52M of data; a less naive one in 4MB; and a space-conscious one dynamically.

As for run-time, it comes down to parsing 8K versus R, the requested extension set. Usually extensions are only parsed once (hence glGetExtensionsvARB and the new glGetString variants would also only be called once). R tends to be significantly smaller than 8K for most calls.

If anyone can post the number of extensions available on each version of OpenGL, that would be useful. (Exclude all subsumed extensions, like when vertex arrays were subsumed into OpenGL 1.1) This will require some work.

Chemdog, first of all, I’d like to inform you that the American Medical Association recommends that you smoke no more than three crack pipes a day.

Second, extensions are not maintained forever. I would guess that the average extension has a lifespan of about 3 to 4 years. Older extensions that are not frequently used are deprecated. Personally, I dislike deprecation, but that’s the way it is.

Finally, 11 years from now, assuming Moore’s Law applies all that time, PCs will have about 80 GB of RAM. So even if it did require 4 megs to store some text, it wouldn’t matter all that much.

Quoting Cab:

All those extensions have to be exposed because an application written back in OpenGL 1.1 times (for example) may still query for one of them.

Wrong. When you rely on an extension in an application, you risk your application being broken in the future.

But it is also confusing for the newbie to find them. I have a friend learning OpenGL, and he found those EXT_xxx and ARB_xxx extensions with the same name, and he asked me about them: which one should he use, which one is better, …

This is not the way to solve this problem. Tell your friend to read about the extensions to find out which one is better. If he is using the extensions string as his sole reference for OpenGL, he has bigger problems.

Chemdog, first of all, I’d like to inform you that the American Medical Association recommends that you smoke no more than three crack pipes a day.

A personal flame, a little late for V-Day, me boy. But just in time for Saint Patrick’s, when everyone is Irish, so I have to let it slide.

First of all, the AMA recommends that you “Don’t Abuse Drugs”, whether illegal, prescription, or OTC. Inappropriate drug use can be reported to the local authorities. And kids “Be Cool, Stay In School”.

Wrong. When you rely on an extension in an application, you risk your application being broken in the future.

You are right, but that’s not what Cab was referring to.

If an implementor chooses to support version 1.x and exposes extension y, then when driver version 1.(x+1) comes out, extension y should still be exposed. (Should; implementors can be daring.)

If I write an application today using some extensions (and I program intelligently, so I have multiple fallbacks and alternate pipelines), then future minor versions of OpenGL will not break my code. (Every 1.* version is upwardly compatible with the previous.)

Older extensions aren’t deprecated, they just lose support. Although you could argue that this is deprecation, there will be implementors who choose to support older extensions.

Even 20 years from now, I can still write code for “GL_EXT_abgr”, and I can still use (intelligently written) applications that query that extension (ones that rely on some form of fallback or alternate pipeline).

And some extensions don’t die, they ascend into core features.

Assuming Moore’s law, in 11 years there will be about 8 doublings, which will show up as cost reduction or transistor density. So I just need to come up with an object totalling 8 doublings’ worth of difference. Generally, I would just put today’s PC in tomorrow’s wristwatch, and the math comes out about right. So the problem is still valid, though the domain is possibly more restricted.

The first time I used OpenGL for a large project, it was for a physics simulation you wouldn’t understand. (No offense; it’s just post-doc physics that would take about as long to explain as it did to implement.) I was used to writing my own rendering packages (because OpenGL didn’t exist when I released my first game), so learning it was not difficult. (For reference, the project used GL_EXT_vertex_array.) But for physicists, mathematicians, and engineers, the Red Book and a computer science friend may be all you have to learn from.