OpenGL extensions misunderstanding

Hi,

I’m wondering why some OpenGL examples I found always check that certain extensions are supported by the graphics card before making the corresponding OpenGL calls. For instance, making sure the graphics card supports “GL_ARB_shading_language_100”, and then using the core function “glGenShader(…)”. I thought those extension checks were only needed for “…ARB” functions, since if the graphics card supports a given version of OpenGL, it also supports all of that version’s core functions… Or am I wrong?

I’m also a bit confused by other extensions that have no associated GL calls, such as “GL_ARB_texture_non_power_of_two”. Does it concern only ARB functions, or does it apply to all “texture…” functions?

And sorry, for the example I meant “GL_ARB_shader_objects” and “glCreateShader(…)”

Here’s the first thing you need to understand about OpenGL examples: they are almost always full of bad practice. There are some that are good, but cargo-cult programming is an epidemic in the OpenGL example code ecosystem.

More often than not, they do something simply because they copied it from other code where it made sense. In your example, you’re probably looking at someone who copied their code from an example that was originally written against the extension-based shader API, then upgraded it to the core functions without removing the extension check.

You should check what you need to check, not what some example code tells you to check. If you’re using version 3.3, then check for version 3.3.
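
For what it’s worth, here’s a minimal C sketch of that kind of check. It assumes a current GL context created by your windowing code and headers from a loader such as glad or GLEW (the stock gl.h on Windows only exposes GL 1.1), and the function names are just illustrative:

```c
#include <stdio.h>
#include <glad/glad.h>   /* or GLEW, or any loader exposing GL 3.x enums */

/* Returns non-zero if the current context is at least want_major.want_minor.
 * The GL_MAJOR_VERSION / GL_MINOR_VERSION queries exist on 3.0+ contexts;
 * on older contexts you would parse glGetString(GL_VERSION) instead. */
static int has_gl_version(int want_major, int want_minor)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    return (major > want_major) ||
           (major == want_major && minor >= want_minor);
}

void create_shaders(void)
{
    /* Targeting core 3.3, so the version check is the only gate needed:
     * glCreateShader() and friends are guaranteed by the version itself. */
    if (!has_gl_version(3, 3)) {
        fprintf(stderr, "OpenGL 3.3 or newer required\n");
        return;
    }
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    /* ... glShaderSource, glCompileShader, link into a program ... */
    glDeleteShader(vs);
}
```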

I’m also a bit confused by other extensions that have no associated GL calls, such as “GL_ARB_texture_non_power_of_two”. Does it concern only ARB functions, or does it apply to all “texture…” functions?

texture_non_power_of_two changes the meaning of existing functions. That is, when this extension is present, those functions do different things, and you check for it so that you know whether those different things can be done. If it is present, then glTexImage2D and the other texture-specification functions can take non-power-of-two sizes. If it isn’t, then passing NPOT sizes will cause a GL error.

This is again one of those unnecessary, pointless things many GL examples do out of legacy habit and cargo-cult programming. You should just check for version 2.0, where NPOT support is core behavior.
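
To make that concrete, here’s a rough sketch of the version-first approach. The function names and the 640x480 size are just illustrative, and it assumes a legacy-style context where glGetString(GL_EXTENSIONS) is still legal as a fallback:

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* NPOT textures are core behaviour from 2.0 onward, so check the version
 * first and only fall back to the extension string on 1.x contexts. */
static int npot_textures_supported(void)
{
    /* Desktop GL_VERSION strings start with "major.minor", e.g. "2.1.2 ...". */
    const char *version = (const char *)glGetString(GL_VERSION);
    if (version && version[0] >= '2')
        return 1;

    /* 1.x fallback: scan the classic extension string. */
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_ARB_texture_non_power_of_two") != NULL;
}

void upload_npot_texture(const void *pixels)
{
    if (!npot_textures_supported()) {
        fprintf(stderr, "No NPOT support, pad the image to a power of two\n");
        return;
    }
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* 640x480 is not a power of two; without NPOT support this call
     * would raise a GL error. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 480, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```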

Thank you for your answer, it’s a lot clearer for me now :slight_smile:
And indeed, it’s not the first time I’ve come across weird practices in OpenGL tutorials.