Why is OpenGL so dynamic?

Hello

I’m wondering why OpenGL is so dynamic, rather than a bit more static like CPU programming (C, C++…).

With static programming, you can use a compiler either to make an executable that runs faster or to check for errors in your code.

For example, I don’t understand why, if I want to use a 2D texture in a program, I need to call glEnable(GL_TEXTURE_2D), and if I don’t, nobody can tell me that I did something wrong…
The only result will be no texture in my scene, and if I’m not an OpenGL expert, it will take at least 10 minutes to find the bug…
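
For instance, something like this (a minimal fixed-function sketch; myTexture stands for a texture object created earlier):

glBindTexture(GL_TEXTURE_2D, myTexture);   // the texture object is bound and valid
// glEnable(GL_TEXTURE_2D);                // forgetting this line is not an error...
glBegin(GL_QUADS);                         // ...the quad simply comes out untextured
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();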

OpenGL code is verbose and hard to debug. I may be a novice, but I think that a bit more static checking could make OpenGL less painful and easier to use…

Maybe I’m missing something. So my question is: do you know why the designers of OpenGL made these choices?

Thank you!

I do not think this applies only to OpenGL.
For example, in C++, if you don’t initialize a variable, or worse, initialize it with the wrong value in a constructor, then depending on the complexity of your program it can still take ten minutes or even hours to find and debug the problem. Not to mention the various memory problems in C/C++.
It’s just how programming is.
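
For instance, a contrived C++ sketch:

struct Player {
    int health;
    Player() { /* forgot: health = 100; */ }   // compiles without any error or warning by default
};

void update()
{
    Player p;
    if (p.health > 0) {   // reads an indeterminate value
        // may or may not run, depending on whatever happened to be in memory
    }
}
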

The reason is that OpenGL is a complex, ugly state machine. The original OpenGL 3.0 proposal would have replaced the state machine with immutable objects, which would work along the lines of your idea.

While OpenGL 3.0 didn’t deliver on that proposal, the new “forward-compatible” API is much better in this regard: it reduces the amount of available state (no need to enable/disable lighting, texturing and so on, no “current” matrices, etc.) and provides a cleaner, more sane API. EXT_direct_state_access builds on top of this and improves the current bind/modify/unbind pattern by removing the need to bind and unbind.
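
Roughly, the difference looks like this (tex stands for an existing texture object):

// classic bind/modify/unbind: the object is modified through a selector (the binding point)
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);

// EXT_direct_state_access: the object is named directly, no binding required
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);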

You’re never going to get all the kinks out of it, simply because there are so many things you could do, and as Stephen said, forward-compatible mode fixes a lot of this.
Still, most OpenGL problems come from the developer doing something that was not intended but may yield a “correct” result, like improper transformations or using data the wrong way. The compiler can’t work out all those problems, so in the end you still have to become a bit of an expert.

But yes, texturing needs to be fixed. I wish they would just handle it the same way we use vertex buffers, by linking them directly to the samplers in the fragment shader; seeing as the fragment shader is mandatory, this would be no problem at all.
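
For reference, this is roughly how a texture gets attached to a sampler today (prog, tex and the “diffuseMap” uniform name are only placeholders):

// today: the sampler uniform only stores a texture unit index,
// and the texture has to be bound to that unit separately
glUseProgram(prog);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(glGetUniformLocation(prog, "diffuseMap"), 0);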

Do you know why the designers of OpenGL made these choices?

Keep in mind that OpenGL is a hardware abstraction.
Especially at the start (1992), the API structure was very close to how the 3D hardware worked:
glEnable(GL_TEXTURE_2D); // flips a bit in one of the hardware registers to enable the 2D texturing stage.

How do you expect a compiler to catch that you wanted texturing but you did not say so?
By the way, this particular command is a no-op when a GLSL shader is in use.

As said above, GL 3.0 with a forward-compatible context, or GL 3.1, does a lot of cleanup and modernization of the API.

Thanks a lot for all these answers.
I’ll remember that it’s just a hardware abstraction. It is at least better than assembly :)

How do you expect a compiler to catch that you wanted texturing but you did not say so?

I wouldn’t expect that from a compiler, but it could be easier if this register were set automatically. If a part of a program uses 2D textures, it contains enough information to know that the 2D texture bit should be activated before this part and deactivated after it.
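
Something in that direction can at least be expressed on the application side with a small wrapper (a hypothetical C++ sketch, not an OpenGL feature):

// hypothetical helper: enable a capability on entry, disable it on exit
struct ScopedEnable {
    GLenum cap;
    explicit ScopedEnable(GLenum c) : cap(c) { glEnable(cap); }
    ~ScopedEnable() { glDisable(cap); }
};

void drawTexturedPart()
{
    ScopedEnable texturing(GL_TEXTURE_2D);   // "this part uses 2D textures"
    // ... bind the texture and draw ...
}                                            // texturing switched off again here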

However, maybe I have this point of view because I’m used to programming on a CPU, which is a much more generic architecture (and much slower…).

I haven’t had a look at OpenGL 3.1 yet, but I think I should; first, though, I need to finish my final internship report… :(

If a part of a program uses 2D textures

This means glEnable(GL_TEXTURE_2D); is called.
Otherwise, the program only prepares and sets up the texture(s). Sometimes you only want to activate texturing for a small part of the scene and keep other parts untextured, such as wireframe, pure color gradients, etc., all in the same frame. Enabling texturing by default would be even more confusing.
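
For example (wallTexture and the two draw functions are placeholders):

// same frame: textured geometry first, then untextured overlays
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, wallTexture);
drawWalls();                      // textured

glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 1.0f, 1.0f);
drawWireframeOverlay();           // plain colored lines, no texture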

Hi, I’m a beginner. On the subject of the Z-buffer: if one polygon just has a color and another has a texture, the textured polygon may affect the colored one and the colored one may affect the textured one.

But let’s leave that aside.

If we did something like turning lighting on automatically whenever a lighting function is called, then what about turning it off again for other purposes? These explicit switches are needed because they keep things simple and basic: a function built on top of them is a child of those basic switches, not their master.
(Sorry for my weak English; I hope this answers what you asked.)