Teaching OpenGL

From another thread:

[QUOTE=Aleksandar;1238234]Hey guys, how about moving the discussion to another thread? Everything now relates to teaching OpenGL and has nothing to do with display lists.

Teaching graphics is a really interesting topic. Since 2004 I’ve been thinking of switching to a pure shader-based OpenGL course as part of the undergraduate Computer Graphics course, but until now nothing has changed. I’ve been discouraged by my colleagues, who suggest that a shader-based approach is too complicated for that kind of course. Even with immediate-mode rendering it is pretty hard for students to do everything I give them in the lab exercises. Since this is the first and only CG course, there are a lot of concepts they have to absorb.

As you’ve probably noticed, I want to do everything by myself, and that’s the policy I impose on my students in order to make them understand what’s really happening under the hood. Writing a framework for dealing with shaders is not a simple task. But if I do it for them, they’ll probably copy that framework into other projects as the ultimate solution, which it certainly is not. We have to give students knowledge of how to use OpenGL, not of a certain home-built framework.[/QUOTE]

I teach two graphics courses, one purely about CG and one where advanced CG topics relevant to game programming are a large part. The first one is extremely old, founded some time in the early 80’s, and it landed in my hands a decade ago. When I got it, it had no ambition for high performance whatsoever, which I immediately changed. OpenGL came in soon after. Shaders entered in 2004 when I started the second course. At that time, a “pure shader” course was out of the question. Shaders moved to the first course in 2008 (because the topic was so important that it couldn’t wait for the second). This winter, I kicked out immediate mode for good, for the simple reason that 3.2 has finally propagated far enough to be considered commonplace. (There were still students in the course who had problems with unsupported OSes.)

This was a major revision, of course. All labs were completely rewritten, much of my course book was rewritten, and all lectures had to be revised. Much work, lots of just-in-time changes. The response from the students was very positive. They liked knowing that they were being taught modern CG, and the price in complexity was worth it (and it wasn’t as bad as it seems, since all transform code turns into straight math operations, so you actually win some ground there as a simplification). The students performed at least as well as before (I would say that the number of downright amazing student projects grew), and in the course evaluation they liked my approach; I even got praise for my rewritten-at-overdrive-speed textbook as one of the best they had had (despite all the glitches that have to follow a fast revision). So the move was quite successful, and I think it was worth the work.

The question of support code, frameworks, add-ons, whatever you may call them, is more relevant than before. In the labs, we use quite a bit of in-house support code for managing shaders, loading textures and models and for math operations, and that package had to grow when moving to 3.2.

So if you teach OpenGL, have you moved to 3.2 or similar non-deprecated OpenGL, and how was the move?

I moved to teaching 3.2 core nearly a year ago. One open question was how many students still had older hardware or embedded Intel chips with lower specs. So the first thing we did in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn’t get everything from the beginning, just a GLFW application with GLM and GLEW (GLEW was patched to support the core profile on Mac OS X)). Turns out: of nearly 100 students, only 4 or 5 (I don’t have the numbers at hand) had GPUs that were too old or were Mac users who hadn’t upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context; not 3.1, not 3.0, not one without a version number in the hope of getting the latest). Those students got an account for our lab.
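To make the OS X hint concrete, here is a minimal sketch of such a version check (written against the current GLFW 3 API, so the exact hint names are an assumption; our actual framework looked different):

// Sketch: request exactly a 3.2 core, forward-compatible context (required
// on OS X to get anything beyond 2.1) and print what the driver gives us.
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // mandatory on OS X

    GLFWwindow *window = glfwCreateWindow(640, 480, "Version check", NULL, NULL);
    if (!window) { glfwTerminate(); return 1; }

    glfwMakeContextCurrent(window);
    std::printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    std::printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}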

Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even installed an ARB_debug_output callback if that was supported. When we discussed VBOs, I let them code all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.
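To sketch what I mean by that support code (the function name is mine, not from our framework; it assumes an extension loader like GLEW is already initialized):

// Compile one shader stage and, on failure, print the driver's info log so
// students see the compiler message instead of a silently black screen.
#include <cstdio>

GLuint compileShader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint status = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        std::fprintf(stderr, "shader compile error:\n%s\n", log);
    }
    return shader;
}

The same pattern repeats for linking, with glGetProgramiv(program, GL_LINK_STATUS, …) and glGetProgramInfoLog.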

Yes, you have to hide a lot of complexity and focus on only one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good, and it’s even more obvious to the students why we teach the basics and the math behind graphics: they have to implement it on their own in the shaders (there is no magic gluLookAt or glVertex() that lets you get by without understanding the modelview matrix etc.).

On a personal note, I’m strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics works anymore. If I taught my students outdated stuff with no relevance to the real world, stuff that doesn’t even help them understand the relevant basics, I would fail as a teacher, IMHO.

tl;dr:

  • there is no reason to stick to immediate mode for teaching
  • teaching core and shaders from the beginning works
  • update your curriculum

[QUOTE=menzel;1238314]I moved to teaching 3.2 core nearly a year ago. One open question was how many students still had older hardware or embedded Intel chips with lower specs. So the first thing we did in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn’t get everything from the beginning, just a GLFW application with GLM and GLEW (GLEW was patched to support the core profile on Mac OS X)). Turns out: of nearly 100 students, only 4 or 5 (I don’t have the numbers at hand) had GPUs that were too old or were Mac users who hadn’t upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context; not 3.1, not 3.0, not one without a version number in the hope of getting the latest). Those students got an account for our lab.
[/QUOTE]
OS X is indeed an important reason to use 3.2 and no other version. There were Mac users with older versions who had to upgrade (myself included), but there were also some free Unixes that didn’t support 3.2. Still, as long as we have a lab with usable computers, supporting the students’ own computers is a bit of a luxury.

We made a stripped-down GLUT clone for OS X, although FreeGLUT should work as well. Apple’s built-in GLUT is, however, not updated.

Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even installed an ARB_debug_output callback if that was supported. When we discussed VBOs, I let them code all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.

I didn’t have them writing shader loading code, but they did work directly with VBOs and VAOs, as well as writing their own “look-at” function.
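A hand-rolled “look-at” along the lines of what they write might look like this (a sketch using GLM types; GLM ships its own glm::lookAt, but the point of the exercise is to derive the matrix yourself):

#include <glm/glm.hpp>

// Build a view matrix from eye position, target point and an up hint: the
// rotation rows are the camera axes, the translation moves the eye to the
// origin.
glm::mat4 myLookAt(glm::vec3 eye, glm::vec3 center, glm::vec3 up)
{
    glm::vec3 n = glm::normalize(eye - center);      // backward axis
    glm::vec3 s = glm::normalize(glm::cross(up, n)); // right axis
    glm::vec3 u = glm::cross(n, s);                  // corrected up axis

    glm::mat4 m(1.0f); // GLM is column-major: m[column][row]
    m[0][0] = s.x; m[1][0] = s.y; m[2][0] = s.z;
    m[0][1] = u.x; m[1][1] = u.y; m[2][1] = u.z;
    m[0][2] = n.x; m[1][2] = n.y; m[2][2] = n.z;
    m[3][0] = -glm::dot(s, eye);
    m[3][1] = -glm::dot(u, eye);
    m[3][2] = -glm::dot(n, eye);
    return m;
}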

Yes, you have to hide a lot of complexity and focus on only one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good, and it’s even more obvious to the students why we teach the basics and the math behind graphics: they have to implement it on their own in the shaders (there is no magic gluLookAt or glVertex() that lets you get by without understanding the modelview matrix etc.).

On a personal note, I’m strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics works anymore. If I taught my students outdated stuff with no relevance to the real world, stuff that doesn’t even help them understand the relevant basics, I would fail as a teacher, IMHO.

tl;dr:

  • there is no reason to stick to immediate mode for teaching
  • teaching core and shaders from the beginning works
  • update your curriculum

Up to OpenGL 2.1, starting with immediate mode was kind of customary, not least since the Red Book, Angel, and Hearn&Baker all did it. So I kind of followed their example, but only for the very first lab, and then we moved to glDrawElements. But now that I see the effect of skipping it, I believe that habit was indeed not a good one. The students got a comfortable backdoor, which I often used myself for pure prototyping purposes, but some never realized that the subsequent move to arrays was important, got stuck in immediate mode, and of course ran into performance problems. Now that they get VAOs as their primary tool, nobody has that problem. They get a tougher start, but a better result.
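For comparison, this is roughly the “tougher start” they now face before the first triangle appears (a GL 3.2 core sketch with my own variable names):

// One-time setup: a VAO recording one position attribute and an index buffer.
GLuint vao, vbo, ibo;
const GLfloat positions[] = { -1.0f, -1.0f, 0.0f,
                               1.0f, -1.0f, 0.0f,
                               0.0f,  1.0f, 0.0f };
const GLuint indices[] = { 0, 1, 2 };

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // attribute 0 = position in the vertex shader
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); // recorded in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Per frame:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);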

But still, I am uncomfortable with a far less intuitive API. The old one was more self-explanatory. The move was necessary for performance reasons, but I ask myself whether we couldn’t have packaged it differently. We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really the sign of a good API?

I find it was elegant a long time ago, before shaders.
But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.

[QUOTE=ZbuffeR;1238323]I find it was elegant a long time ago, before shaders.
But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.[/QUOTE]
There is of course one consideration to take into account: OpenGL fits any language. I think that is a great feature and one that makes it last; you can carry OpenGL knowledge across language and OS borders. That means no class libraries, no locking it into one specific object model or other language-specific constructs. But I wouldn’t rule out the possibility of making a smoother interface that is still so close to OpenGL that it doesn’t hide it but merely helps it.

Do we have anyone here teaching OpenGL who hasn’t taken the 3.2-or-similar jump, or who has taken it and had other experiences?

I’m about to start teaching 3D graphics to some co-workers soon (post-university level), and I’ve decided to go with WebGL (which is essentially OpenGL ES 2.0). It works cross-platform, and having no compile step speeds up iteration, which I hope will allow more experimentation and a greater understanding of the material. I thought it would make more sense to start with shaders first and then move backward to the vertex submission pipeline, but I get the impression that you all start with vertex submission.

I considered starting with shaders, and that is still an open question to me. If I start with shaders, I must provide a bigger main program, but if the students are not supposed to look at it for the first lab, that doesn’t have to be a problem. The advantage would be that they could get a firm grip on the shader concept before dealing with the details of the main program.

WebGL is much hyped, and of course web-browser-based graphics has its value, but using a scripting language will bog down the CPU side. Are we using inefficient scripting languages just because we have compilers that are too slow? Too much energy is wasted on scripting engines already. I have heard about JIT compilers for JavaScript that might help, but even JIT-compiled Java is pretty inefficient, so I don’t expect much better from JavaScript. WebGL is included at the end of my course as one of the “where to go from here” subjects, but I hesitate to use it for the whole course. Lower performance and no free choice of programming language, but you get it in the web browser. Not obviously the right path.

I’m glad to hear I’m not the only one who has thought of starting with shaders.

I actually agree with you on this, if you have control over the entire curriculum. In my case this is just an after-hours course for interested co-workers, and I’m not positive they all know C++ (or any other particular language). They’ll likely not all know JavaScript either, but that’s extremely easy to pick up. I’ll be sure to give them pointers on where to go for native OpenGL information, though. As for performance, static geometry is not an issue at all, of course, since it never leaves the GPU once it’s there.

But still, I am uncomfortable with a far less intuitive API. The old one was more self-explanatory. The move was necessary for performance reasons, but I ask myself whether we couldn’t have packaged it differently. We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really the sign of a good API?

Really? “Elegant and intuitive”? Are you sure you want to make that claim?

Very well; let’s test that theory. Here’s a simple fragment shader that takes a texture color, uses the alpha to blend it with a uniform color (for, say, team colors in an RTS game), and then multiplies that with vertex-calculated lighting:


in vec2 texCoord;
in vec4 lightFactor;
out vec4 outputColor;

uniform sampler2D diffuseSampler;
uniform vec4 baseColor;

void main()
{
  vec4 maskingDiffuse = texture(diffuseSampler, texCoord);
  vec4 diffuseColor = mix(maskingDiffuse, baseColor, maskingDiffuse.a);
  outputColor = diffuseColor * lightFactor;
}

And here’s the same thing, only done with fixed-function GL 2.1 stuff:


glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_ALPHA, GL_SRC_ALPHA);

glActiveTexture(GL_TEXTURE1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);

Marvel at the more “elegant and intuitive API”. :doh:

Without looking it up in the documentation, can you even tell that the env_combine version does the same thing as the GLSL version? It’s really easy to forget what you have to do in non-shader GL the moment you want to do anything out of the ordinary. That’s why people say that the older API was more “elegant”: because they never tried to do anything with it beyond blasting a vertex-lit triangle with a single texture onto the screen.

The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it’s easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.

This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

When it comes time for them to learn what’s actually going on, they have no idea how to do it. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it’s great once you get the hang of it. And the students have active agency in their learning and a real understanding of what’s going on.

I have to agree.

I think it is important, when talking about teaching, to make clear whether your students are undergrads or people who need to learn OpenGL for professional use. For my two bits: if you are talking about undergrads, I feel a watered-down version that focuses on the fundamentals of graphics is more important. The actual OpenGL API is going to change radically over the next 10 years as hardware changes, so teaching them the nitty-gritty is mostly wasted. I find it very frustrating when I hire a graduate who knows, say, OpenGL but is completely lost when you move them to DirectX, because they lack a lot of fundamentals. Knowing the basic interface between the GPU and CPU is good, but I don’t think it matters whether they know all the calls to compile a shader.

The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it’s easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.

This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

When it comes time for them to learn what’s actually going on, they have no idea how to do it. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it’s great once you get the hang of it. And the students have active agency in their learning and a real understanding of what’s going on.

Completely agree

WebGL from the OpenGL side is fine, as it is basically OpenGL ES, which is closer to core than to compatibility. Whether driving the GPU from a scripting language is a good idea is debatable, but as a way of learning graphics programming it might be fine.
I start (after some basic math) by describing the rendering pipeline without the fragment shaders: just vertex processing, clipping, rasterization, and vertex transformations (rotation and translation by matrix multiplication). Then we implement transformations and projections using GLM. Then we move some of this code into the vertex shader and load shaders. In parallel, the theory part can cover lighting, textures, and aliasing (and thus fill the gaps in the rendering pipeline). Then we try out lighting and, later, texturing via fragment shaders. In between we have to look at VBOs/VAOs in more detail, as we need user-defined vertex attributes for the lighting (normals) and for texture coordinates.
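A rough sketch of that hand-over step (names are mine; it assumes a recent GLM where glm::perspective takes radians):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the transforms on the CPU with GLM, then upload the product as a
// single uniform; "mvp" is an assumed name in the student's vertex shader.
void setTransforms(GLuint program, float angle)
{
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 view  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 mvp   = projection * view * model;

    GLint loc = glGetUniformLocation(program, "mvp");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));
}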

Mac OS X is in fact a reason to stick with 3.2, but on the other hand, anything beyond 3.3 is not so common right now, and even working with geometry shaders might be too much for an introductory course (the theory part, however, discusses what geometry and tessellation shaders are for, but just at the level of what you normally do with them and where they fit into the pipeline).

Teaching Computer Graphics Basics using OpenGL and teaching high-end programming skills using OpenGL are two different things.

For example, I have classes where I have to teach undergraduate students some basic graphics stuff. They have a theoretical part covering many aspects of computer graphics and a practical part divided into 2D and 3D graphics API usage. OpenGL is, of course, related to the second part (the 3D graphics API). When I started, it was a question whether to use OpenGL, or D3D, or both. I chose OpenGL and never regretted it. During the “OpenGL part” of the course, students have to gain some basic skills, including: setting up OpenGL in a Windows application, basic modeling and viewing (combining transformations etc.), lighting, and texturing. For all that we’ve got just 4 weeks (8 hours of teaching and 3 labs). At the end they get a lecture about shaders, but they don’t need it for the exam. If I tried to teach them the modern approach, I’d spend all the time just on the setup. Also, I’m not a fan of the libraries (extension handling, math, etc.). Implementing all of this is a huge amount of work for just one month.

On the other hand, I’m planning a totally new course that will guide students through the whole 3D pipeline. It would be completely based on GLSL. On this point I have a question for the community: should I base it on the separate shader architecture, and should I use the DSA approach?
It must follow the GLSL specification, but be as clean and straightforward as possible.

Also, it would be nice to see some other one-semester CG curricula using OpenGL.

I am going to share my experiences with teaching 3D graphics.

On one hand, I prefer to do the math first (projection and projection matrices, perspective-correct interpolation, projective (aka clip) coordinates, normalized device coordinates, orientation, texturing: filters and mipmapping). But that is a hard ride for most students. Linear algebra skills are often quite lacking, and there are so many concepts that need to be introduced at the same time. All of the above is needed to explain the anatomy of a shader that just draws a triangle on the screen. In addition, one needs to go through the differences between attributes, varyings, and uniforms (or ins, outs, and uniforms for GL3+), and oh, all the awful setup code to create a shader, etc.

On the other hand, starting with the fixed-function pipeline allows one to introduce each of these concepts a touch more gently and naturally. I am NOT talking about doing multi-texturing or lighting with the fixed-function pipeline, just the starting material to get a textured triangle on the screen. Once each of the concepts (projection, clip coordinates, normalized coordinates, texturing, and orientation) is down pat, one can move to a simple shader, and then on to lighting, effects, etc. [and avoid the clumsy fixed-function multi-texturing interface]. Additionally, starting from immediate mode and moving to glDrawElements (and friends) gives a natural, easy way to explain the differences between ins, outs, and uniforms. As a side thought, one can also use the immediate-mode model to better explain EmitVertex() in geometry shaders.

Should I base it on the separate shader architecture, and should I use the DSA approach?

I have another bit of advice: I would not start with SSO but introduce it later, because again there is a risk of too many concepts too soon. As for DSA, the ugly part is that EXT_direct_state_access is just an extension, with all the hairy warts and peculiarities of the fixed-function past. Admittedly, the whole “bind-to-edit” thing just plain sucks, but until edit-without-bind is core, I would not teach DSA.
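To make the bind-to-edit complaint concrete, compare setting one texture parameter both ways (a sketch; the second call is the actual EXT_direct_state_access entry point):

// Bind-to-edit: whatever was bound to GL_TEXTURE_2D before is now replaced,
// a side effect that later code may silently depend on.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// DSA: edits 'tex' directly, no binding point disturbed.
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);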

Additionally, although I have much more fun with desktop GL, a great deal of the commercial action is in GLES2… which in my opinion just sucks to use at times; both SSO (though there is an extension for it on iOS) and DSA are unavailable on almost all GLES2 implementations. As a side note, GLES2 has all sorts of “interesting issues” that are almost always unpleasant when encountered as a surprise…

If one is not worried about the embedded interfaces and can assume GL3+ (or rather a platform that has both GL 2.1 and a GL3+ core), then a later section of the class should cover (if it hasn’t already) buffer objects, uniform buffer objects, texture buffer objects, transform feedback… and geometry shaders…

But if doing embedded, a section on tile-based renderers is a must when getting into render-to-texture.

I would strongly advise providing a header file with macro magic to check for GL errors after each GL call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, but without the macro magic that won’t give a line number or file].
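Something along these lines (a sketch only, not the actual header I hand out):

#include <cstdio>

// Wrap a GL call so that, in debug builds, a failure reports the offending
// call together with file and line.
#define GL_CHECK(call)                                                   \
    do {                                                                 \
        call;                                                            \
        GLenum err = glGetError();                                       \
        if (err != GL_NO_ERROR)                                          \
            std::fprintf(stderr, "GL error 0x%04X after %s (%s:%d)\n",   \
                         err, #call, __FILE__, __LINE__);                \
    } while (0)

// Usage:
GL_CHECK(glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0));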

Lastly, and sadly: a good section on the ugly reality (which is far more horribly true in embedded) of techniques for pinpointing whether an error comes from your code or from the driver [on desktop this is much rarer, but on embedded, every day is a fight].

If you get to teach GL4+, I am so envious… almost all of my GL teaching takes the form of one-week trainings for commercial clients, almost always on GLES2, and most of the time the attendees’ linear algebra skills are lacking.

If I start with separate shader objects, they won’t be aware that the concept of a monolithic program exists. Personally, I still don’t use separate shader objects, but if it is something widely used (or will be), maybe it is better to introduce the concept as soon as possible.
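For context, my understanding of what the separate-shader-object model looks like in code (a sketch of the ARB_separate_shader_objects API; variable names are mine):

// Each stage becomes its own program; a pipeline object mixes and matches
// them, so stages can be swapped without relinking a monolithic program.
GLuint vs = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSource);
GLuint fs = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSource);

GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vs);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fs);
glBindProgramPipeline(pipeline);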

[QUOTE=kRogue;1238363]I would strongly advise providing a header file with macro magic to check for GL errors after each GL call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, but without the macro magic that won’t give a line number or file].
[/QUOTE]
I’m using debug_output along with the VS debugger. A call stack pinpoints the error precisely in most cases.

Thanks for sharing your experience! The course I’m talking about is still under consideration. There should be a part about GPU architecture preceding the 3D-pipeline execution path, and also a part about debugging and profiling at the end of the course. When I get to implementing it, I’ll consult you again. :wink:

[QUOTE=Alfonse Reinheart;1238342]Really? “Elegant and intuitive”? Are you sure you want to make that claim?
[/QUOTE]
Partially. The basic API is elegant and intuitive, but certainly not all the additions. As in many other APIs, additions are all too often tacked on without much care for the design. Your example of texture combiners is one where OpenGL really went wrong in the fixed pipeline. It was hairy and too little, too late. Shaders made it obsolete overnight. I am happy that I never taught combiners in my courses but turned to shaders long ago for any blending problems beyond the basic ones. Another place where I am not all that happy is multitexturing, with its somewhat hairy multiple texture coordinates. But the current API can be just as hairy in places. Shaders feel unnecessarily complex, especially the multiple ways of specifying shader variables from the main program.

We can do better; the question is how.

My favourite complaint about the API is its heavy dependence on global state. Much of the effort in programming design elsewhere has gone into localizing state, reducing risks and side effects.

And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.

[QUOTE=Kopelrativ;1238459]My favourite complaint about the API is its heavy dependence on global state. Much of the effort in programming design elsewhere has gone into localizing state, reducing risks and side effects.

And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.[/QUOTE]
And this is one of the good things in 3.2: fewer hidden states to keep track of. No current matrices, light sources, or texture coordinates that you set and forget. Any such carelessness is more visible today.

Any such carelessness is more visible today.

I disagree. With bind-to-modify remaining in important areas, and especially when prototyping something quickly, such carelessness is still easy to miss. And it’s not a matter of visibility, it’s a matter of non-existence: something that existed earlier just isn’t there anymore. That doesn’t mean that in the areas that remain, state is any more visible than in the areas that were removed. In larger systems, you still don’t have any clue which buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, or whether the current active texture unit has TEXTURE_2D and TEXTURE_CUBE_MAP set simultaneously, unless you have some strategy for tracking the state yourself, some strategy for binding to targets without collisions, or some strategy for unbinding everything right after usage. How many drawbuffers are active on the current FBO again? Is it a readbuffer? Oh, damn, that blend function isn’t correct in this place. You can always use glGet*() to retrieve the current state, but nobody wants that in a real-time application. So, there’s still plenty of state left that can lead to wrong results all over the place.

Those problems are all easily resolved by simply binding to modify and unbinding after the modification. Then, you don’t have to care what’s bound to GL_ELEMENT_ARRAY_BUFFER, because you know it’s nothing.

Keep your changes local and you won’t have a problem.
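For example (a sketch with made-up names):

// Bind, edit, unbind: nothing stays bound to GL_ELEMENT_ARRAY_BUFFER, so no
// later code can inherit a stale binding by accident.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(newIndices), newIndices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);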

How many drawbuffers are active on the current FBO again? Is it a readbuffer?

I don’t see how DSA helps with that. And what is a “readbuffer”?