Teaching OpenGL



Ragnemalm
06-03-2012, 09:35 AM
From another thread:


Hey guys, how about moving the discussion to another thread? All the stuff now relates to teaching OpenGL and has nothing in common with display lists.

Teaching graphics is a really interesting topic. Since 2004 I've been thinking of switching to a pure shader-based OpenGL course as part of the undergraduate Computer Graphics course, but until now nothing has changed. I've been discouraged by my colleagues, who suggest that a shader-based approach is too complicated for that kind of course. Even with immediate-mode rendering it is pretty hard for students to do everything I give them in the lab exercises. Since this is the first and only CG course, there are a lot of concepts they have to absorb.

As you've probably noticed, I want to do everything by myself, and that's the policy I impose on my students in order to make them understand what's really happening under the hood. Writing a framework for dealing with shaders is not a simple task. But if I do it for them, they'll probably copy that framework into other projects as if it were the ultimate solution, which it certainly is not. We have to give students knowledge of how to use OpenGL, not of some home-built framework.

I teach two graphics courses, one purely about CG and one where advanced CG topics relevant to game programming are a large part. The first one is extremely old, founded some time in the early 80's, and it landed in my hands a decade ago. When I got it, it had no ambition for high performance whatsoever, which I immediately changed. OpenGL got in soon after. Shaders entered in 2004 when I started the second course. At that time, a "pure shader" course was out of the question. Shaders moved to the first course in 2008 (because the topic was so important that it couldn't wait for the second). This winter, I kicked out immediate mode for good, for the simple reason that 3.2 has finally propagated far enough to be considered commonplace. (There were still students in the course who had problems with unsupported OSes.)

This was a major revision, of course. All labs were completely rewritten, much of my course book was rewritten, and all lectures had to be revised. Much work, lots of just-in-time changes. The response from the students was very positive. They liked knowing that they were being taught modern CG, and the price in complexity was worth it (and it wasn't as bad as it seems, since all transform code turns into straight math operations, so you actually win some ground there in simplification). The students performed at least as well as before (I would say the number of downright amazing student projects grew), and in the course evaluation they liked my approach; I even got praise for my overdrive-speed-rewritten textbook as one of the best they had had (despite all the glitches that have to follow a fast revision). The move was quite successful, so I think it was worth the work.

The question of support code, frameworks, add-ons, whatever you may call them, is more relevant than before. In the labs we use quite a bit of in-house support code for managing shaders, loading textures and models, and for math operations, and that package had to grow when moving to 3.2.

So if you teach OpenGL, have you moved to 3.2 or similar non-deprecated OpenGL, and how was the move?

menzel
06-03-2012, 12:42 PM
I moved to 3.2 core teaching nearly a year ago. One open question was how many students would still have older hardware or embedded Intel chips with lower specs. So the first thing to do in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn't get everything from the beginning, just a GLFW application with GLM and GLEW; GLEW was patched to support the core profile on MacOS X). Turns out: of nearly 100 students, only 4 or 5 (I don't have the numbers at hand) had too-old GPUs or were Mac users who hadn't upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context, not 3.1, not 3.0, not one without a version number while hoping for the latest). Those got an account for our lab.
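
For anyone wondering what that request looks like in code: with GLFW it is only a few hints. A minimal sketch in the style of the GLFW 3 API (hint names differ in older GLFW versions):


#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Ask for exactly a 3.2 core context; on OS X the
       forward-compatible flag is required as well. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

    GLFWwindow *window = glfwCreateWindow(640, 480, "GL 3.2 core", NULL, NULL);
    if (!window) {
        fprintf(stderr, "No 3.2 core context available\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    /* Let the students check what they actually got. */
    printf("GL_VERSION: %s\n", glGetString(GL_VERSION));

    glfwTerminate();
    return 0;
}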

Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even set up ARB_debug_output callbacks where supported. When we discussed VBOs, I let them code all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.
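
The core of that support code is small. A minimal sketch, assuming GLEW (or similar) already provides the GL declarations and the file contents have been read elsewhere:


#include <GL/glew.h>
#include <stdio.h>

/* Compile one shader object; dump the info log on failure.
   'source' is the NUL-terminated contents of the shader file. */
GLuint compileShader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[4096];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile error:\n%s\n", log);
    }
    return shader;
}

/* Link a vertex and a fragment shader into a program object. */
GLuint linkProgram(GLuint vs, GLuint fs)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) {
        char log[4096];
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "program link error:\n%s\n", log);
    }
    return prog;
}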

Yes, you have to hide a lot of complexity and focus on only one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good, and it's even more obvious to the students why we teach the basics and the math behind graphics: you have to implement it on your own in the shaders (there is no magic gluLookAt() or glVertex() that lets you get by without understanding the modelview matrix etc.).

On a personal note, I'm strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics work anymore. If I taught my students outdated stuff with no relevance to the real world, stuff that doesn't even support them in understanding the relevant basics, I would fail as a teacher IMHO.


tl;dr:
* there is no reason to stick to the immediate mode for teaching
* teaching core and shaders from the beginning works
* update your curriculum

Ragnemalm
06-03-2012, 10:32 PM
I moved to 3.2 core teaching nearly a year ago. One open question was how many students would still have older hardware or embedded Intel chips with lower specs. So the first thing to do in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn't get everything from the beginning, just a GLFW application with GLM and GLEW; GLEW was patched to support the core profile on MacOS X). Turns out: of nearly 100 students, only 4 or 5 (I don't have the numbers at hand) had too-old GPUs or were Mac users who hadn't upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context, not 3.1, not 3.0, not one without a version number while hoping for the latest). Those got an account for our lab.

OS X is indeed an important reason to use 3.2 and no other version. There were Mac users with older versions who had to upgrade (myself included), but there were also some free Unix systems that didn't support 3.2. But as long as we have a lab with usable computers, supporting the students' own computers is a bit of a luxury.

We made a stripped-down GLUT clone for OS X, although FreeGLUT should work as well. Apple's built-in GLUT is, however, not updated.


Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even set up ARB_debug_output callbacks where supported. When we discussed VBOs, I let them code all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.

I didn't have them write shader loading code, but they did work directly with VBOs and VAOs, as well as writing their own "look-at" function.
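
The "look-at" exercise itself is small once the vector algebra is in place. A rough sketch of what the students end up deriving (plain C, column-major as OpenGL expects; the vec3 helpers are just for the example):


#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b)   { vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
static float dot(vec3 a, vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 cross(vec3 a, vec3 b) {
    vec3 r = {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    return r;
}
static vec3 normalize(vec3 a) {
    float k = 1.0f / sqrtf(dot(a, a));
    vec3 r = {a.x*k, a.y*k, a.z*k};
    return r;
}

/* Build a view matrix equivalent to gluLookAt, stored column-major in m. */
void lookAt(vec3 eye, vec3 center, vec3 up, float m[16])
{
    vec3 f = normalize(sub(center, eye));  /* forward  */
    vec3 s = normalize(cross(f, up));      /* side     */
    vec3 u = cross(s, f);                  /* real up  */

    /* Rotation rows are the camera basis; translation moves the eye to the origin. */
    m[0] =  s.x; m[4] =  s.y; m[8]  =  s.z; m[12] = -dot(s, eye);
    m[1] =  u.x; m[5] =  u.y; m[9]  =  u.z; m[13] = -dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  dot(f, eye);
    m[3] = 0.0f; m[7] = 0.0f; m[11] = 0.0f; m[15] = 1.0f;
}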


Yes, you have to hide a lot of complexity and focus on only one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good, and it's even more obvious to the students why we teach the basics and the math behind graphics: you have to implement it on your own in the shaders (there is no magic gluLookAt() or glVertex() that lets you get by without understanding the modelview matrix etc.).

On a personal note, I'm strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics work anymore. If I taught my students outdated stuff with no relevance to the real world, stuff that doesn't even support them in understanding the relevant basics, I would fail as a teacher IMHO.

tl;dr:
* there is no reason to stick to the immediate mode for teaching
* teaching core and shaders from the beginning works
* update your curriculum
Up to OpenGL 2.1, starting with immediate mode was kind of customary, not least since the Red Book, Angel, and Hearn & Baker all did it. So I kind of followed their example, but only for the very first lab, and then we moved to glDrawElements. But seeing the effect of skipping it, I believe that habit was indeed not a good one. The students got a comfortable backdoor, one I often used myself for pure prototyping, but some never realized that the subsequent move to arrays was important, got stuck with immediate mode, and of course ran into performance problems. Now that they get VAOs as their primary tool, nobody has that problem. They get a tougher start, but a better result.
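
For comparison, the VAO route for the very first triangle looks roughly like this (a sketch; assumes a 3.2 core context and a shader that reads position from attribute 0):


GLfloat positions[] = { -0.5f, -0.5f, 0.0f,
                         0.5f, -0.5f, 0.0f,
                         0.0f,  0.5f, 0.0f };
GLuint indices[] = { 0, 1, 2 };
GLuint vao, vbo, ibo;

/* One-time setup: the VAO records the vertex format and buffers. */
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);  /* attribute 0 = position */
glEnableVertexAttribArray(0);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* Each frame: */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);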

But still, I am uncomfortable seeing an API that is so much less intuitive. The old one was more self-explanatory. The move was necessary for performance reasons, but I ask myself whether we couldn't have packaged it differently. We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?

ZbuffeR
06-03-2012, 11:44 PM
We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?
I find it was elegant a long time ago, before shaders.
But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.

Ragnemalm
06-04-2012, 12:38 AM
I find it was elegant a long time ago, before shaders.
But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.
There is of course one problem to take into account: OpenGL fits any language. I think that is a great feature that makes it last; you can bring OpenGL knowledge across language and OS borders. That means no class libraries, no locking it into one specific object model or other language-specific constructs. But I wouldn't rule out the possibility of a smoother interface that is still so close to OpenGL that it doesn't hide it but just helps it.

Do we have anyone here teaching OpenGL who hasn't taken the 3.2-or-similar jump, or who did it and had other experiences?

RickA
06-04-2012, 12:50 AM
I'm about to start teaching 3D graphics to some co-workers soon (post-university level), and I've decided to go with WebGL (which is essentially OpenGL ES 2.0). That works cross-platform, and the lack of a compile step speeds up iteration, which I hope will allow more experimentation and a greater understanding of the material. I thought it would make more sense to start with shaders first and then move backward to the vertex submission pipeline, but I get the impression that you guys all start with vertex submission.

Ragnemalm
06-04-2012, 02:36 AM
I'm about to start teaching 3D graphics to some co-workers soon (post-university level), and I've decided to go with WebGL (which is essentially OpenGL ES 2.0). That works cross-platform, and the lack of a compile step speeds up iteration, which I hope will allow more experimentation and a greater understanding of the material. I thought it would make more sense to start with shaders first and then move backward to the vertex submission pipeline, but I get the impression that you guys all start with vertex submission.
I considered starting with shaders, and that is still an open question to me. If I start with shaders I must provide a bigger main program, but if they are not supposed to look at that for the first lab, that doesn't have to be a problem. The advantage would be that the students can get a firm grip of the shader concept before dealing with the details of the main program.

WebGL is much hyped, and of course web-browser-based graphics has its value, but using a scripting language will bog down the CPU side. Are we using inefficient scripting languages just because we have too-slow compilers? Too much energy is wasted on scripting engines already. I have heard about JIT compilers for JavaScript that might help, but even JIT-compiled Java is pretty inefficient, so I don't expect much better from JavaScript. WebGL is included in my course at the end, as one of the "where to go from here" subjects, but I hesitate to go there for the whole course. Lower performance and no free choice of programming language, but you get it in the web browser. Not obviously the right path.

RickA
06-04-2012, 05:19 AM
I'm glad to hear I'm not the only one who has thought of starting with shaders.


WebGL is much hyped, and of course web-browser-based graphics has its value, but using a scripting language will bog down the CPU side. Are we using inefficient scripting languages just because we have too-slow compilers? Too much energy is wasted on scripting engines already. I have heard about JIT compilers for JavaScript that might help, but even JIT-compiled Java is pretty inefficient, so I don't expect much better from JavaScript. WebGL is included in my course at the end, as one of the "where to go from here" subjects, but I hesitate to go there for the whole course. Lower performance and no free choice of programming language, but you get it in the web browser. Not obviously the right path.
I actually agree with you on this, if you have control over the entire curriculum. In my case this is just an after hours course for interested co-workers, and I'm not positive they all know C++ (or any other language). They'll likely not all know Javascript either, but that's extremely easy to pick up. I'll be sure to give them pointers on where to go for native OpenGL info though. As for performance, static geometry is not an issue at all of course, since that never leaves the GPU once it's there.

Alfonse Reinheart
06-04-2012, 05:41 AM
We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?

Really? "Elegant and intuitive"? Are you sure you want to make that claim?

Very well; let's test that theory. Here's a simple fragment shader that takes a texture color, uses the alpha to blend it with a uniform color (for, say, team colors in an RTS game), and then multiplies that with vertex-calculated lighting:



in vec4 lightFactor;
in vec2 texCoord;
out vec4 outputColor;

uniform sampler2D diffuseSampler;
uniform vec4 baseColor;

void main()
{
    vec4 maskingDiffuse = texture(diffuseSampler, texCoord);
    vec4 diffuseColor = mix(maskingDiffuse, baseColor, maskingDiffuse.a);
    outputColor = diffuseColor * lightFactor;
}


And here's the same thing, only done with fixed-function GL 2.1 stuff:



glActiveTexture(GL_TEXTURE0);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_ALPHA, GL_SRC_ALPHA);

glActiveTexture(GL_TEXTURE1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);


Marvel at the more "elegant and intuitive API". :doh:

Without looking it up in the documentation, can you even tell that the env_combine version does the same thing as the GLSL version? It's really easy to forget what you have to do in non-shader GL when you want to do anything out of the ordinary. That's why people say the older API was more "elegant": because they never tried to do anything with it besides blast a vertex-lit triangle with a single texture onto the screen.

The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it's easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.

This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

When it comes time for them to learn what's actually going on, they have no idea how to do that. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it's great once you get the hang of it. And they have active agency in their learning and a real understanding of what's going on.

tonyo_au
06-04-2012, 06:02 AM
But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.

I have to agree.

I think it is important, when talking about teaching, to make it clear whether your students are undergrads or people who need to learn OpenGL for professional use. For my 2 bits: if you are talking about undergrads, I feel a watered-down version that focuses on the fundamentals of graphics is more important. The actual OpenGL language is going to change radically over the next 10 years as hardware changes, so teaching them the nitty-gritty is mostly wasted. I find it very frustrating when I hire a graduate who knows, say, OpenGL, but who is completely lost when you try to move them to DirectX because they lack a lot of the fundamentals. Knowing the basic interface between the GPU and CPU is good, but I don't think it matters whether they know all the calls to compile a shader.

Ludde
06-04-2012, 07:38 AM
The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it's easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.

This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

When it comes time for them to learn what's actually going on, they have no idea how to do that. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it's great once you get the hang of it. And they have active agency in their learning and a real understanding of what's going on.

Completely agree

menzel
06-04-2012, 10:45 AM
WebGL from the OpenGL side is fine, as it is basically OpenGL ES, which is more like core than compatibility. Whether driving the GPU from a scripting language is such a good idea is debatable, but as a way of learning graphics programming it might be fine.
I start (after some basic math) by describing the rendering pipeline without the fragment shaders: just vertex processing, clipping, rasterization and vertex transformations (rotation, translation by matrix multiplication). Then implementing transformations and projections using GLM. Then moving some of this code to the vertex shader and loading shaders. In parallel, the theory part can cover lighting, textures and aliasing (and thus fill the gaps in the rendering pipeline). Then we try out lighting and later texturing via fragment shaders. In between we have to look at VBOs/VAOs in more detail, as we need user-defined vertex attributes for the lighting (normals) and texture coordinates.
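
For concreteness, the GLM stage can literally be the same math that later moves into the vertex shader. A sketch (C++; 'program' and 'angle' are assumed to exist already, and older GLM versions expect degrees in perspective()):


#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the model-view-projection matrix on the CPU first ...
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                              glm::vec3(0.0f, 0.0f, 0.0f),   // center
                              glm::vec3(0.0f, 1.0f, 0.0f));  // up
glm::mat4 model = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 mvp   = projection * view * model;

// ... then hand the very same matrix to the (later student-written) vertex shader:
glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE,
                   glm::value_ptr(mvp));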

MacOS X is in fact a reason to stick with 3.2, but on the other hand anything above 3.3 is not so common right now, and even working with geometry shaders might be too much for an introductory course (the theory part, however, discusses what GS and tessellation shaders are for, but just on the level of what you normally do with them and where they fit into the pipeline).

Aleksandar
06-04-2012, 11:52 AM
Teaching Computer Graphics Basics using OpenGL and teaching high-end programming skills using OpenGL are two different things.

For example, I have classes where I have to teach undergraduate students some basic graphics stuff. They have a theoretical part covering many aspects of computer graphics and a practical part divided into 2D and 3D graphics API usage. OpenGL is, of course, related to the second part (the 3D graphics API). When I started, it was a question whether to use OpenGL, or D3D, or both. I chose OpenGL and never regretted it. During the "OpenGL part" of the course, students have to gain some basic skills including: setting up OpenGL in a Windows application, basic modeling and viewing (combining transformations etc.), lighting and texturing. For all that we've got just 4 weeks (8 hours of teaching and 3 labs). At the end they have a lecture about shaders, but they don't need it for the exam. If I tried to teach them a modern approach, I'd spend all the time just on the setup. Also, I'm not a fan of the libraries (extension handling, math, etc.). Implementing all of that is a huge amount of work for just one month.

On the other hand, I'm planning a totally new course that will guide students through the whole 3D pipeline. It would be completely based on GLSL. Here I have a question for the community: should I base it on the separate shader objects (SSO) architecture, and should I use the DSA approach?
It must follow the GLSL specification, but be as clean and straightforward as possible.

Also, it would be nice to see some other one-semester CG curricula using OpenGL.

kRogue
06-04-2012, 01:31 PM
I am going to share my experiences with teaching 3D graphics.

On one hand, I prefer to do the math first (projection and projection matrices, perspective-correct interpolation, projective (aka clip) coordinates, normalized device coordinates, orientation, texturing: filters and mipmapping). But that is a hard ride for most students. Linear algebra skills are often quite lacking, and there are so many concepts that need to be introduced at the same time. All of the above is needed to explain the anatomy of a shader that just draws a triangle on the screen. On top of that, one needs to explain the differences between attributes, varyings and uniforms (or ins, outs and uniforms for GL3+), and, oh, all the awful setup code to make a shader, etc.

On the other hand, starting with the fixed-function pipeline allows one to introduce each of these concepts a touch more gently and naturally. I am NOT talking about doing multi-texturing or lighting with the fixed-function pipeline, just the starting material to get a textured triangle on the screen. Once each of the concepts (projection, clip coordinates, normalized coordinates, texturing and orientation) is down pat, one can move to a simple shader, and then on to lighting, effects, etc. [and avoid the clumsy fixed-function multi-texturing interface]. Additionally, starting from immediate mode and moving to glDrawElements (and friends) gives a natural, easy way to explain the differences between ins, outs and uniforms. As a side thought, one can also use the immediate-mode model to better explain EmitVertex() in geometry shaders.



Should I base it on the separate shader objects (SSO) architecture, and should I use the DSA approach?


I have another bit of advice: I would not start with SSO, but introduce it later, because again there is a risk of too many concepts too soon. As for DSA, the ugly part is that EXT_direct_state_access is just an extension, with all the hairy warts and peculiarities of the fixed-function past. Admittedly, the whole "bind-to-edit" thing just plain sucks, but until editing without binding is core, I would not teach DSA.

Additionally, although I have much more fun with desktop GL, a great deal of the commercial action is GLES2, which in my opinion just sucks to use at times; neither SSO (though there is an extension for it on iOS) nor DSA is available on almost all GLES2 implementations. As a side note, GLES2 has all sorts of "interesting issues" that are almost always unpleasant when encountered as a surprise...

If one is not worried about the embedded interfaces and can assume GL3+ (or rather a platform that has GL2.1 and GL3+ core), then a next section of the class would cover (if not already) buffer objects, uniform buffer objects, texture buffer objects and transform feedback... and geometry shaders...

But if doing embedded, a section on tile-based renderers is a must when getting into render-to-texture.

I would strongly advise providing a header file with macro magic to check for GL errors after each draw call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, though without the macro magic that won't give a line number or file].
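
Something along these lines is what I mean (a sketch; GL_CHECK is just my name for it, and real versions often loop until glGetError() runs dry):


#include <stdio.h>

/* Wrap GL calls in debug builds; prints the offending call
   with file and line. Compiles down to the bare call otherwise. */
#ifdef DEBUG
#define GL_CHECK(call)                                             \
    do {                                                           \
        call;                                                      \
        GLenum err_ = glGetError();                                \
        if (err_ != GL_NO_ERROR)                                   \
            fprintf(stderr, "GL error 0x%04x after %s (%s:%d)\n",  \
                    err_, #call, __FILE__, __LINE__);              \
    } while (0)
#else
#define GL_CHECK(call) call
#endif

/* Usage: */
GL_CHECK(glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_INT, 0));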

Lastly, and sadly, a good section on techniques for pinpointing whether an error is the coder's fault or the driver's, an ugly fact of life that is even more horribly true in embedded [for desktop it is much rarer, but on embedded, every day is a fight].

If you get to teach GL4+ I am so envious... almost all my GL teaching takes the form of 1-week training for commercial clients, and it is almost always GLES2, where most of the time linear algebra skills are lacking.

Aleksandar
06-05-2012, 02:09 PM
I have another bit of advice: I would not start with SSO, but introduce it later, because again there is a risk of too many concepts too soon.

If I start with separate shader objects they won't be aware that the concept of a monolithic program exists. Personally, I still don't use separate shader objects, but if it is something widely used (or will be), maybe it is better to introduce the concept as soon as possible.


I would strongly advise providing a header file with macro magic to check for GL errors after each draw call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, though without the macro magic that won't give a line number or file].

I'm using debug_output along with the VS debugger. A call stack pinpoints the error precisely in most cases.

Thanks for sharing your experience! The course I'm talking about is still under consideration. There should be a part about GPU architecture preceding the 3D pipeline execution path, and also a part about debugging and profiling at the end of the course. When I get to realizing it, I'll consult you again. ;)

Ragnemalm
06-06-2012, 01:47 PM
Really? "Elegant and intuitive"? Are you sure you want to make that claim?

Partially. The basic API is elegant and intuitive, but certainly not all the additions. As in many other APIs, additions are all too often tacked on without much care for the design. Your example of texture combiners is one where OpenGL really went wrong in the fixed pipeline. It was hairy, and too little too late. Shaders made it obsolete overnight. I am happy that I never taught combiners in my courses but turned to shaders long ago for any blending problems beyond the basic ones. Another example where I am not all that happy is multitexturing, with its somewhat hairy multiple texture coordinates. But the current API can be just as hairy in places. Shaders feel unnecessarily complex, especially the multiple ways of specifying shader variables from the main program.

We can do better, the question is how.

Kopelrativ
06-06-2012, 11:12 PM
My favourite complaint about the API is its heavy dependence on global state. Much of the effort in program design elsewhere has gone into localizing state, reducing risks and side effects.

And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.

Ragnemalm
06-09-2012, 12:29 AM
My favourite complaint about the API is its heavy dependence on global state. Much of the effort in program design elsewhere has gone into localizing state, reducing risks and side effects.

And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.
And this is one of the good things in 3.2: fewer hidden states to keep track of. No current matrices, light sources, or texture coordinates that you set and forget. Any such carelessness is more visible today.

thokra
06-09-2012, 06:42 AM
Any such carelessness is more visible today.

I disagree. With bind-to-modify remaining in important areas, and especially when prototyping something quickly, such carelessness is still easy to miss. And it's not a matter of visibility, it's a matter of non-existence: something that existed earlier just isn't there anymore. That doesn't mean that in the areas that remain, state is any more visible than in the areas that were removed. In larger systems, you still don't have any clue which buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, or whether the current active texture unit has TEXTURE_2D and TEXTURE_CUBE_MAP set simultaneously, unless you have some strategy for tracking the state yourself, some strategy for binding to targets without collisions, or some strategy for unbinding everything right after usage. How many drawbuffers are active on the current FBO again? Is it a readbuffer? Oh, damn, that blend function isn't correct in this place. You can always use glGet*() to retrieve current state, but nobody wants that in a real-time application. So there's still plenty of state left that can lead to false results all over the place.

Alfonse Reinheart
06-09-2012, 07:46 AM
Those problems are all easily resolved by simply binding to modify and unbinding after the modification. Then you don't have to care what's bound to GL_ELEMENT_ARRAY_BUFFER, because you know it's nothing.
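
In code, something like this sketch ('vbo' and 'newData' are placeholders):


/* Bind, edit, unbind: afterwards nothing is left bound, so no later
   call can pick up this buffer by accident. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(newData), newData);
glBindBuffer(GL_ARRAY_BUFFER, 0);

/* Caveat: the GL_ELEMENT_ARRAY_BUFFER binding is recorded in the
   current VAO, so unbind the VAO first when editing index buffers. */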

Keep your changes local and you won't have a problem.


How many drawbuffers are active on the current FBO again? Is it a readbuffer?

I don't see how DSA helps with that. And what is a "readbuffer"?

thokra
06-09-2012, 08:27 AM
Keep your changes local and you won't have a problem

[..] or some strategy for unbinding everything right after usage.

I already said so and personally I haven't done otherwise for a long time.


I don't see how DSA helps with that.

I never said it did. But you're right, I should have made a clearer separation between setting state (i.e. bind-to-modify) and determining which state has already been set, e.g. the current number of draw buffers.


And what is a "readbuffer"?

For me, that's an FBO bound to the GL_READ_FRAMEBUFFER target. Yeah, I know that's ambiguous and usually refers to one or more attachments. But personally I like to think of an FBO that permits read operations as a read-buffer and one that permits draw operations as a draw-buffer.

My point is that OpenGL hasn't become more transparent just because some state has been thrown out. The remaining state is still as opaque as it was when matrix and attribute stacks and so forth were still present.

Alfonse Reinheart
06-09-2012, 09:34 AM
The problem being talked about was specifically old state that someone set for a previous object and that doesn't apply to the next object to be rendered. The point is that the vast majority of the state needed to render something is bound into objects now. It doesn't matter whether it's opaque or transparent; you just bind and render. You have an object, which is to be rendered with a particular VAO, using a particular shader, with a particular set of textures and uniform buffers, and rendered to a particular framebuffer.

As long as you have set these up correctly, the global state that can trip up your rendering is much smaller than before. The most you have to look out for is your blending state. That's a substantial improvement in the locality of the data that can break your rendering. That is, if rendering isn't working for some object, then you know it's a problem with that VAO, that shader, those textures, those uniform buffers, or that framebuffer. Or some of the still-global state.

The list of places that can be broken is fairly small.

Ragnemalm
06-11-2012, 10:39 PM
I disagree. With bind-to-modify remaining in important areas, and especially when prototyping something quickly, such carelessness is still easy to miss. And it's not a matter of visibility, it's a matter of non-existence: something that existed earlier just isn't there anymore. That doesn't mean that in the areas that remain, state is any more visible than in the areas that were removed. In larger systems, you still don't have any clue which buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, or whether the current active texture unit has TEXTURE_2D and TEXTURE_CUBE_MAP set simultaneously, unless you have some strategy for tracking the state yourself, some strategy for binding to targets without collisions, or some strategy for unbinding everything right after usage. How many drawbuffers are active on the current FBO again? Is it a readbuffer? Oh, damn, that blend function isn't correct in this place. You can always use glGet*() to retrieve current state, but nobody wants that in a real-time application. So there's still plenty of state left that can lead to false results all over the place.
Are we still talking about teaching? In a teaching situation, when the students are just getting started with OpenGL, I think there is a certain improvement. They don't need to keep track of much more than the current textures, shaders and arrays.

FBOs are, IMHO, a later step. Yes, FBOs are messy. When I teach FBOs, we don't bother with all possible configurations; they get a working one from me and can go from there, and for most, that configuration is all they need.
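
The working configuration they get is essentially one color texture plus a depth renderbuffer. A sketch (3.2 core; the size and formats are just examples):


GLuint fbo, colorTex, depthRb;

/* Color attachment: a texture we can sample from later. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Depth attachment: a renderbuffer, since we never sample it. */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the window */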

Ragnemalm
06-15-2012, 11:24 PM
If I start with separate shader objects they won't be aware that the concept of a monolithic program exists. Personally, I still don't use separate shader objects, but if it is something widely used (or will be), maybe it is better to introduce the concept as soon as possible.

I'd like to continue here a bit. The shaders-first approach is one that I had in mind, but in the end I went for the main program, minimal pass-through shaders, and working with geometry first. But I am not sure I did the right thing. The simplest thing you can do is to start by writing vertex shaders, and then fragment shaders, before you even look at the main program.
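
To show the level I mean by "minimal": the whole pass-through pair fits in a couple of strings (a sketch; the names are my own conventions):


/* The attribute location for inPosition is set with
   glBindAttribLocation() before linking (GLSL 1.50 has no
   layout(location) qualifier for attributes). */
const char *passVertexSrc =
    "#version 150\n"
    "in vec3 inPosition;\n"
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(inPosition, 1.0);\n"
    "}\n";

const char *passFragmentSrc =
    "#version 150\n"
    "out vec4 outColor;\n"
    "void main()\n"
    "{\n"
    "    outColor = vec4(1.0);\n"
    "}\n";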

We tried the shaders-first approach in some small projects separate from my main course and it worked pretty well, so maybe I should have taken that route anyway. Any other experiences or opinions? I can consider lectures about transformations, plus shading and basic lighting, and then a lab on shaders only. Then I can move to geometry and object representation, and have the next lab at the full-program level. Texturing could be done without looking at the main program, but the more input data you have from the main program, the more you want to look at that data.

Aleksandar
06-17-2012, 10:03 AM
I would rather start with GL context setup and accessing extensions; then shader setup; then buffer objects, vertex attributes and uniform setup, while having default vertex and fragment shaders. I'll try to make a course that guides students through the pipeline, step by step. At the beginning they don't have to know anything about shader coding. They'll have a default VS and FS. After the attribute/uniform setup, they'll start to code the VS. The FS stays a black box until the last stage. Of course, after mastering the VS they'll know what is what in the FS, but they'll have to pass through the TS, GS and TF before reaching the FS. After the FS, the FBO will be introduced. It seems quite reasonable to follow the flow of data and introduce operations as they emerge in the pipeline.

RickA
06-18-2012, 12:11 AM
I would rather start with GL context setup and accessing extensions; then shader setup; then buffer objects, vertex attributes and uniform setup, while having default vertex and fragment shaders. I'll try to make a course that guides students through the pipeline, step by step. At the beginning they don't have to know anything about shader coding. They'll have a default VS and FS. After the attribute/uniform setup, they'll start to code the VS. The FS stays a black box until the last stage. Of course, after mastering the VS they'll know what is what in the FS, but they'll have to pass through the TS, GS and TF before reaching the FS. After the FS, the FBO will be introduced. It seems quite reasonable to follow the flow of data and introduce operations as they emerge in the pipeline.

That's interesting; I'm thinking of taking pretty much the exact opposite direction in the course for my co-workers: FS -> VS -> uniform setup -> VBO + vertex attributes -> FBO.

I'll see how it goes.

Aleksandar
06-19-2012, 01:44 PM
Please share your experience with us. :)

I still think an educational course should follow the pipeline stream:
GL setup -> buffers -> drawing functions -> VS -> (TF) -> TS -> GS -> (TF) -> FS -> FBO
Transform feedback (TF) may be explained just after the VS, or after the GS. TS is complex enough per se, so I wouldn't make things harder at that point.

menzel
06-20-2012, 01:25 AM
Following the pipeline stream is in fact a very logical way to understand everything. But keep in mind that this way it takes a very long time to see 'colorful' results. From a motivational point of view, getting the audience (= students) to create nice graphics on their own as quickly as possible is key to keeping them motivated. If you start at the other end with just the FS, you can say "ignore where the fullscreen quad and the textures come from; today we make nice effects and pretty images on a per-fragment level" ;-)
When teaching graphics with OpenGL, rather than OpenGL with the needed background, I would start with transformations and ignore the GL-specific setup steps at first, quickly move down the rest of the pipeline, and then go back to look at the GL-specific details of buffers, attribute locations etc. (kind of a middle course between the two extremes above).

Aleksandar
06-21-2012, 01:45 AM
From a motivational point of view, getting the audience (= students) to create nice graphics on their own as quickly as possible is key to keeping them motivated.
You are right! I need to do something more to keep them motivated. ;)
The FS should exist all the time. It would be very simple, but it still has to support coloring and texturing. Lighting and texturing would be per-vertex until the FS becomes an active topic.


When teaching graphics with OpenGL, rather than OpenGL with the needed background, I would start with transformations and ignore the GL-specific setup steps at first, quickly move down the rest of the pipeline, and then go back to look at the GL-specific details of buffers, attribute locations etc. (kind of a middle course between the two extremes above).
This would be a course in master's studies. The students already have enough knowledge about CG, transformations, lighting, texturing and legacy OpenGL.

Aleksandar
06-22-2012, 04:18 AM
Yesterday I read "Teaching a Shader-Based Introduction to Computer Graphics" by Ed Angel and Dave Shreiner, published in IEEE Computer Graphics and Applications, Vol. 31, No. 2 (March/April 2011), and I'm really disappointed.
There is just a brief introduction to GL 2.0/GLSL 1.1 (although they mention GL 3.1/4.1) and a claim that it is feasible to make a proper introductory course using a shader-based approach and OpenGL.
I had expected a more scientific approach to evaluating the program they presented. But there is no evaluation, or even a statistic on how students react to the new approach. :(

Janika
06-22-2012, 08:22 AM
OpenGL is not for TEACHING, it's for LEARNING, so never TEACH OpenGL in computer graphics courses. Computer graphics should be introduced to students as an abstracted mathematical model instead. THEN you give students assignments to do on their own using OpenGL or whatever API. Implementing things in software is the way to go for better learning.

menzel
06-22-2012, 09:11 AM
OpenGL is not for TEACHING, it's for LEARNING, so never TEACH OpenGL in computer graphics courses. Computer graphics should be introduced to students as an abstracted mathematical model instead. THEN you give students assignments to do on their own using OpenGL or whatever API. Implementing things in software is the way to go for better learning.

Well, first off, this thread is about teaching OpenGL and not about teaching computer graphics in general, so your remark is quite off topic.

Second: even when discussing teaching 3D graphics, I think we can agree that doing some practical work, implementing 3D algorithms etc. in any 3D API, helps in understanding the theoretical background. Sadly, in my experience of teaching, you have to force some students to do some practical work on their own, otherwise they will fail the tests (and will not have learned anything). This means mandatory homework, even for programming assignments. To be able to correct these in finite time for lots of students (and to be able to give the best support), the class has to agree on one API to work with. As there is only one modern API available that works on all major operating systems, we chose OpenGL.
On a side note: teaching the students practical stuff that lets them produce 'nice colorful images' is a very good motivational tool; a purely theoretical computer graphics course wouldn't motivate nearly as many people to learn about this topic.

To conclude: I believe everyone discussing here wants to discuss teaching OpenGL, and everyone has their own motivation for doing so, so a meta-discussion about the usefulness of it will not help here (you might want to open a new thread if you want to discuss whether teaching OpenGL is a good idea in general).

mhagain
06-22-2012, 10:48 AM
OpenGL is not for TEACHING, it's for LEARNING, so never TEACH OpenGL in computer graphics courses. Computer graphics should be introduced to students as an abstracted mathematical model instead. THEN you give students assignments to do on their own using OpenGL or whatever API. Implementing things in software is the way to go for better learning.

I have to disagree with everything here. OpenGL is for neither teaching nor learning; it's for getting stuff on your screen. The abstracted mathematical model has its place in linear algebra and other mathematics courses; a graphics course is for, among other things, showing students one practical, and quite cool, application of the theoretical stuff. Actually writing graphics code and seeing the results creates a positive feedback loop for students, both in terms of the graphics course and in terms of seeing the immediate and practical use for the more abstract material they've learned elsewhere.

Janika
06-22-2012, 12:02 PM
Maybe... I'm not sure how teachers structure their computer graphics courses these days... but don't get me wrong if I say that a course called "OpenGL Programming" or similar should have no place in academia.

Brandon J. Van Every
08-27-2012, 09:24 AM
Back in the day at Cornell U., computer classes were split into a 3-credit theory course and a 2-credit practicum/lab course. Taking the practicum was optional. Although theory courses sometimes had non-trivial projects in them, practicum courses were, by design, focused on huge amounts of programming. Personally I feel this removes any pedagogical objections about how and where to teach OpenGL in academia. I think the idea of sending B.S. grads out into industry without any practical skills is completely silly, and grads who can't get jobs are bad for a department's reputation and funding. To wit, CS majors were required to take at least 3 practicum courses IIRC. I did that and I wasn't even a CS major; I got my B.A. in Sociocultural Anthropology. :-) In my experience the practicum courses were way harder than the theory courses, although this may have partly been due to being a non-major. Nevertheless I had enough theory and practicum courses for a major; I was just a few math classes and GPA points shy of their requirements. So I don't think I'm totally off base in saying that those practicum courses were hard.

It has been quipped that academics think everyone should have exactly the same background as they themselves had. So, get your pith helmets out!

Carmine
08-27-2012, 11:00 AM
You are right! I need to do something more to keep them motivated. ;)
The FS should exist all the time. It would be very simple, but it still has to support coloring and texturing. Lighting and texturing would be per-vertex until the FS becomes an active topic.

This would be a course in master's studies. The students already have enough knowledge about CG, transformations, lighting, texturing and legacy OpenGL.
I agree with you here. I'll be teaching co-workers a class in OpenGL in a few weeks. The students will be engineers who want to get things up on the screen quickly. It will be a hands-on course with simple in-class assignments and homework requiring programming. I will be teaching fixed-pipeline GL. Shaders would be far too complex for this type of audience.

My feeling about shaders is that they are for people who want to be professional OpenGL developers. They are not for the rest of the world, who just want to learn how to do simple 3D graphics. I get the feeling from reading the forums that GLSL has actually discouraged novices from trying to learn GL programming.

Brandon J. Van Every
08-28-2012, 05:29 AM
I get the feeling from reading the forums that GLSL has actually discouraged novices from trying to learn GL programming.

I dunno; when I got started, PCs didn't have any 3D APIs. You had to code your own software renderers. How is shader programming greatly different from that? Sure it's hairy, but building 3D pipelines has always been hairy.

Alfonse Reinheart
08-28-2012, 08:16 AM
The students will be engineers who want to get things up on the screen quickly.

Then why are you teaching them OpenGL? Why would people who just want to draw lines and stuff be using a low-level rendering API? They shouldn't be using OpenGL directly, ever.

OpenGL is for graphics programmers. Engineers should be using tools graphics programmers make for them.

Carmine
08-28-2012, 10:14 AM
Then why are you teaching them OpenGL? Why would people who just want to draw lines and stuff be using a low-level rendering API? They shouldn't be using OpenGL directly, ever.

For the most part, you are right. 99% of the 'graphics' done by engineers can be, and is, done using Excel, Mathematica, or plotting software with some 3D capability. But there is a limited demand for custom 3D simulation development to handle unique situations. At my company of ~3,000, there are 5 or 6 engineers who spend some time (not full time) developing 3D OpenGL applications. These apps don't have to be as visually sophisticated as a computer game. Fixed-pipeline GL addresses our needs nicely.


OpenGL is for graphics programmers.

True for GLSL. But I don't think the people who developed the original OpenGL thought this way. It seems like they tried to design a library that anyone with some technical bent could pick up if they were interested. I think they succeeded.


Engineers should be using tools graphics programmers make for them.

Engineers can't be pigeonholed so easily. Some just go to meetings, never doing any technical work. Some do technical work running in-house or commercial software. Others spend most of their time developing software. Slowly but surely, 3D engineering visualization is becoming an expected ingredient in engineering analyses. I'm not just talking about CAD. I'm talking about 3D representations of complex scenes with many moving objects and accurate lighting.

There is a need for either commercial or custom 3D engineering visualization capability. I think it would be a shame if GL went in a direction that discouraged learning and use by anyone except those who intend to be professional, full-time graphics programmers.

Alfonse Reinheart
08-28-2012, 10:48 AM
I'm talking about 3D representations of complex scenes with many moving objects and accurate lighting.

Then those people need to hire graphics professionals. Either directly or by buying middleware written and supported by them.

Oh, and you're not getting anything remotely like "accurate lighting" from fixed-function.

Carmine
08-28-2012, 12:32 PM
Then those people need to hire graphics professionals.

Maybe that will happen eventually.



Either directly or by buying middleware written and supported by them.

There is some 'middleware' floating around. It gets very limited use at my company. It might be a growth area. The problem is that it's hard to anticipate all the types of visualization problems that might pop up in engineering (a very broad field). The more general the 'middleware' is, the steeper the learning curve, until you get to the point where it's almost as easy to learn GL (fixed pipeline).


...you're not getting anything remotely like "accurate lighting" from fixed-function.

Yes, I realize that. Gouraud shading from a single light source is usually good enough for our purposes (i.e. engineering, not architectural). Basic shadow casting is useful too, which we can do using fixed-pipeline GL.

Brandon J. Van Every
08-29-2012, 05:49 AM
The more general the 'middleware' is, the steeper the learning curve, until you get to the point where it's almost as easy to learn GL (fixed pipeline).

You seem to be saying that even the old-school fixed-function pipeline stuff was complicated and intimidating enough that real effort had to be put into dealing with it. So... modern OpenGL just cranks the dial up a few notches. :-) Is the problem all that greatly changed from a "gotta learn something" perspective?

To me the biggest stumbling block, compared to writing software rasterizers from scratch, is nowadays the huge API spanning 8 recent revisions with uncertain extension support, and no ability to just call a C function: you've got to indirect a pointer through the right rendering context, which had jolly well better have the right flags set, with redundant windows created and destroyed in the right order. I never had to deal with that mess when just blasting values into a dumb frame buffer. Once I had done the grungy OS-specific window and framebuffer setup, everything could be expressed in straight C++/ASM code with no API calls. Systemically it's a lot easier to conceptualize, even if you have to do various low-level jobs yourself. We put up with 3D APIs because we want performance, not because they're easier to use. We can hope that someday CPUs will be fast enough that we don't need GPUs anymore, although the concerns of parallelism might turn CPUs into something more horrible anyway, and then we'll be back to square one. We need some strong AI to save us from this guff.

nigels
08-29-2012, 07:07 AM
The actual OpenGL language is going to change radically over the next 10 years as hardware changes, so teaching them the nitty-gritty is mostly wasted.

When I was teaching intro OpenGL at a university I grappled with "modernizing" too. I expect the content has evolved since then, but I hope students are not struggling with arrays and pointers and compatibility vs. core vs. ES; but I guess that's the landscape.

One option I'd like to point out for educators is an effort to emulate non-core functionality (call it "deprecated" if you must!) such as immediate mode and fixed-function lighting. It's an open-source library called Regal, available at GitHub, developed and tested on Windows, OS X, Linux, iOS, Android, and recently, Google Native Client (NaCl).

https://github.com/p3/regal

- Nigel

nigels
08-29-2012, 07:12 AM
OpenGL is for graphics programmers. Engineers should be using tools graphics programmers make for them.

I disagree. As a Computer Science grad who has worked with a LOT of "real" engineers over the years, I've seen OpenGL put to various ingenious uses.
I think it would be a shame if OpenGL itself became inaccessible to non-professionals.

- Nigel