Thread: Teaching OpenGL

  1. #11
    Intern Newbie | Join Date: May 2010 | Posts: 38
    The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it's easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.
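    To make that concrete, here is a rough sketch of what a single "texture times vertex color" multiply costs with ARB_texture_env_combine (the exact setup varies per use case; error handling omitted), against the one line of GLSL that replaces it:

        // Fixed-function combiner setup: every source, operand and channel
        // has to be wired up explicitly, per texture unit.
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB,  GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB,  GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB,  GL_PRIMARY_COLOR_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB,  GL_MODULATE);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB,  GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_ALPHA_ARB,  GL_PRIMARY_COLOR_ARB);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA_ARB, GL_SRC_ALPHA);
        // In a fragment shader the same thing is just:
        //     color = texture(tex, uv) * vertexColor;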

    This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

    When it comes time for them to learn what's actually going on, they have no idea how to do that. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it's great once you get the hang of it. And they have active agency in their learning and a real understanding of what's going on.
    Completely agree

  2. #12
    Member Regular Contributor | Join Date: Jan 2012 | Location: Germany | Posts: 325
    WebGL from the OpenGL side is fine, as it is basically OpenGL ES, which is more like core than compatibility. Whether driving the GPU from a scripting language is such a good idea is debatable, but as a way of learning graphics programming it might be fine.
    I start (after some basic math) by describing the rendering pipeline without the fragment shaders: just vertex processing, clipping, rasterization and vertex transformations (rotation, translation by matrix multiplication). Then we implement transformations and projections using GLM (see the sketch below), then move some of this code to the vertex shader and load shaders. In parallel, the theory part can cover lighting, textures and aliasing (and thus fill the gaps in the rendering pipeline). Then we try out lighting and later texturing via fragment shaders. In between we have to look at VBOs/VAOs in more detail, as we need user-defined vertex attributes for the lighting (normals) and texture coordinates.
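    A minimal sketch of that GLM step, assuming GLM is installed, an already-linked program object named program, and a vertex shader with a mat4 uniform named uMVP (the names are made up for illustration):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // Build model, view and projection matrices on the CPU with GLM.
        glm::mat4 model = glm::rotate(glm::mat4(1.0f),
                                      glm::radians(45.0f),
                                      glm::vec3(0.0f, 1.0f, 0.0f));
        glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),   // eye
                                      glm::vec3(0.0f),               // center
                                      glm::vec3(0.0f, 1.0f, 0.0f));  // up
        glm::mat4 proj  = glm::perspective(glm::radians(60.0f),
                                           4.0f / 3.0f, 0.1f, 100.0f);
        glm::mat4 mvp   = proj * view * model;

        // Later, the same product is handed to the vertex shader as a uniform.
        GLint loc = glGetUniformLocation(program, "uMVP");
        glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp));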

    MacOS X is in fact a reason to stick with 3.2, but on the other hand anything beyond 3.3 is not so common right now, and even working with geometry shaders might be too much for an introductory course (the theory part, however, discusses what GS and tessellation shaders are for, but just at the level of what you normally do with them and where they fit into the pipeline).

  3. #13
    Senior Member OpenGL Pro, Aleksandar | Join Date: Jul 2009 | Posts: 1,079
    Teaching Computer Graphics Basics using OpenGL and teaching high-end programming skills using OpenGL are two different things.

    For example, I have classes where I have to teach undergraduate students some basic graphics stuff. They have a theoretical part covering many aspects of computer graphics, and a practical part divided into 2D and 3D graphics API usage. OpenGL is, of course, related to the second part (the 3D graphics API). When I started, it was a question whether to use OpenGL, or D3D, or both. I chose OpenGL and never regretted it. During the "OpenGL part" of the course, students have to gain some basic skills, including: integrating OpenGL into a Windows application, basic modeling and viewing (combining transformations etc.), lighting and texturing. For all that we've got just 4 weeks (8 hours of teaching and 3 labs). At the end they have a lecture about shaders, but they don't need it for the exam. If I tried to teach them the modern approach, I'd spend all the time just on setup. Also, I'm not a fan of libraries (extension handling, math, etc.), and implementing all of that ourselves is a huge amount of work for just one month.

    On the other hand, I'm planning a totally new course that will guide students through the whole 3D pipeline. It would be completely based on GLSL. Here I have a question for the community: should I base it on the separate shader objects architecture, and should I use the DSA approach?
    It must follow the GLSL specification, but be as clean and straightforward as possible.

    Also, it would be nice to see some other one-semester CG curricula using OpenGL.

  4. #14
    Advanced Member Frequent Contributor | Join Date: Apr 2009 | Posts: 578
    I am going to share my experiences with teaching 3D graphics.

    On one hand, I prefer to do the maths first (projection and projection matrices, perspective-correct interpolation, projective (aka clip) coordinates, normalized device coordinates, orientation, texturing: filters and mipmapping). But that is a hard ride for most students. Linear algebra skills are often quite lacking, and there are so many concepts that need to be introduced at the same time. All of the above is needed just to explain the anatomy of a shader that draws one triangle on the screen, in addition to going through the differences between attributes, varyings and uniforms (or ins, outs and uniforms for GL3+), and, oh, all the awful setup code to make a shader, etc. A minimal example of that anatomy follows.
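    A sketch of the minimal shader pair in question, GL3+ style (the names aPosition, aColor, uMVP and vColor are made up for illustration):

        // Vertex shader: one 'in' per vertex attribute, a uniform for the
        // transform, and an 'out' the rasterizer interpolates (a varying).
        const char* vsSource = R"GLSL(
            #version 150
            in vec3 aPosition;   // per-vertex attribute
            in vec3 aColor;      // per-vertex attribute
            uniform mat4 uMVP;   // one value for the whole draw call
            out vec3 vColor;     // handed on to the fragment shader
            void main() {
                vColor = aColor;
                gl_Position = uMVP * vec4(aPosition, 1.0);
            }
        )GLSL";

        // Fragment shader: receives the interpolated varying, writes one color.
        const char* fsSource = R"GLSL(
            #version 150
            in vec3 vColor;
            out vec4 fragColor;
            void main() {
                fragColor = vec4(vColor, 1.0);
            }
        )GLSL";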

    On the other hand, starting with the fixed-function pipeline allows one to introduce each of these concepts a touch more gently and naturally. I am NOT talking about doing multi-texturing or lighting with the fixed-function pipeline, just the starting material to get a textured triangle on the screen. Once each of the concepts (projection, clip coordinates, normalized coordinates, texturing and orientation) is down pat, one can move to a simple shader, and then move on to lighting, effects, etc. [and avoid the clumsy fixed-function multi-texturing interface]. Additionally, starting from immediate mode and moving to glDrawElements (and friends) gives a natural, easy way to explain the differences between ins, outs and uniforms; the sketch after this paragraph contrasts the two. As a side thought, one can also use the immediate-mode model to better explain EmitVertex() in geometry shaders.
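    A sketch of that progression, assuming a texture is bound and, for the second half, a VBO/IBO and attribute pointers already set up in a VAO named vao:

        // Step 1: immediate mode. Every per-vertex value is stated explicitly,
        // which maps directly onto the later idea of per-vertex 'in' variables.
        glBegin(GL_TRIANGLES);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();

        // Step 2: the same triangle from buffers. The per-vertex data lives in
        // a VBO, and one draw call replaces the per-vertex function calls.
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, nullptr);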

    Quote Originally Posted by Aleksandar View Post
    Should I base it on the separate shader objects architecture, and should I use the DSA approach?
    I have another bit of advice: I would not start with SSO, but introduce it later because, again, there is a risk of too many concepts too soon. As for DSA, the ugly part is that EXT_direct_state_access is just an extension, with all the hairy warts and peculiarities of the fixed-function past. Admittedly, the whole "bind-to-edit" thing just plain sucks, but until edit-without-bind is core, I would not teach DSA. The sketch below shows the difference.
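    For illustration, the same texture-parameter edit in both styles (the EXT entry point comes from EXT_direct_state_access; tex is an existing texture object):

        // Core "bind-to-edit": the edit goes through a global binding point,
        // clobbering whatever was bound to GL_TEXTURE_2D beforehand.
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // EXT_direct_state_access: the object is named directly and no binding
        // is disturbed -- but it is only an extension, not core.
        glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);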

    Additionally, although I have much more fun with desktop GL, a great deal of the commercial action is in GLES2, which in my opinion just sucks to use at times; both SSO (though there is an extension for it on iOS) and DSA are unavailable on almost all GLES2 implementations. As a side note, GLES2 has all sorts of "interesting issues" that are almost always unpleasant when encountered as a surprise...

    If one is not worried about the embedded interfaces and can assume GL3+ (or rather a platform that has both GL2.1 and GL3+ core), then a further section of the class would cover (if not already covered) buffer objects, uniform buffer objects, texture buffer objects, transform feedback and geometry shaders.

    But if doing embedded, a section on tile-based renderers is a must when getting into render-to-texture.

    I would strongly advise providing a header file with macro magic to check for GL errors after each draw call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, but that without macro magic won't give a line number or file]. A sketch of such a header follows.
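    A minimal sketch of such a header (the name GL_CHECK is made up; a GL header is assumed to be included first):

        // gl_check.h -- wrap GL calls so failures report file and line in
        // debug builds; compiles away to the bare call in release builds.
        #include <cstdio>

        #ifdef NDEBUG
          #define GL_CHECK(call) call
        #else
          #define GL_CHECK(call)                                               \
            do {                                                               \
              call;                                                            \
              GLenum glcheck_err;                                              \
              while ((glcheck_err = glGetError()) != GL_NO_ERROR) {            \
                std::fprintf(stderr, "GL error 0x%04X after '%s' at %s:%d\n",  \
                             glcheck_err, #call, __FILE__, __LINE__);          \
              }                                                                \
            } while (0)
        #endif

        // Usage: GL_CHECK(glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0));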

    Lastly, and sadly, a good section on techniques for pinpointing whether an error is the coder's or the driver's, an ugly fact of life that is even more horribly true in embedded [for desktop it is so much rarer, but on embedded, every day is a fight].

    If you get to teach GL4+ I am so envious... almost all my GL teaching takes the form of one-week trainings for commercial clients, almost always on GLES2, and most of the time the attendees' linear algebra skills are lacking.

  5. #15
    Senior Member OpenGL Pro, Aleksandar | Join Date: Jul 2009 | Posts: 1,079
    Quote Originally Posted by kRogue View Post
    I have another bit of advice: I would not start with SSO, but introduce it later because, again, there is a risk of too many concepts too soon.
    If I start with separate shader objects they won't be aware that the concept of a monolithic program exists. Personally, I still don't use separate shader objects, but if it is something widely used (or will be), maybe it is better to introduce the concept as soon as possible. A sketch contrasting the two models follows.
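    For context, a sketch of the two models under ARB_separate_shader_objects (error checking omitted; vs, fs, vsSource and fsSource are assumed to exist):

        // Monolithic model: one program object, all stages linked together,
        // with 'out's and 'in's matched at link time.
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glUseProgram(prog);

        // Separate shader objects: one single-stage program per stage,
        // mixed and matched at draw time through a pipeline object.
        GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER,   1, &vsSource);
        GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSource);
        GLuint pipeline;
        glGenProgramPipelines(1, &pipeline);
        glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vsProg);
        glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
        glBindProgramPipeline(pipeline);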

    Quote Originally Posted by kRogue View Post
    I would strongly advise providing a header file with macro magic to check for GL errors after each draw call in debug builds, so students can more quickly pinpoint which GL call went bad [or use a debug context, but that without macro magic won't give a line number or file].
    I'm using debug_output along with the VS debugger. A call stack pinpoints the error precisely in most cases. A minimal setup is sketched below.
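    For readers who haven't tried it, a minimal sketch of ARB_debug_output setup (the callback body is illustrative; a debug context is assumed):

        // Called by the driver whenever it has something to report.
        void APIENTRY onGLDebug(GLenum source, GLenum type, GLuint id,
                                GLenum severity, GLsizei length,
                                const GLchar* message, const GLvoid* userParam)
        {
            std::fprintf(stderr, "GL debug: %s\n", message);
            // Put a breakpoint here: the debugger's call stack then shows
            // exactly which GL call triggered the message.
        }

        // During initialization, with a debug context current:
        glDebugMessageCallbackARB(onGLDebug, nullptr);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);  // report on the calling thread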

    Thanks for sharing your experience! The course I'm talking about is still under consideration. There should be a part about GPU architecture preceding the 3D pipeline execution path, and also a part about debugging and profiling at the end of the course. When it comes to actually realizing it, I'll consult you again.

  6. #16
    Intern Newbie | Join Date: Jan 2010 | Location: Linköping, Sweden | Posts: 46
    Quote Originally Posted by Alfonse Reinheart View Post
    Really? "Elegant and intuitive"? Are you sure you want to make that claim?
    Partially. The basic API is elegant and intuitive, but certainly not all the additions. As in many other APIs, additions are all too often tacked on without much care for the design. Your example of texture combiners is one place where OpenGL really went wrong in the fixed pipeline. It was hairy, and too little too late; shaders made it obsolete overnight. I am happy that I never taught them in my courses but turned to shaders long ago for any blending problems beyond the basic ones. Another example where I am not all that happy is multitexturing, with the somewhat hairy multiple texture coordinates. But the current API can be just as hairy in places. Shaders feel unnecessarily complex, especially the multiple ways to specify shader variables from the main program.

    We can do better, the question is how.

  7. #17
    Junior Member Regular Contributor, Kopelrativ | Join Date: Apr 2011 | Posts: 214
    My favourite complaint about the API is the heavy dependence on global state. Much of the effort in programming design elsewhere has gone into localizing state, reducing risks and side effects.

    And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.

  8. #18
    Intern Newbie | Join Date: Jan 2010 | Location: Linköping, Sweden | Posts: 46
    Quote Originally Posted by Kopelrativ View Post
    My favourite complaint about the API is the heavy dependence on global state. Much of the effort in programming design elsewhere has gone into localizing state, reducing risks and side effects.

    And yes, I know that the design of the GPU means the state changes have to be exposed to the programmer to allow for efficient applications. But it is a problematic part of the API.
    And this is one of the good things in 3.2: fewer hidden states to keep track of. No current matrices, light sources, or texture coordinates that you set and forget. Any such carelessness is more visible today.

  9. #19
    Senior Member OpenGL Pro | Join Date: Apr 2010 | Location: Germany | Posts: 1,099
    Any such carelessness is more visible today.
    I disagree. With bind-to-modify remaining in important areas, and especially when prototyping something quickly, such carelessness is still easy to miss. And it's not a matter of visibility, it's a matter of non-existence: something that existed earlier just isn't there anymore. That doesn't mean that in the areas that remain, state is any more visible than in the areas that were removed. In larger systems, you still don't have any clue which buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, or whether the currently active texture unit has TEXTURE_2D and TEXTURE_CUBE_MAP set simultaneously, unless you have some strategy for tracking the state yourself, some strategy for binding to targets without collisions, or some strategy for unbinding everything right after usage. How many drawbuffers are active on the current FBO again? Is it a readbuffer? Oh, damn, that blend function isn't correct in this place. You can always use glGet*() to retrieve the current state (as sketched below), but nobody wants that in a real-time application. So there's still plenty of state left that can lead to false results all over the place.
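    For illustration, the kind of glGet*() interrogation being dismissed here; each query can stall the pipeline:

        GLint elementBuffer = 0, tex2D = 0, texCube = 0, drawFBO = 0;
        // Which index buffer is bound right now?
        glGetIntegerv(GL_ELEMENT_ARRAY_BUFFER_BINDING, &elementBuffer);
        // What does the active texture unit currently hold?
        glGetIntegerv(GL_TEXTURE_BINDING_2D, &tex2D);
        glGetIntegerv(GL_TEXTURE_BINDING_CUBE_MAP, &texCube);
        // Which FBO are we drawing into?
        glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &drawFBO);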
    Last edited by thokra; 06-09-2012 at 06:48 AM.

  10. #20
    Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Those problems are all easily resolved by simply binding to modify and unbinding after the modification. Then you don't have to care what's bound to GL_ELEMENT_ARRAY_BUFFER, because you know it's nothing.

    Keep your changes local and you won't have a problem. A sketch of that discipline is below.
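    A sketch of the bind-edit-unbind idiom, assuming an index buffer ibo filled outside the draw path (note that with a VAO bound, the GL_ELEMENT_ARRAY_BUFFER binding is recorded in that VAO, so such edits are best done with no VAO bound):

        // Bind, edit, then restore the binding point to zero immediately,
        // so no later code can accidentally depend on (or clobber) it.
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices,
                     GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);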

    How many drawbuffers are active on the current FBO again? Is it a readbuffer?
    I don't see how DSA helps with that. And what is a "readbuffer"?
