Thread: Teaching OpenGL

  1. #1
    Intern Newbie
    Join Date
    Jan 2010
    Location
    Linköping, Sweden
    Posts
    46

    Teaching OpenGL

    From another thread:

    Quote Originally Posted by Aleksandar View Post
Hey guys, how about moving the discussion to another thread? All this stuff now relates to teaching OpenGL and has nothing in common with display lists.

Teaching graphics is a really interesting topic. Since 2004 I've been thinking of switching to a purely shader-based OpenGL course as part of the undergraduate Computer Graphics course, but so far nothing has changed. I've been discouraged by my colleagues, who suggest that a shader-based approach is too complicated for that kind of course. Even with immediate-mode rendering it is pretty hard for students to do everything I give them for lab exercises. Since this is their first and only CG course, there are a lot of concepts they have to absorb.

As you've probably noticed, I want to do everything myself, and that's the policy I impose on my students in order to make them understand what's really happening under the hood. Writing a framework for dealing with shaders is not a simple task. But if I do it for them, they'll probably copy that framework into other projects as if it were the ultimate solution, which it certainly is not. We have to give students knowledge of how to use OpenGL, not of some home-built framework.
I teach two graphics courses, one purely about CG and one where advanced CG topics relevant to game programming make up a large part. The first one is extremely old, founded some time in the early 80's, and it landed in my hands a decade ago. When I got it, it had no ambition for high performance whatsoever, which I immediately changed. OpenGL came in soon after. Shaders entered in 2004, when I started the second course. At that time, a "pure shader" approach was out of the question. Shaders were moved to the first course in 2008 (because the topic was too important to wait for the second). This winter, I kicked out immediate mode for good, for the simple reason that 3.2 has finally propagated far enough to be considered commonplace. (There were still students in the course who had problems with unsupported OSes.)

This was a major revision, of course. All labs were completely rewritten, much of my course book was rewritten, and all lectures had to be revised. Much work, and lots of just-in-time changes. The response from the students was very positive. They liked knowing that they were being taught modern CG, and the price in complexity was worth it (and it wasn't as bad as it seems, since all transform code turns into straight math operations, which is actually a simplification). The students performed at least as well as before (I would say that the number of downright amazing student projects grew), and in the course evaluation they liked my approach; I even got praise for my overdrive-speed-rewritten text book as one of the best they had had (despite all the glitches that inevitably follow a fast revision). The move was quite successful, so I think it was worth the work.

    The question of support code, frameworks, add-ons, whatever you may call them, is more relevant than before. In the labs, we use quite a bit of in-house support code for managing shaders, loading textures and models and for math operations, and that package had to grow when moving to 3.2.

So if you teach OpenGL, have you moved to 3.2 or similar non-deprecated OpenGL, and how was the move?

  2. #2
    Member Regular Contributor
    Join Date
    Jan 2012
    Location
    Germany
    Posts
    325
I moved to teaching 3.2 core nearly a year ago. One open question was how many students still had older hardware or Intel integrated graphics with lower specs. So the first thing to do in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn't get everything from the beginning, just a GLFW application with GLM and GLEW; GLEW was patched to support the core profile on Mac OS X). It turns out that of nearly 100 students only 4 or 5 (I don't have the numbers at hand) had GPUs that were too old or were Mac users who hadn't upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context; not 3.1, not 3.0, not one without a version number while hoping for the latest). Those students got an account for our lab.
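    Just for illustration (this is not the course framework, and it uses today's GLFW 3 API; GLFW 2.x, which was current at the time, used glfwOpenWindowHint instead), requesting exactly a 3.2 core context and printing the version the driver actually gives you can look roughly like this:

    Code :
    #include <GL/glew.h>
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit())
            return 1;

        // Ask for exactly 3.2 core; on OS X this is the only way to get a modern context.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);   // required on OS X

        GLFWwindow* window = glfwCreateWindow(640, 480, "GL version check", NULL, NULL);
        if (!window)
        {
            glfwTerminate();
            return 1;   // context creation failed: driver/GPU too old
        }
        glfwMakeContextCurrent(window);

        glewExperimental = GL_TRUE;   // needed so GLEW works with core profiles
        glewInit();

        std::printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
        std::printf("GLSL:       %s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));

        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }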

Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even set up ARB_debug_output callbacks where supported. When we discussed VBOs, I let them write all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.
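    As a rough sketch of that kind of support code (the function names here are made up for illustration, not taken from the actual framework), compiling a shader and printing its info log boils down to:

    Code :
    #include <GL/glew.h>
    #include <cstdio>

    // Compile one shader stage and print the info log if compilation fails.
    GLuint compileShader(GLenum type, const char* source)
    {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint status = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
        if (status != GL_TRUE)
        {
            char log[4096];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            std::fprintf(stderr, "shader compile error:\n%s\n", log);
        }
        return shader;
    }

    // Link two stages into a program and print the info log if linking fails.
    GLuint linkProgram(GLuint vertexShader, GLuint fragmentShader)
    {
        GLuint program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);
        glLinkProgram(program);

        GLint status = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &status);
        if (status != GL_TRUE)
        {
            char log[4096];
            glGetProgramInfoLog(program, sizeof(log), NULL, log);
            std::fprintf(stderr, "program link error:\n%s\n", log);
        }
        return program;
    }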

Yes, you have to hide a lot of complexity and only focus on one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good and it's even more obvious to the students why we teach the basics and the math behind graphics: you have to implement it yourself in the shaders (there is no magic gluLookAt() or glVertex() to fall back on, so you can't get away without understanding the modelview matrix etc.).

On a personal note, I'm strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics are done anymore. If I taught my students outdated material with no relevance to the real world, material that doesn't even help them understand the relevant basics, I would fail as a teacher IMHO.


    tl;dr:
    * there is no reason to stick to the immediate mode for teaching
    * teaching core and shaders from the beginning works
    * update your curriculum

  3. #3
    Intern Newbie
    Join Date
    Jan 2010
    Location
    Linköping, Sweden
    Posts
    46
    Quote Originally Posted by menzel View Post
    I moved to teaching 3.2 core nearly a year ago. One open question was how many students still had older hardware or Intel integrated graphics with lower specs. So the first thing to do in the semester was to have all students query their OpenGL version and compile and run our quickly developed framework (they didn't get everything from the beginning, just a GLFW application with GLM and GLEW; GLEW was patched to support the core profile on Mac OS X). It turns out that of nearly 100 students only 4 or 5 (I don't have the numbers at hand) had GPUs that were too old or were Mac users who hadn't upgraded to 10.7 (hint: on OS X you have to request exactly a 3.2 core context; not 3.1, not 3.0, not one without a version number while hoping for the latest). Those students got an account for our lab.
    OS X is indeed an important reason to use 3.2 and no other version. There were Mac users with older versions who had to upgrade (myself included), but there were also some free Unix systems that didn't support 3.2. But as long as we have a lab with usable computers, supporting the students' own computers is a bit of a luxury.

    We made a stripped-down GLUT clone for OSX, although FreeGLUT should work as well. Apple's built-in GLUT is, however, not updated.
    Quote Originally Posted by menzel View Post
    Code to load shaders from files, print compile and linker errors/warnings etc. is probably the most important support. The basic application, in which they had to implement the draw calls depending on the assignment, also checked for OpenGL errors and even set up ARB_debug_output callbacks where supported. When we discussed VBOs, I let them write all the GL calls themselves; when we discussed shader compiling and linking, I let them code that as well. After each lesson they get classes that do this for them.
    I didn't have them write shader loading code, but they did work directly with VBOs and VAOs, as well as writing their own "look-at" function.
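    A hand-rolled look-at of the kind meant here might look roughly like this (GLM is used only for vec3/mat4 storage so the math stays visible; the function name is mine, not from the lab code):

    Code :
    #include <glm/glm.hpp>

    glm::mat4 myLookAt(glm::vec3 eye, glm::vec3 center, glm::vec3 up)
    {
        glm::vec3 f = glm::normalize(center - eye);      // viewing direction
        glm::vec3 s = glm::normalize(glm::cross(f, up)); // camera "right"
        glm::vec3 u = glm::cross(s, f);                  // corrected up

        glm::mat4 view(1.0f);   // GLM matrices are column-major: view[column][row]
        view[0][0] =  s.x; view[1][0] =  s.y; view[2][0] =  s.z; view[3][0] = -glm::dot(s, eye);
        view[0][1] =  u.x; view[1][1] =  u.y; view[2][1] =  u.z; view[3][1] = -glm::dot(u, eye);
        view[0][2] = -f.x; view[1][2] = -f.y; view[2][2] = -f.z; view[3][2] =  glm::dot(f, eye);
        return view;   // bottom row stays (0, 0, 0, 1)
    }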
    Quote Originally Posted by menzel View Post
    Yes, you have to hide a lot of complexity and only focus on one problem/aspect per week, but it is doable to teach pure 3.2 core in an introductory CG course. The feedback was good and it's even more obvious to the students why we teach the basics and the math behind graphics: you have to implement it yourself in the shaders (there is no magic gluLookAt() or glVertex() to fall back on, so you can't get away without understanding the modelview matrix etc.).

    On a personal note, I'm strongly against teaching immediate mode in 2012, as providing one vertex at a time (with a predefined set of vertex attributes) is not how the GPU works and not how graphics are done anymore. If I taught my students outdated material with no relevance to the real world, material that doesn't even help them understand the relevant basics, I would fail as a teacher IMHO.

    tl;dr:
    * there is no reason to stick to the immediate mode for teaching
    * teaching core and shaders from the beginning works
    * update your curriculum
    Up to OpenGL 2.1, starting with immediate mode was kind of customary, not least since the Red Book, Angel, and Hearn & Baker all did it. So I followed their example, but only for the very first lab, after which we moved to glDrawElements. But now that I see the effect of skipping it, I believe that habit was indeed not a good one. The students did get a comfortable backdoor, which I often used myself for pure prototyping, but some never realized that the subsequent move to arrays was important, got stuck with immediate mode, and of course ran into performance problems. Now that they get VAOs as their primary tool, nobody has that problem. They get a tougher start, but a better result.
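    For comparison, the core-profile replacement for a first immediate-mode triangle is roughly the following sketch (buffer names and attribute location 0 are illustrative assumptions, not from the actual labs):

    Code :
    // One-time setup: a VAO referencing a vertex buffer and an index buffer.
    GLfloat positions[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f
    };
    GLuint indices[] = { 0, 1, 2 };

    GLuint vao, vbo, ibo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void*)0);   // attribute 0 = position
    glEnableVertexAttribArray(0);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

    // Each frame, with the shader program bound:
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (const void*)0);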

    But still, I am uncomfortable with an API that is so much less intuitive. The old one was more self-explanatory. The move was necessary for performance reasons, but I ask myself whether we couldn't package it differently. We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?

  4. #4
    Super Moderator OpenGL Lord
    Join Date
    Dec 2003
    Location
    Grenoble - France
    Posts
    5,580
    Quote Originally Posted by Ragnemalm View Post
    We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?
    I found it elegant a long time ago, before shaders.
    But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.

  5. #5
    Intern Newbie
    Join Date
    Jan 2010
    Location
    Linköping, Sweden
    Posts
    46
    Quote Originally Posted by ZbuffeR View Post
    I found it elegant a long time ago, before shaders.
    But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.
    There is of course one constraint to take into account: OpenGL fits any language. I think that is a great feature that makes it last; you can carry OpenGL knowledge across language and OS borders. That means no class libraries, and no locking it into one specific object model or other language-specific constructs. But I wouldn't rule out the possibility of a smoother interface that is still so close to OpenGL that it doesn't hide it but merely helps it.

    Do we have anyone here teaching OpenGL who hasn't taken the 3.2-or-similar jump, or who did take it and has had other experiences?

  6. #6
    Junior Member Newbie RickA's Avatar
    Join Date
    Sep 2006
    Location
    NL
    Posts
    21
    I'm about to start teaching 3D graphics to some co-workers soon (post-university level) and I've decided to go with WebGL (which is essentially OpenGL ES 2.0). That works cross-platform, and having no compile step speeds up iteration time, which I hope will allow more experimentation and a greater understanding of the material. I thought it would make more sense to start with shaders first and then move backward to the vertex submission pipeline, but I get the impression that you all start with vertex submission.

  7. #7
    Intern Newbie
    Join Date
    Jan 2010
    Location
    Linköping, Sweden
    Posts
    46
    Quote Originally Posted by RickA View Post
    I'm about to start teaching 3D graphics to some co-workers soon (post-university level) and I've decided to go with WebGL (which is essentially OpenGL ES 2.0). That works cross-platform, and having no compile step speeds up iteration time, which I hope will allow more experimentation and a greater understanding of the material. I thought it would make more sense to start with shaders first and then move backward to the vertex submission pipeline, but I get the impression that you all start with vertex submission.
    I considered starting with shaders, and that is still an open question to me. If I start with shaders I must provide a bigger main program, but if they are not supposed to look at that for the first lab, that doesn't have to be a problem. The advantage would be that the students can get a firm grasp of the shader concept before dealing with the details of the main program.

    WebGL is much hyped, and of course web-browser-based graphics has its value, but using a scripting language will bog down the CPU side. Are we using inefficient scripting languages just because our compilers are too slow? Too much energy is wasted on scripting engines already. I have heard about JIT compilers for JavaScript that might help, but even JIT-compiled Java is pretty inefficient, so I don't expect much better from JavaScript. WebGL is included at the end of my course, as one of the "where to go from here" subjects, but I hesitate to go there for the whole course. Lower performance, no free choice of programming language any more, but you get it in the web browser. Not obviously the right path.

  8. #8
    Junior Member Newbie RickA's Avatar
    Join Date
    Sep 2006
    Location
    NL
    Posts
    21
    I'm glad to hear I'm not the only one who has thought of starting with shaders.

    Quote Originally Posted by Ragnemalm View Post
    WebGL is much hyped, and of course web-browser-based graphics has its value, but using a scripting language will bog down the CPU side. Are we using inefficient scripting languages just because our compilers are too slow? Too much energy is wasted on scripting engines already. I have heard about JIT compilers for JavaScript that might help, but even JIT-compiled Java is pretty inefficient, so I don't expect much better from JavaScript. WebGL is included at the end of my course, as one of the "where to go from here" subjects, but I hesitate to go there for the whole course. Lower performance, no free choice of programming language any more, but you get it in the web browser. Not obviously the right path.
    I actually agree with you on this, if you have control over the entire curriculum. In my case this is just an after-hours course for interested co-workers, and I'm not positive they all know C++ (or any other particular language). They likely won't all know JavaScript either, but that's extremely easy to pick up. I'll be sure to give them pointers on where to go for native OpenGL information, though. As for performance, static geometry is not an issue at all, of course, since it never leaves the GPU once it's there.

  9. #9
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    Quote Originally Posted by Ragnemalm View Post
    But still, I am uncomfortable with an API that is so much less intuitive. The old one was more self-explanatory. The move was necessary for performance reasons, but I ask myself whether we couldn't package it differently. We used to have an elegant and intuitive API, while now we have a cruder API which requires an intermediate layer between itself and a larger application. Is that really a sign of a good API?
    Really? "Elegant and intuitive"? Are you sure you want to make that claim?

    Very well; let's test that theory. Here's a simple fragment shader that takes a texture color, uses the alpha to blend it with a uniform color (for, say, team colors in an RTS game), and then multiplies that with vertex-calculated lighting:

    Code :
    #version 150

    in vec4 lightFactor;   // vertex-computed lighting, interpolated
    in vec2 texCoord;      // texture coordinates from the vertex shader
    out vec4 outputColor;
     
    uniform sampler2D diffuseSampler;
    uniform vec4 baseColor;
     
    void main()
    {
      vec4 maskingDiffuse = texture(diffuseSampler, texCoord);
      vec4 diffuseColor = mix(maskingDiffuse, baseColor, maskingDiffuse.a);
      outputColor = diffuseColor * lightFactor;
    }

    And here's the same thing, only done with fixed-function GL 2.1 stuff:

    Code :
    glActiveTexture(GL_TEXTURE0);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_INTERPOLATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_CONSTANT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_CONSTANT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_ALPHA, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_ALPHA, GL_SRC_ALPHA);

    glActiveTexture(GL_TEXTURE1);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA);

    Marvel at the more "elegant and intuitive API".

    Without looking it up in the documentation, can you even tell that the env_combine version does the same thing as the GLSL version? It's really easy to forget what you have to do in non-shader GL if you want to do anything out of the ordinary. That's why people say that the older API was more "elegant"; because they never tried to do something with it besides blast a vertex-lit triangle with a single texture onto the screen.

    The problem with fixed-function learning is that it stunts growth. Yes, if all you want to do is blast a triangle on the screen, it's easy. If all you want to do is apply some lighting to it, you can use simple rules. Even with a texture. But if you actually want to think, if you want to do anything even slightly unorthodox or out of the ordinary, welcome to ARB_texture_env_combine hell, where it takes over 10 lines of code for a simple multiply.

    This makes people want to avoid doing interesting things. It forces them to think only in terms of the most basic, simple fixed-function functionality. Yes, you may learn fast initially, but you learn not to think for yourself, and you learn not to wander outside of the simple, fixed-function box. It gives you a false sense that you actually know something, just because you were able to throw a lit, textured object onto the screen quickly.

    When it comes time for them to learn what's actually going on, they have no idea how to do that. When it comes time for shaders, you basically have to start over in teaching them. You may as well start off right. It may be slower going initially, but it pays off once they get the hang of it. And they gain active agency in their learning and a real understanding of what's going on.

  10. #10
    Senior Member OpenGL Pro
    Join Date
    Jan 2012
    Location
    Australia
    Posts
    1,106
    Quote Originally Posted by ZbuffeR View Post
    But GL as an API must be a very efficient way to talk to the hardware. If one wants a more user-friendly API, it should be built on top of OpenGL.
    I have to agree.

    I think it is important when talking about teaching to make it clear whether your students are undergrads or people who need to learn OpenGL for professional use. For my two bits: if you are talking about undergrads, I feel a watered-down version that focuses on the fundamentals of graphics is more important. The actual OpenGL API is going to change radically over the next 10 years as hardware changes, so teaching them the nitty-gritty is mostly wasted effort. I find it very frustrating when I hire a graduate who knows, say, OpenGL but is completely lost when you try to move them to DirectX because they lack a lot of the fundamentals. Knowing the basic interface between the GPU and CPU is good, but I don't think it matters whether they know all the calls to compile a shader.
