Which OpenGL Version

I know OpenGL 3 deprecated many, if not almost all, of OpenGL 2’s functions, which is why some OpenGL programmers call the older versions the ones to stay away from. Currently, though, I only have a laptop that supports OpenGL 2.1, but I want to start learning OpenGL. On some websites I have read that you can go ahead and learn OpenGL 2 as long as you avoid the deprecated functions. The problem is that I wouldn’t know which functions are deprecated and which aren’t, and even if I did, sorting that out would consume a considerable amount of time and give me more problems while learning the API, which brings me back to the same question of whether I should learn 2 or not. Seeing that programs made with OpenGL 2 will be rendered useless by modern software, learning OpenGL 2 seems like a waste of time. With this in mind, how am I supposed to tackle my situation while refraining (as much as possible) from buying new hardware?

Also, I have read that Mesa on Linux has its own implementation of OpenGL 3. If I’m not mistaken, Mesa3D is a driver that can be used with Intel, Nvidia, or AMD hardware; please do correct me if I’m wrong. If so, can I use Mesa to program for modern OpenGL? Thanks in advance.

By the way, is there any way of providing the deprecated functionality by other means, such as supplying the missing functions through some kind of extra files?

OpenGL 3.0 has those features too; they were merely marked as deprecated, not removed. OpenGL 3.1 actually removed the deprecated features (except that they were still available via the GL_ARB_compatibility extension). OpenGL 3.2 introduced profiles: the core profile lacks the deprecated features, the compatibility profile retains them.

[QUOTE=Preedne;1263572]Currently though I only have a laptop which support OpenGL 2.1 but want to start on learning OpenGL. On some websites I have read that you could go and learn OpenGL 2 but then you would have to avoid/dodge on using those functions which are deprecated, problem is that I wouldn’t know which is which when it comes to deprecated and not, even if I did, I’d say it would pretty much consume a considerable amount of time giving me more problems on learning the API and so back to the same question if I should learn 2 or not.[/QUOTE]
I think so. IMHO, it’s easier to learn OpenGL by using some of the deprecated features, then stop using them once you’re more familiar with the general concepts.

The main problem with learning “modern” OpenGL from the outset is that you need to understand a number of features before you can get a single triangle on screen. Without the fixed function pipeline, you have to be able to write GLSL vertex and fragment shaders, compile and link them, and query (or set) uniform and attribute locations, set uniforms and bind attribute arrays. Without client-side arrays, you have to be able to create, allocate, populate and use buffer objects. Without the matrix operations, you have to either write your own matrix functions or use an additional library (which, depending upon the presence or absence of Murphy’s Law, might add anywhere between 5 minutes and a day to the time taken to get your first program to even compile, let alone run).
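To make that concrete, here’s a minimal sketch of the shader boilerplate being described (names are illustrative; error checking, info-log queries, and the extension/function loading some platforms require are omitted):

```c
/* Sketch: compile, link, and query a minimal GLSL 1.10 program.
 * Assumes a valid GL 2.0+ context is already current. */
static const char *vs_src =
    "#version 110\n"
    "attribute vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

static const char *fs_src =
    "#version 110\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint build_program(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);   /* a real program checks GL_COMPILE_STATUS */

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);   /* ...and GL_LINK_STATUS here */

    /* Once linked, attribute locations can be queried: */
    GLint pos_loc = glGetAttribLocation(prog, "position");
    (void)pos_loc;  /* used later when binding vertex data */
    return prog;
}
```

All of this is just to reach the point where the fixed-function pipeline would have let you call glBegin and go.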

Once you’re familiar with OpenGL, none of this is an issue. Compared to the complexity of any “real” program, the boilerplate involved is trivial. But it does mean that “hello, world” programs are significantly more complex when using the modern approach compared to the legacy approach.
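As a concrete taste of the “write your own matrix functions” option, here is about the smallest useful piece, a sketch with no GL dependency at all (OpenGL consumes matrices in column-major order, e.g. via glUniformMatrix4fv or the legacy glLoadMatrixf):

```c
#include <string.h>

/* Fill m with a column-major 4x4 translation matrix.
 * Column-major means the translation ends up in elements 12..14. */
void mat4_translation(float m[16], float x, float y, float z)
{
    memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f;  /* identity diagonal */
    m[12] = x;
    m[13] = y;
    m[14] = z;
}
```

A full replacement for the legacy matrix stack also needs rotation, perspective, and multiply routines, which is exactly why libraries like GLM exist.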

Mesa itself is up to 3.3, but many of the features require support from hardware. If your laptop only supports OpenGL 2, it may be because the video hardware isn’t capable of some of the OpenGL 3 features.

Thank you for the thorough reply. So if I’ve got this right, then using the GL_ARB_compatibility extension on OpenGL 3.1, or the compatibility profile on OpenGL 3.2, means that programs written against OpenGL 2 will still run? If so, then I guess I’ll learn OpenGL as you and the others have said: OpenGL 2 first, then modern OpenGL for speed and better effects.

Here’s my take on the subject.

Is it more complex to learn in the beginning? In a certain sense, yes. But that is only in the sense that you have to actually understand what’s going on. You have to know that there’s a rendering pipeline and that data flows from one end to the other.

You are able to learn the FFP without actually having the slightest clue what it means. You can “learn” that glNormal associates normals with vertices. You can “learn” that glLight calls setup lights that act on the normals to create the effect of illumination. You can “learn” that glColor associates colors with vertices.

But do you actually understand what’s happening? Take what I just talked about: normals, colors, and lights. Does a person who learned these with FFP OpenGL truly understand it? Do they realize that lighting is done per-vertex? Do they understand why glColorMaterial is needed, and what it’s doing internally? Do they understand the difference between “diffuse” and “specular” lighting, or do they just fiddle around with values until it looks “OK”? And so forth.

Fixed-function OpenGL allows you to have the illusion of understanding without the actual knowledge. You are allowed to think that you know more than you do, because it’s simpler.

And this kind of knowledge is not mere pedantry; it’s crucial if you’re really going to solve problems with OpenGL and graphics. Take the above knowledge, and now add textures. Well… what does a texture really mean? If you don’t understand how the per-vertex color really interacts with the lighting equation, how do you explain that a texture just allows you to vary that color on a per-fragment basis? It’s that final understanding which is important in the world of shaders.

It is possible to learn FFP OpenGL to a degree that you actually understand what’s going on behind the scenes. But it will take far more time and effort. And much of this time and effort will be wasted.

For example, I can explain the details and intricacies of glTexEnv in all of its myriad insanity. Is that knowledge of any value in a shader world? Was the time I spent learning it of use? Mostly, no. Oh, I learned about rasterization and fragments; that’s useful. I learned about the interpolation of per-vertex outputs. But the specific details of the “texture environment” functions?

Completely useless now.

Furthermore, there’s a lot of stuff that you can leave out of the early lessons. Uniforms are not needed in “hello world”. You still have to use attributes, but there need be no querying of them, thanks to explicit attribute locations in GL 3.3. Buffer objects for vertex data are really just 3 extra function calls (and you can explain them as nothing more than a GPU malloc, which they are). So that takes care of lesson 1, which is the hardest part of shader-based GL.
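Those 3 extra calls, in a sketch (assuming a current GL context; the triangle data is made up for illustration):

```c
/* The "GPU malloc": create a buffer name, bind it, then allocate
 * and fill it with vertex data in a single call. */
static const float verts[] = {
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* And the GL 3.3 explicit location that removes the need to query:
 * the vertex shader simply declares
 *     layout(location = 0) in vec3 position;
 * so the client code can hard-code attribute index 0. */
```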

Once you get lesson 1 down, you can introduce each new concept as needed: uniforms, multiple attributes (along with vertex formats), etc.

You can learn step-by-step with modern GL effectively. It will certainly be a front-loaded experience to an extent, one that requires that you gain an understanding of what’s going on. As evidence, there’s the partially complete book about learning modern OpenGL in my signature. It’s up to you to judge whether it does the job I say can be done.

Of course, your main problem is that you lack a computer with GL 3.x capabilities. So… that makes it kind of a moot point for you.

Thank you for giving your time to reply in this thread and explain the whys. Reading your book’s introductory part, I get the sense that I would finally be able to understand this tutorial http://blogs.msdn.com/b/davrous/archive/2013/06/13/tutorial-series-learning-how-to-write-a-3d-soft-engine-from-scratch-in-c-typescript-or-javascript.aspx which I had been fiddling with for quite some time, getting stuck at the device object. I’ve watched the video about the tutorial (only the first video, as the other videos seem to only talk about features, not how 3D graphics engines work), and it introduced me to the maths and the process of how 3D graphics engines work, like the normals you mentioned. Though the video wasn’t really that helpful to me, it gave me the most basic understanding of the process going on under the code. That said, are there functions in OpenGL 2/2.1 similar to modern OpenGL that I could use to start? I have read on some site that old OpenGL 2 has the functions modern OpenGL has; the transition from old to modern was explained as OpenGL 3 simply being OpenGL 2 stripped of the immediate-mode, ready-to-use functions, plus the new features. [STRIKE]If I were to study OpenGL 3 guidelines/tutorials/lessons, would I be able to confidently apply them with OpenGL 2? I might sound like I’m thinking of compatibility issues for future development, but again, I lack the hardware; even so, maybe this is a good opportunity to see how developers manage old hardware while still delivering great technology.[/STRIKE] Never mind my last question… I realized that I might just hit the new features not available in old OpenGL in the process… Another dead end.

Just learn 2.1 then.

Get a copy of OpenGL Superbible 4; it covers OpenGL 2.1 and is really not a bad book. It also introduces shaders, so it’s not entirely outdated.

After reading that book I’m pretty sure you’ll be able to learn a newer version really fast once you get a better computer.

In practice OpenGL 3.0 added very little new functionality over 2.1, because a lot of it was already available as common extensions to 2.1 (framebuffer object, floating point textures etc.), and OpenGL 3.1 is tricky to work with, because basically the driver can decide if your app gets a core or compatibility profile.

So you can use 2.1 + common extensions in a way that is pretty close to OpenGL 3.0 or 3.1, if you avoid immediate mode, display lists, and non-generic attributes. What you won’t get in 2.1 are interpolation qualifiers and attribute-less rendering, and there are some syntactic differences (e.g. use of “in/out” instead of “varying”), but those are not so important that you can’t learn the more modern OpenGL concepts on 2.1. There are of course more things to learn in higher 3.x versions, like uniform buffer objects, but those can be learned later.
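As an example of how small those syntactic differences are, here is a trivial GLSL 1.20 (OpenGL 2.1) vertex shader, with the 1.50+ spelling of each declaration in the comments:

```glsl
#version 120

attribute vec3 position;  // GLSL 1.50+: in vec3 position;
varying vec3 vColor;      // GLSL 1.50+: out vec3 vColor;

void main()
{
    vColor = vec3(1.0, 0.0, 0.0);
    gl_Position = vec4(position, 1.0);
}
```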

[QUOTE]Just learn 2.1 then.

Get a copy of OpenGL Superbible 4; it covers OpenGL 2.1 and is really not a bad book. It also introduces shaders, so it’s not entirely outdated.

After reading that book I’m pretty sure you’ll be able to learn a newer version really fast once you get a better computer.[/QUOTE]

Thanks for replying. I was thinking of doing so, but wanted to get some insight first since I’m having trouble deciding myself. Old and modern OpenGL are really far apart, and deciding which one to invest time in has me stuck. The way I see it, the two versions are practically different APIs, and learning one isn’t going to help much with the other, but I really don’t have a choice now since my hardware limits me.

I’ve given the book a brief reading, and according to it there are two methods of doing things in OpenGL, which I have also read about on the gamedev StackExchange: one currently deprecated and the other currently in use. It mentioned that immediate mode (deprecated) and retained mode (current) are both in OpenGL 2, as are the fixed-function pipeline and custom shaders. This leaves me with the idea that it’s okay to learn the old way as long as you use it to learn the modern way, preparing for when I get the proper tool, in my case better hardware. What I have understood so far is that OpenGL 2 (old OpenGL) would introduce me to how things are arranged, while also offering a way to use part of how things are done the modern way. Then learning OpenGL 3 (modern OpenGL) would teach me how each thing works, rather than just arranging functionality provided by the previous version of the API. Does that sum it all up?

[QUOTE]In practice OpenGL 3.0 added very little new functionality over 2.1, because a lot of it was already available as common extensions to 2.1 (framebuffer object, floating point textures etc.), and OpenGL 3.1 is tricky to work with, because basically the driver can decide if your app gets a core or compatibility profile.

So you can use 2.1 + common extensions in a way that is pretty close to OpenGL 3.0 or 3.1, if you avoid immediate mode, display lists, and non-generic attributes. What you won’t get in 2.1 are interpolation qualifiers and attribute-less rendering, and there are some syntactic differences (e.g. use of “in/out” instead of “varying”), but those are not so important that you can’t learn the more modern OpenGL concepts on 2.1. There are of course more things to learn in higher 3.x versions, like uniform buffer objects, but those can be learned later.[/QUOTE]

Thanks for the reply. So basically, the update to OpenGL 3 mostly removed functions and introduced a few new features? I guess I’ll go learn OpenGL 2 while finding my way toward doing things the modern way afterwards.

Programming in OpenGL 2.1 does not necessarily mean you are using the “old way”. As others said, you have a chance to learn the “modern way” with OpenGL 2.1 as well. Just ensure the following:

  1. use VBO for all drawing,

  2. use shaders (the vertex and fragment shaders are still the main workhorses),

  3. use glVertexAttribPointer() to specify attributes.
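Put together, those three points boil down to a handful of calls per draw. A sketch, assuming a program object `prog` and a buffer `vbo` were created earlier (both names are illustrative):

```c
glUseProgram(prog);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Feed the generic attribute "position" from the bound VBO:
 * 3 floats per vertex, tightly packed, starting at offset 0. */
GLint loc = glGetAttribLocation(prog, "position");
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);

glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(loc);
```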

If your drivers also support texture arrays, you have nothing to worry about. BTW, what GPU are you dealing with?
I’d also recommend a book: the OpenGL ES 2.0 Programming Guide. Although it is for OpenGL ES rather than desktop OpenGL, I think it is quite useful.

Yes. The legacy API isn’t going away; there’s too much code which relies upon it.

Not only that, but the compatibility profile for each version includes the newest features added in that version as well as all of the legacy API. So you can use features which were added in 4.5 alongside the fixed-function pipeline in the same program.

However: not all implementations will offer the compatibility profile for the latest version. E.g. Apple’s plan for OSX is that you’ll be able to get a 4.x core profile (assuming the GPU supports it) or a 3.x compatibility profile, but not a 4.x compatibility profile.

[QUOTE]Programming in OpenGL 2.1 does not necessarily mean you are using the “old way”. As others said, you have a chance to learn the “modern way” with OpenGL 2.1 as well. Just ensure the following:

  1. use VBO for all drawing,
  2. use shaders (the vertex and fragment shaders are still the main workhorses),
  3. use glVertexAttribPointer() to specify attributes.

If your drivers also support texture arrays, you have nothing to worry about. BTW, what GPU are you dealing with?
I’d also recommend a book: the OpenGL ES 2.0 Programming Guide. Although it is for OpenGL ES rather than desktop OpenGL, I think it is quite useful.[/QUOTE]

Thank you for replying; these details gave me a good idea of where to start, and I’ll keep them in mind. About the graphics card… you might find this disturbing, if not outright laughable, but it is a Mobile Intel GMA 4500MHD integrated graphics chip (Mobile Intel 4 Series Express Chipset Family). I wanted to learn on this laptop before providing valid proof that an upgrade is necessary. As for OpenGL ES 2.0, I’ve read that its current situation is actually similar to that of OpenGL 2 and 3. I’ve been thinking of resorting to it if OpenGL doesn’t work out for me, but then thought about its community support for learning.

I did a quick read of the book you recommended and found this quite intriguing: “The programmable pipeline allows applications to implement the fixed function pipeline using shaders, so there is really no compelling reason to be backward compatible with OpenGL ES 1.x.” Isn’t that the same idea as using the GL_ARB_compatibility extension in standard OpenGL?

[QUOTE]Yes. The legacy API isn’t going away; there’s too much code which relies upon it.

Not only that, but the compatibility profile for each version includes the newest features added in that version as well as all of the legacy API. So you can use features which were added in 4.5 alongside the fixed-function pipeline in the same program.

However: not all implementations will offer the compatibility profile for the latest version. E.g. Apple’s plan for OSX is that you’ll be able to get a 4.x core profile (assuming the GPU supports it) or a 3.x compatibility profile, but not a 4.x compatibility profile.[/QUOTE]

Thank you for the reply; it was good to know these details given my current situation. About Apple’s plan, though: does that mean they’re starting to drop the legacy API? I’ve read that their OS was behind on OpenGL technology before; wouldn’t that bring double the consequences, since I presume a lot of programs on their OS were built on OpenGL 2?

Yeah, I probably should have made that clearer: shader-based OpenGL doesn’t necessarily require 3.x. What 3.x changes, from a user perspective, are a lot of “spelling” issues. In 2.1, you say “attribute” and “varying”, while in 3.x it’s “in” and “out”. And in 3.x core you don’t have gl_FragColor/gl_FragData for your fragment shader outputs; you declare your own.

These differences however are pretty easy to pick up, so I wouldn’t consider the old way to be a hindrance.
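The fragment-shader side of that spelling change, for completeness (GLSL 1.20 shown; the 1.50+ form is in the comments):

```glsl
#version 120

varying vec3 vColor;   // GLSL 1.50+: in vec3 vColor;

void main()
{
    // GLSL 1.50+ core drops gl_FragColor; you declare your own output:
    //     out vec4 fragColor;  ...  fragColor = vec4(vColor, 1.0);
    gl_FragColor = vec4(vColor, 1.0);
}
```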

Well no, it’s kinda the opposite. What they’re saying is that ES 2.0 has no backwards compatibility with ES 1.1. By contrast, 3.1 and above have optional backwards compatibility with 3.0 and below.

It’s highly unlikely that Apple will drop 2.1 support. But they’re probably not going to add compatibility support for higher versions.

On your second point, MacOSX is behind on OpenGL technology, but not that far behind. You have to remember: OpenGL 3.0 was in 2008, over 6 years ago. 3.2 was in 2009. MacOSX today only supports OpenGL up to 4.1 (core profile). So they’re still supporting pretty modern versions of OpenGL; just not the latest and greatest.

And to be honest, I’d guess the blame lies more with Intel than Apple. MacOSX takes a lowest-common-denominator approach with OpenGL: it exposes only whatever is supported by the lowest level of hardware available. Intel’s D3D-11-class HD chips only have 4.1 driver support, so that’s what MacOSX exposes.

And the only reason Intel doesn’t support higher versions is out of laziness. Their hardware can do it (since D3D11 requires it); they just don’t want to do the actual work.

[QUOTE]Yeah, I probably should have made that clearer: shader-based OpenGL doesn’t necessarily require 3.x. What 3.x changes, from a user perspective, are a lot of “spelling” issues. In 2.1, you say “attribute” and “varying”, while in 3.x it’s “in” and “out”. And in 3.x core you don’t have gl_FragColor/gl_FragData for your fragment shader outputs; you declare your own.

These differences however are pretty easy to pick up, so I wouldn’t consider the old way to be a hindrance.[/QUOTE]

I guess I’ll go ahead and learn OpenGL 2.1 using retained mode and custom shaders. Thanks a lot to all those who helped.

[QUOTE]Well no, it’s kinda the opposite. What they’re saying is that ES 2.0 has no backwards compatibility with ES 1.1. By contrast, 3.1 and above have optional backwards compatibility with 3.0 and below.[/QUOTE]

So all devices supporting OpenGL ES 1.1 were dropped then…

[QUOTE]It’s highly unlikely that Apple will drop 2.1 support. But they’re probably not going to add compatibility support for higher versions.

On your second point, MacOSX is behind on OpenGL technology, but not that far behind. You have to remember: OpenGL 3.0 was in 2008, over 6 years ago. 3.2 was in 2009. MacOSX today only supports OpenGL up to 4.1 (core profile). So they’re still supporting pretty modern versions of OpenGL; just not the latest and greatest.

And to be honest, I’d guess the blame lies more with Intel than Apple. MacOSX takes a lowest-common-denominator approach with OpenGL: it exposes only whatever is supported by the lowest level of hardware available. Intel’s D3D-11-class HD chips only have 4.1 driver support, so that’s what MacOSX exposes.

And the only reason Intel doesn’t support higher versions is out of laziness. Their hardware can do it (since D3D11 requires it); they just don’t want to do the actual work.[/QUOTE]

So the compatibility profile ends at 3.x on Apple. About Intel and Apple, I guess I can see your point there; I’ve seen user complaints about OpenGL on their Intel products that never got fixed.

[QUOTE]So all devices supporting OpenGL ES 1.1 were dropped then…[/QUOTE]

No. You’re confusing “hardware” or “device” with “API”.

Hardware can support numerous APIs. Most mobile platforms that come with ES 2.0 or greater also support ES 1.1 for legacy applications. So the devices have backwards compatibility. You can write an application that uses ES 1.1, and it’ll run on devices that use ES 1.1.

But there is no API backwards compatibility. You can’t create an ES 2.0 context and try to shove ES 1.1 code at it. Whereas you can create an ES 3.0 context and shove ES 2.0 code at it, and it’ll work just fine. ES 3.0 is backwards compatible with ES 2.0, but 2.0 is not compatible with 1.1.

Not really. With ARB_compatibility or a compatibility profile, the driver still implements the legacy OpenGL functions, so you can still run OpenGL 1.x programs unmodified with the newest hardware (the legacy context-creation functions tend to return a compatibility context by default; programs which don’t need compatibility have to state that explicitly).

With OpenGL ES 2, you have to figure out what the legacy functions were doing and write equivalent shader code, then change the client code accordingly (e.g. setting uniforms rather than fixed-function state, etc).
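A sketch of what such a port looks like for a single piece of fixed-function state (the uniform name `uColor` and program handle `prog` are made up for illustration):

```c
/* Legacy fixed-function: the driver tracked the current color.
 *     glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
 *
 * Shader-based equivalent: the fragment shader declares
 *     uniform vec4 uColor;
 * and the client code sets it explicitly on the bound program: */
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "uColor");
glUniform4f(loc, 1.0f, 0.0f, 0.0f, 1.0f);
```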

No. They’re just going to force an either-or choice between the latest features (4.x core profile) and compatibility (3.x compatibility profile), rather than allowing a program to have both at the same time (4.x compatibility profile).

There isn’t really much of a disadvantage to doing so. Software which makes use of the most recent features tends not to use legacy API functions. The modern approach is more efficient; and in large programs, it’s either simpler or at least not significantly more complex.

The only real benefit of a 4.x compatibility profile from the developer’s perspective is that it might allow some of the new features to be “tacked on” to legacy software. But only a small proportion of the new functionality would be usable that way. Many of the new features are only useful if you’re using shaders, so code using the fixed-function pipeline would have to be rewritten to use shaders before using those features was even an option.

Actually, Apple doesn’t support 3.2 compatibility. The highest version they support that has the old stuff is 2.1. So the choices are 4.1 core, 3.2/3.3 core, or 2.1.

Yes, and unfortunately some older Intel hardware on OSX advertises GL 3.3 support, even though it relies on Apple’s software fallbacks to get there. The HD 3000 and 2000 don’t do geometry shaders, for example, and GL drops to software rendering to handle them. It’s painfully slow (and sometimes incorrectly rendered), enough that we’ve blacklisted those parts on OSX.

[QUOTE]No. You’re confusing “hardware” or “device” with “API”.

Hardware can support numerous APIs. Most mobile platforms that come with ES 2.0 or greater also support ES 1.1 for legacy applications. So the devices have backwards compatibility. You can write an application that uses ES 1.1, and it’ll run on devices that use ES 1.1.

But there is no API backwards compatibility. You can’t create an ES 2.0 context and try to shove ES 1.1 code at it. Whereas you can create an ES 3.0 context and shove ES 2.0 code at it, and it’ll work just fine. ES 3.0 is backwards compatible with ES 2.0, but 2.0 is not compatible with 1.1.[/QUOTE]

Indeed I was; thanks for this clarification, I understand better now. I see now how different companies try different approaches built on the same general idea.

[QUOTE]No. They’re just going to force an either-or choice between the latest features (4.x core profile) and compatibility (3.x compatibility profile), rather than allowing a program to have both at the same time (4.x compatibility profile).

There isn’t really much of a disadvantage to doing so. Software which makes use of the most recent features tends not to use legacy API functions. The modern approach is more efficient; and in large programs, it’s either simpler or at least not significantly more complex.

The only real benefit of a 4.x compatibility profile from the developer’s perspective is that it might allow some of the new features to be “tacked on” to legacy software. But only a small proportion of the new functionality would be usable that way. Many of the new features are only useful if you’re using shaders, so code using the fixed-function pipeline would have to be rewritten to use shaders before using those features was even an option.[/QUOTE]

I see; good to know for both consumers and developers, advancing the technology without crippling what already exists.

[QUOTE]Yes, and unfortunately some older Intel hardware on OSX advertises GL 3.3 support, even though it relies on Apple’s software fallbacks to get there. The HD 3000 and 2000 don’t do geometry shaders, for example, and GL drops to software rendering to handle them. It’s painfully slow (and sometimes incorrectly rendered), enough that we’ve blacklisted those parts on OSX.[/QUOTE]

And here I was thinking CPUs with HD 2000/3000 would be a good substitute.