OpenGL state

I’m new to this forum and relatively new to OpenGL.
I have to deal a bit more with OpenGL now and I’m wondering what the best accompanying handbook or literature would be besides looking up documentation online.

I have an old IRIS User’s Guide covering the basics of graphics programming (it’s copyright Silicon Graphics, 1986 :). I did some programming at that time on a Control Data Cyber 910 workstation (that was an SGI under the hood).

It’s very instructive, showing the different viewing transformations, but the API calls don’t start with the gl prefix.

Calls (to name a few) are:
ginit()
clear()
doublebuffer()
color()
ortho()
move()
lookat()
draw()

There are things like geometry pipeline feedback etc.

Anyway, probably not the best guide to start with :slight_smile:

What is the future of OpenGL? Should I write my code so that it runs on both DirectX and OpenGL?

What is the current version? Or better: which version of OpenGL does the machine I’m developing on support? It’s a Dell Inspiron 9400 with an Nvidia GeForce Go 7900.

Would that still be appropriate?

Thanks for your patience.

Christoph

Your card supports OpenGL 2.1 according to this:
http://en.wikipedia.org/wiki/Comparison_…_7xxx.29_series

This is the version I would advise beginning with anyway; you will find lots of tutorials and resources. However, you have to understand that a lot of “fixed-function” features have been deprecated in recent GL versions: still useful to understand, but no need to spend too much time learning them.

The most recent OpenGL versions are 3.3 and 4.1, depending on the hardware.

This page describes how to start programming, with a list of tutorials at the bottom:
http://www.opengl.org/wiki/Getting_started

This wiki has a warning banner on pages talking about deprecated features.
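
Once you have a context up, you can also just ask the driver directly. A minimal sketch in C (the helper name print_gl_info is mine; the glGetString calls are standard GL and need a current context):

    #include <stdio.h>
    #include <GL/gl.h>

    /* Must be called with a current GL context. */
    void print_gl_info(void)
    {
        printf("GL_VENDOR   : %s\n", (const char *) glGetString(GL_VENDOR));
        printf("GL_RENDERER : %s\n", (const char *) glGetString(GL_RENDERER));
        printf("GL_VERSION  : %s\n", (const char *) glGetString(GL_VERSION));
    }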

According to the table at http://en.wikipedia.org/wiki/Comparison_…_7xxx.29_series we’re talking about Direct3D 9.0c or OpenGL 2.1 support, a little long in the tooth for sure but by no means shabby. You won’t be able to write modern OpenGL (3.x, 4.x) programs for it, but if you’re running Windows and have D3D11 you should be able to target a D3D9 feature level using the D3D11 API. Either way you’ll still get vertex buffers and shaders, which you should be looking to learn.

These days there’s nothing between D3D and OpenGL in terms of ease of use and features, so your decision comes down to whether portability is important for you and which coding style you feel most comfortable with.

Avoid versions of D3D prior to 8 like the plague - all the horror stories are true. D3D 8 and 9 are almost identical, with the major difference being shader support (HLSL in 9 only). At this level D3D is a very clean and elegant API, with its big dirty nasty secret being that it’s actually considerably easier to use than OpenGL for a lot of key functionality (particularly vertex buffers).

OpenGL definitely remains easier to get started with, and is untouchable by any version of D3D in this regard. Sticking to pre-1.5 functionality (i.e. glBegin/glEnd, no shaders) is what I’d recommend for your first few programs. You’ll be exposed to a lot of new concepts, and although the drawing paradigm may be “wrong” for a modern program, it definitely helps with the learning curve. Setting up extensions can be a chore but eventually you’ll write your own library (or use one of the free ones) to do that, and will be able to reuse that code all of the time.
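
To make that concrete, a minimal sketch of the pre-1.5 style: one coloured triangle in immediate mode, assuming a window and GL context have already been created for you (by GLUT, SDL or similar):

    #include <GL/gl.h>

    /* Draw one coloured triangle the old glBegin/glEnd way. */
    void draw_frame(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
        glEnd();
    }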

Neither API is going to go away any time soon, so you can safely make your choice and be comfortable that what you learn will be valid for some time yet. I’d personally recommend learning both in the longer term, as each has a subtly different perspective on things that can enrich your experience and enhance your knowledge of the other.

The canonical OpenGL dead-tree resources are the Blue Book, the Red Book and (for shaders) the Orange Book. The OpenGL SuperBible also gives a good in-depth overview of everything.

For D3D the tutorials and samples in the SDK are a good initial starting point.

The 7-series Nvidia cards are capable of OpenGL 2.1 and of running fairly decent vertex and pixel shaders.
Like all OpenGL programs, yours will have to enumerate the supported extensions so that you can decide which of the more recent (modern) features you’d like to play with and support (see the sketch at the end of this post).
Some people in these forums keep suggesting that to learn OpenGL you should concentrate on ‘modern’ OpenGL, i.e. 3.3/4.0, and forget about OpenGL 2.1 and the compatibility profile of GL 3.0+.

I disagree.

First of all, you have older hardware. This means you need to start with a version of OpenGL which is going to run on that h/w. Secondly, depending upon your drivers for the h/w you have, you may still be able to get much of the OpenGL 3+ functionality via extensions. So the fact that you have requested an OpenGL 2.x context does not matter! My final point is that the ‘compatibility’ profile of OpenGL 3.x is not going anywhere. Who is to say what the future holds? Nvidia has already stated that the compatibility profile is here to stay, since it supports potentially millions of already-written applications. Who knows, next year the ARB may retire the ‘core profile’ as we know it today and replace it with a new profile which has even more deprecated functions. Therefore writing an application for ‘core’ is really the same as writing an application for ‘compatibility’: you are just writing an application designed to work for a target audience.
My advice: learn OpenGL 2.1 + extensions. Love the convenience of all that extra functionality (removed from core) and make your life a whole lot easier. Your target audience will be larger too.
Anything you write within the next two years I guarantee you’ll end up throwing away in any case, as it always takes several rewrites to get a decent set of foundation classes.
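
Coming back to the extension enumeration mentioned at the top of this post, a rough sketch of the GL 2.x mechanism, where glGetString hands back one big space-separated string (the strstr test is admittedly crude, since one extension name can be a prefix of another):

    #include <string.h>
    #include <GL/gl.h>

    /* Crude check: is 'name' present in the driver's extension string? */
    int has_extension(const char *name)
    {
        const char *all = (const char *) glGetString(GL_EXTENSIONS);
        return all != NULL && strstr(all, name) != NULL;
    }

    /* e.g. if (has_extension("GL_ARB_framebuffer_object")) ... */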

First of all, you have older hardware. This means you need to start with a version of OpenGL which is going to run on that h/w.

Every day, older hardware gets, well, older. Already, most GL 2.1 hardware is no longer supported in ATI/NVIDIA’s current crop of drivers. It won’t be long before GL 3.x is the only version of OpenGL provided by up-to-date drivers.

Secondly, depending upon your drivers for the h/w you have, you may still be able to get much of the OpenGL 3+ functionality via extensions.

Define “much of the OpenGL 3+ functionality”. If you mean “Vertex array objects and ARB_framebuffer_object” then yes. If you mean the actual hardware features of 3.x (UBOs, transform feedback, geometry shaders, etc), then no.

Love the convenience of all that extra functionality (removed from core) and make your life a whole lot easier.

Except it won’t make your life easier. Not in the long-term. Oh, you might be able to make some pretty pictures. But the moment you start wanting to do more, you run into real problems.

See, the problem with learning OpenGL with all of the “convenience” features turned on is that you never really learn how these features actually work. You don’t understand what this stuff is really all about. And even if you do, all it does is limit your thinking.

Using glTexEnv encourages you to think of textures as being synonymous with images. This stunts your creativity and makes it difficult for you to use textures for the innumerable non-image related tasks later. It encourages you to look at the texture environment as “the way things work” rather than one possible way to do fragment processing out of many others.
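
To make that contrast concrete, a sketch: the fixed-function call in question next to the one GLSL expression that replaces it (tex and uv are illustrative names):

    /* Fixed function: the texture environment dictates what texels mean. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* The shader equivalent of GL_MODULATE is one line of GLSL, and from
       here the texel is just data you are free to use for anything:
           gl_FragColor = gl_Color * texture2D(tex, uv);                  */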

It’s possible to get through learning “convenience” OpenGL without knowing what you’re doing. That is not possible if you learn OpenGL with shaders from the beginning. It is usually better to learn things the right way now than to have to unlearn bad habits later.

So what if the GL version is higher than GL 2.1? Great! Use the latest available GL version. The point is, though, that older h/w does not support the h/w features of GL 3+, so his drivers will be limited to creating a GL 2.1 context. As long as the h/w and drivers exist, there is nothing wrong with this. And guess what: when he moves his development environment onto new h/w (which does support GL 3/4), his application will still work just as he last left it on the old h/w. Where’s the harm in that? When he’s ready, he can request a GL 3 context and add extra functionality. Rome was not built in a day, you know :wink:

Define “much of the OpenGL 3+ functionality”. If you mean “Vertex array objects and ARB_framebuffer_object” then yes. If you mean the actual hardware features of 3.x (UBOs, transform feedback, geometry shaders, etc), then no.

Framebuffer objects are the key part of GL 3 in my opinion. Transform feedback and geometry shaders are not for everyone. Uniform buffer objects, texture buffer objects… all nice to have, but KISS (Keep It Short and Simple to start with).
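
For reference, the FBO setup being talked about really is only a handful of calls (GL 3.0 / ARB_framebuffer_object names; color_tex stands in for an already-created texture, and error handling is trimmed):

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        ; /* not usable yet: check the attachments */

    /* ...render into the texture, then back to the window: */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);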

Except it won’t make your life easier. Not in the long-term. Oh, you might be able to make some pretty pictures. But the moment you start wanting to do more, you run into real problems.

…except he’s starting to learn OpenGL. Of course he’s going to run into problems…and hopefully solve them. That’s the point of learning. Armed with that knowledge and understanding of how the GL works (and is documented), he can find out new ways to solve old problems with GL 3+ and a shader-based approach for everything.

See, the problem with learning OpenGL with all of the “convenience” features turned on is that you never really learn how these features actually work. You don’t understand what this stuff is really all about. And even if you do, all it does is limit your thinking.

You still have to start somewhere…
Yes, some things are limiting…some are confusing… nothing is perfect.

It is usually better to learn things the right way now than to have to unlearn bad habits later.

Learning to program with a context other than GL 3 is not wrong. Neither is using the fixed functions. There is no right or wrong way, and there’s no need to unlearn anything either. At least he can get started on his Dell laptop…

While I’d agree with this in principle, in practice I’d be of the opinion that the right way is just too steep a learning curve. Being thrown in at the deep end with shaders, VBOs, etc can be very daunting, and there is a risk that one could experience some significant information overload.

I’d agree with the recommendation to learn the “wrong way” first, but support it with the knowledge (a) that it’s wrong, and (b) (importantly) of why it’s wrong.

The priority here is to get that first triangle on screen, light it, colour it, put a texture on it and make it spin. Once the basic knowledge is obtained, then it’s time to start looking at the right way before bad habits become ingrained.

Many thanks for all the valuable recommendations and tips.
I really appreciate them, and I’m pleasantly surprised that I haven’t been bashed for my somewhat naive-sounding opening questions.

My application will be mechanical engineering rather than games. 3D modelling will be my main area of interest.

So I’m glad that I can still use my hardware for the next 3/4 of a year and then possibly jump to a new platform.

What would be the top OpenGL-capable notebook these days?


Christoph

Well, let me chime in on this.

I don’t believe in laptops/notebooks, ever since one of mine died due to accumulated dust and overheating. Nowadays I follow this algorithm:

  • buy a small case (there are very small ones available)
  • buy a PC bundle kit (very cheap) with a good GPU/CPU, and mind the motherboard form factor
  • assemble a PC yourself in your small case.

Of course, it is not as convenient as a notebook, but it works and it’s usually half the price of a comparable notebook. You can carry the case around, get a portable keyboard and monitor…

Top of the range laptops can run anything you want, and you can even get them with SLI configurations. They can also create OpenGL 3.x or 4.x contexts.
I love my laptop - great way to spend the evening with the ‘other half’ (coding away whilst she natters on about this and that!). Can even take it with you when you have to make those boring in-law trips at the weekend :wink:

Top of the range means top price. If you’re short on $ and you don’t have a better half (which also rules out the in-laws), then my suggestion is, I think, the better option.

Also, you’ll be able to reuse some old components and upgrade only the GPU as new ones become available. The upgrade options are limited with most laptops, and laptop components are often more expensive than desktop ones.

Desktops are completely untouchable for price/performance, true, but a decent laptop is also great for convenience, and once you get used to its advantages it’s difficult to go back. I find them very handy for travelling with, especially in situations where a supply of power may not always be available.

It’s mainly the portability I like about laptops. When I travel on business I leave the work laptop at home and take my dev machine instead! Those long evenings in hotels are sooo boring!
Agreed though, laptops are more expensive and can’t be upgraded. Still, if you have the cash and don’t mind replacing sooner than the equivalent desktop, they are the best option.

While I’d agree with this in principle, in practice I’d be of the opinion that the right way is just too steep a learning curve.

I don’t agree. Indeed, I’m writing a series demonstrating how one can learn graphics just fine with fully modern OpenGL.

The priority here is to get that first triangle on screen, light it, colour it, put a texture on it and make it spin.

Why is that “the priority?”

Thinking like that is exactly why most attempts to teach modern graphics fail. They’re too focused on getting pretty pictures, when instead they should be trying to teach the reader specific things.

Furthermore, what I like about a more methodical approach is that you can show people how to think differently. If you just “put a texture” on something, it gives the wrong impression about how to use a texture. But if you have taught the reader how lighting works, by showing off lights in a textureless environment, then you can later emphasize that the texture is simply the source of the diffuse color for lighting.
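
A small sketch of that last point as a GL 2.x-era fragment shader, embedded the way a C program would carry it (the u_/v_ names are illustrative, not canonical):

    static const char *diffuse_frag =
        "uniform sampler2D u_diffuseTex; /* the 'image' is just diffuse data */\n"
        "uniform vec3      u_lightDir;   /* normalized light direction       */\n"
        "varying vec3      v_normal;\n"
        "varying vec2      v_uv;\n"
        "void main() {\n"
        "    vec3  diffuse = texture2D(u_diffuseTex, v_uv).rgb;\n"
        "    float ndotl   = max(dot(normalize(v_normal), u_lightDir), 0.0);\n"
        "    gl_FragColor  = vec4(diffuse * ndotl, 1.0);\n"
        "}\n";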

This not being one of them: :eek:

…holy cow! :smiley: That’s my Dell XPS…and that could be me in the picture (except much more manly of course!).

Alfonse, I think you’d make an exceptional teacher. In my capacity as an IT professional (I used to teach Microsoft MCSE courses to blue-chip employees for a decade), I know just how valuable insight can be and how much richer it can make your knowledge.
The trouble is that for the beginner it’s all too much information; they just want to see something visual to stimulate the sense of achievement. With OpenGL programming from scratch, there is a steep barrier to getting started. I know, I have been there (creating the GL window, adding all the API interfaces to the Delphi language by hand, adding extensions, etc.). Honestly, you are exhausted by the time you’re ready to actually use OpenGL.
There are, in my opinion, two types of GL programmer: those who know how to use OpenGL and those who understand OpenGL. I am the former; I started by getting that triangle to work, and the addiction grew from there. So you are right, Alfonse: the understanding is not there at first, but it grows with experience, a little at a time.
The trouble is, though, you cannot throw all that knowledge and understanding at someone who is a complete novice, because it just goes way over their head; it’s mostly wasted. Only when they have a taste of how it works and have satisfied their own initial curiosity and desires can the background-information void be filled. And boy, do they appreciate it then, but only when the individual is ready.
How do I know all this? Because I used to teach people face to face, day in, day out. You could tell the people who were ready to embark on the next level of knowledge discovery from those who were just starting out. There is, after all, only so much anyone can take in.

I applaud you for trying to create a pure GL 3 way of doing things… this is good. However, it’s also more long-winded, technically difficult and mathematically heavy. None of this suits an absolute beginner, hence the suggestion to start with the compatibility profile, which, at the very least, lowers some of the barriers to initial programming.

For starting out I definitely recommend using GL 2.1; here is why:

  1. It lets one phase from the fixed-function pipeline to shaders. The fixed-function pipeline gives an easier way to start, especially with respect to co-ordinate transformations (see the sketch after this list). By the way, there are situations where the fixed-function pipeline is the better choice at times; see NVIDIA’s OpenGL Functionality slides, starting at slide 97.

  2. GL 3.x, and for that matter 4.x, are NOT on any of the Intel GPUs. I would be quite surprised if we see GL 3.x on Intel GPUs even within two years.

  3. As of now, Mac OS X is also GL 2.1. Even with a GL 3-capable GPU, Mac OS X is GL 2.1 only (but with some extensions).

  4. GL 2.1 + framebuffer objects is a superset of OpenGL ES 2. There are “GLES2” emulation libraries for MS Windows and Linux from both Imagination Technologies and ARM; the emulation libraries map to the system’s native GL.
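
On point 1, a sketch of why the fixed-function route is gentler for co-ordinate transformations: the whole camera setup is a few calls, with GLU doing the matrix work:

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Classic fixed-function camera: projection + modelview. */
    void setup_camera(int width, int height)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double) width / (double) height, 0.1, 100.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,   /* eye position    */
                  0.0, 0.0, 0.0,   /* point looked at */
                  0.0, 1.0, 0.0);  /* up direction    */
    }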

Though, I freely admit, using an API that is over 4 years old may not seem so sexy (and GL 2.0 is over 6 years old), GL 2.1 is an excellent starting point and gives you the greatest compatibility.

With OpenGL programming from scratch, there is a steep barrier to getting started. I know, I have been there (creating the GL window, adding all the API interfaces to the Delphi language by hand, adding extensions, etc.). Honestly, you are exhausted by the time you’re ready to actually use OpenGL.

Which is exactly why I wouldn’t (and don’t in my series) talk about any of that. If the purpose is to learn OpenGL and 3D graphics programming, then the focus needs to remain on that. Libraries that can hide the setup and utility stuff that is needed to make OpenGL work are paramount.

I applaud you for trying to create a pure GL 3 way of doing things… this is good. However, it’s also more long-winded, technically difficult and mathematically heavy.

I don’t agree, and my experiment with teaching GL 3.3 seems to suggest at least to some degree that it’s much simpler than you think. My earliest tutorials are fairly short. Once you explain the basics of shaders, it’s a very simple concept. Then you add on interpolation and a bit more about vertex attributes. Then you add on a little more, etc.

It’s really all about the order that you provide information in. Shaders really aren’t that complicated, especially when they’re simple passthrough shaders. Neither are buffer objects, particularly when dealing with a single vertex attribute and a single triangle. Once the foundation is laid, you can build on it with more and more complexity.
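
To show just how small that foundation is, a sketch for GL 3.3 (compile/link helpers, VAO setup and error checks left out; assumes an extension loader such as GLEW has been initialized):

    /* A passthrough shader pair... */
    static const char *vs_src =
        "#version 330\n"
        "layout(location = 0) in vec4 position;\n"
        "void main() { gl_Position = position; }\n";

    static const char *fs_src =
        "#version 330\n"
        "out vec4 outputColor;\n"
        "void main() { outputColor = vec4(1.0, 1.0, 1.0, 1.0); }\n";

    /* ...and one buffer object holding a single triangle. */
    static const float triangle[] = {
        -0.5f, -0.5f, 0.0f, 1.0f,
         0.5f, -0.5f, 0.0f, 1.0f,
         0.0f,  0.5f, 0.0f, 1.0f,
    };

    GLuint create_triangle_buffer(void)
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
        return vbo;
    }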

GL 3.x, and for that matter 4.x, are NOT on any of the Intel GPUs

Is OpenGL 2.1? I’m being at least semi-serious here: does Intel actually support OpenGL 2.1 with a relatively solid GL implementation? Because I was never under the impression that using OpenGL on an Intel GPU was anything other than a crapshoot.

GL 2.1 + framebuffer objects is a superset of OpenGL ES 2

While this is true, if I recall correctly, GLES 2.0 has no fixed-function pipeline. So I’m not sure how helpful this is.

I think Alf got it exactly right; if he upgraded the context creation to 4.1, his tutorial would be even better. Actually, the more contexts, the better.

BTW: I wonder what happens if you do GL too much, maybe there’s a GL leg or hand syndrome.