Starting out with OpenGL 3/4

I’m very familiar with OpenGL 2.1, but now I am starting with OpenGL 3/4 and I have some questions.

  1. What are the hardware requirements of each? If I remember correctly, OpenGL 3 requires at least an NVidia 9000 series GPU, and OpenGL 4 requires something like an NVidia 480?

  2. Are OpenGL 3.3 and 4.1 pretty much the same, except for some advanced features like tessellation? Can I use the same code to write the renderer for both, or is the command set significantly different?

  3. Can you point me towards the best tutorials to set up an OpenGL 3/4 context? It looks like the methodology has changed quite a bit since OpenGL 2.1.

Thanks!

A few “getting started” code snippets:
http://www.opengl.org/wiki/Tutorials

What are the hardware requirements of each?

GL 3.x is equivalent to DirectX 10, so any card advertised as DX10-capable can run GL 3.x. Likewise, GL 4.x is equivalent to DX11, so any DX11 card can run it.
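If you want to confirm what a given card/driver combination actually gives you, you can just ask at runtime once a context is up. A minimal sketch (assumes a current context plus a header/loader that exposes the GL 3.0 enums, e.g. GLEW, and <cstdio> for the printf):

// GL_MAJOR_VERSION / GL_MINOR_VERSION exist from GL 3.0 onward.
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);
glGetIntegerv(GL_MINOR_VERSION, &minor);
printf("OpenGL %d.%d\n", major, minor);
// On pre-3.0 drivers these enums don't exist; parse glGetString(GL_VERSION) instead.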

Are OpenGL 3.3 and 4.1 pretty much the same, except for some advanced features like tessellation?

GL 4.x is a functional superset of GL 3.1 and above. The only place where a higher GL version is not a functional superset of a lower version is the transition between GL 3.0 and GL 3.1, when functionality marked deprecated in 3.0 was removed from 3.1 core. If you ask for a compatibility context (as explained below), even this transition is still a superset of the previous version.

Can you point me towards the best tutorials to set up an OpenGL 3/4 context?

Here is how it is done on Windows.
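For reference, the gist of it is the WGL_ARB_create_context path. A bare-bones sketch (no error handling or pixel-format setup; function name is just for illustration; assumes you already have a window DC, a throwaway legacy context made with wglCreateContext, and that you link against opengl32):

#include <windows.h>
#include <GL/gl.h>

// From WGL_ARB_create_context / WGL_ARB_create_context_profile:
#define WGL_CONTEXT_MAJOR_VERSION_ARB             0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB             0x2092
#define WGL_CONTEXT_PROFILE_MASK_ARB              0x9126
#define WGL_CONTEXT_CORE_PROFILE_BIT_ARB          0x00000001
#define WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB 0x00000002

typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

HGLRC CreateGL33Context(HDC hdc, HGLRC tempContext)
{
    // wglGetProcAddress only works with a context current, hence the
    // throwaway legacy context.
    wglMakeCurrent(hdc, tempContext);
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        // Compatibility keeps old 2.1-style code working; swap in
        // WGL_CONTEXT_CORE_PROFILE_BIT_ARB for a core context.
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        0
    };

    HGLRC ctx = wglCreateContextAttribsARB(hdc, 0, attribs);
    wglMakeCurrent(0, 0);
    wglDeleteContext(tempContext);
    wglMakeCurrent(hdc, ctx);
    return ctx;
}

The GLX path on Linux is the same idea with glXCreateContextAttribsARB, and toolkits like GLFW or freeglut can hide most of this for you.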

How humbling. Thank you.

In NVidia-land, IIRC OpenGL 3.x is G80+ (i.e. GeForce 8xxx), whereas OpenGL 4.x is GF100+ (i.e. Fermi, GeForce GTX 4xx).

  1. Are OpenGL 3.3 and 4.1 pretty much the same, except for some advanced features like tessellation? Can I use the same code to write the renderer for both, or is the command set significantly different?

Pretty much the same. Yes, you might be able to use the same renderer, depending on the level at which you abstract things (i.e. the level at which your users use your tools). If they plug in at a high level, you definitely can. If they’re basically crunching their own raw GLSL shader code and GL calls, then that gets harder.

If you use the COMPATIBILITY profile, you can just ease into the new stuff and phase out the old stuff gradually as time permits. There aren’t many “cliffs” where you’ve all of a sudden got to make lots of changes. GLSL 1.2->1.3 is one of those, but you don’t have to make all those changes at once. You can make the changes required gradually and then just flip the #version 130 switch when all the old built-in refs are gone.
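To make that concrete, here’s roughly what one of those gradual changes looks like: a toy fragment shader written against the old 1.20 built-ins and then in the 1.30 style (just a sketch, names made up):

// GLSL 1.20 style: old built-ins (varying, gl_FragColor, texture2D).
const char* frag120 =
    "#version 120\n"
    "varying vec2 texcoord;\n"
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(tex, texcoord);\n"
    "}\n";

// GLSL 1.30 style: user-declared in/out variables and the generic texture() call.
const char* frag130 =
    "#version 130\n"
    "in vec2 texcoord;\n"
    "out vec4 fragColor;\n"
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    fragColor = texture(tex, texcoord);\n"
    "}\n";

Under #version 130 the old built-ins still compile (they’re only deprecated), which is what makes the gradual approach possible.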

If you haven’t already, it’d be worthwhile to write down “why” you are looking to add support for the latest GLs and what you hope to provide to your users. That should help decide your renderer design questions.

GLSL 1.2->1.3 is one of those, but you don’t have to make all those changes at once. You can make the changes required gradually and then just flip the #version 130 switch when all the old built-in refs are gone.

You don’t even have to do that. 1.30 is fully backwards compatible with 1.20. It’s version 1.40 that removes the deprecated stuff.

What’s the story with GLSL #include? I thought I heard something about this a year or so ago. The GLSL spec does not contain the term “#include” anywhere in it.

I know it is trivial to write your own preprocessor. The problem arises when you are analyzing error messages from the GPU and the reported line numbers don’t match your shader files. I definitely don’t want to go through that again.

  1. Has some kind of include file mechanism been implemented?

  2. If not, is it possible to define your own file names and line numbers, the way you can in C++? For example:

#line 3 "main.frag"
vec4 v=vec4(0,0,0,1);
#line 4 "main.frag"
v.x += 1.0;
//Preprocessor adds the included file here
#line 1 "myinclude.frag"
v.y = 5;

(The second option would be my preference, actually.)

NVidia compiler error messages are traditionally crap, so you might want to implement the #include directive yourself anyway. Dark mused in a previous post that this is probably due to NVidia’s internal translation of GLSL into Cg.

I own an additional ATI card, just for the better error messages and other testing.

What’s the story with GLSL #include? I thought I heard something about this a year or so ago. The GLSL spec does not contain the term “#include” anywhere in it.

The story is that there is the non-core extension ARB_shading_language_include that provides this functionality. After 6 months, nobody has implemented it. Not even NVIDIA, who’s usually pretty good about that sort of thing. They’ve implemented 4.1 extensions, and yet this one still hasn’t been touched.

So either the extension itself has problems, or IHVs don’t think it’s important enough to bother implementing. At least, not until they get finished with everything else first.
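For what it’s worth, if a driver ever did expose it, the extension’s entry points would be used something like this (completely untested for obvious reasons; LoadFile and the path names are made up, and 'shader' is a GLuint that already has its source attached):

// Register the include text as a "named string" in the extension's virtual
// filesystem, then let the driver resolve #include at compile time.
const char* incName = "/lighting.glsl";              // virtual path
std::string incText = LoadFile("lighting.glsl");     // hypothetical helper

glNamedStringARB(GL_SHADER_INCLUDE_ARB,
                 -1, incName,                        // -1 = null-terminated name
                 (GLint)incText.size(), incText.c_str());

// The shader source can then contain: #include "/lighting.glsl"
const GLchar* searchPaths[] = { "/" };
glCompileShaderIncludeARB(shader, 1, searchPaths, NULL);  // instead of glCompileShader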

True, if you don’t mind the compiler spewing reams and reams of “feature X has been deprecated” warnings back from your COMPILE_STATUS logs, obscuring potentially real errors.
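(For completeness, this is the usual pattern for pulling that log back so you can at least filter the deprecation noise before looking for real errors; a sketch where 'shader' is whatever GLuint you just compiled, and <vector> is included:)

GLint status = GL_FALSE, logLen = 0;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLen);
if (logLen > 1) {
    std::vector<char> log(logLen);
    glGetShaderInfoLog(shader, logLen, NULL, &log[0]);
    // drop lines containing "deprecated" here, report the rest
}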

Can’t say I agree with that. They’re usually pretty good in my experience. The only time I recall seeing a confusing one was due to a compiler bug.

Yeah, that is pretty amazing. Makes you wonder how it slid through the approval process if nobody likes it.

It’s pretty easy to implement yourself, however, or to implement something more restricted (to keep users from going nuts with this feature).
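Something along these lines usually does the job; a rough sketch (no include guards, no error handling; function and variable names are just for illustration). It emits the standard GLSL form of #line, which takes a line number and an integer source-string number rather than a filename, so you keep a little table mapping those numbers back to files:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Expand #include "file" lines recursively and emit "#line <line> <string-number>"
// directives so driver errors can be mapped back to the right file via 'files'.
std::string Preprocess(const std::string& path, std::vector<std::string>& files)
{
    files.push_back(path);
    const int fileId = (int)files.size() - 1;

    std::ifstream in(path.c_str());
    std::ostringstream out;
    std::string line;
    int lineNo = 0;

    // Included files get a #line header; the top-level file keeps its natural
    // numbering so the #version directive can stay first.
    if (fileId != 0)
        out << "#line 1 " << fileId << "\n";

    while (std::getline(in, line)) {
        ++lineNo;
        if (line.compare(0, 9, "#include ") == 0) {   // naive: directive at column 0
            std::string inc = line.substr(line.find('"') + 1);   // parse #include "file"
            inc = inc.substr(0, inc.find('"'));
            out << Preprocess(inc, files);
            out << "#line " << (lineNo + 1) << " " << fileId << "\n";  // resume this file
        } else {
            out << line << "\n";
        }
    }
    return out.str();
}

Vendors format their messages differently, but the leading number in something like 2(15) or ERROR: 2:15 is normally the source-string index that #line sets, so it maps straight back through the files vector.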