Do we really need OpenGL 2?

Do we really need GL2 now? The ARB and 3Dlabs have been working in parallel. With the new superbuffer extensions, maybe a new synchronization extension, and ARB_vertex_program/ARB_fragment_program, all this new stuff IS actually OpenGL 2. Cg is becoming more and more popular. So is the 3Dlabs proposal still relevant now?
I guess the ARB solution is no worse - and it works. I wasn't able to test any 3Dlabs extensions, because there are no drivers. The situation with the ARB is quite different, though: with hardware-independent extensions, this solution looks very nice.
What do you think of this question?

I've posted this thread in the beginners' forum, but no one seemed to be interested.

I've posted this thread in the beginners' forum, but no one seemed to be interested.

50 minutes without replies and you come to the conclusion no one is interested?

Yeah! I am very ambitious!

The original implementation of OpenGL is more than 10 years old. And that in a business where everything older than 3 years is usually totally outdated.
And OpenGL is not up to date either. All these extensions are made to keep it alive, but over a hundred vendor-specific extensions are not a clean way to do it.
It's the same with a computer. You can upgrade the processor, extend the memory, buy a new drive or gfx card, but one day you will decide that it is necessary to buy a completely new one.
I really look forward to 2.0, since it will have the same functions, but it will be much easier and cleaner to use. And a lot of stuff will be vendor-independent that isn't at the moment (one of the simplest examples is v-sync).

Jan.

I think OpenGL 2.0 will be some set of extensions all folded into the standard, and hopefully, a clean-up of the context and window integration model.

It will probably look a bit like the 3Dlabs proposal, and a little bit like something else. 3Dlabs were very insightful, for example, to drive programmability at all levels, and high-level languages, as early as they did – that concept certainly won't go away in whatever actually becomes OpenGL 2.0.

Wasn’t one of the points of GL2 to get rid of Microsoft’s ICD stuff? So MS doesn’t have control over the basic OpenGL driver in Windows.

Originally posted by Zengar:
Yeah! I am very ambitious!

Can I get that ambition supersized…with a coke?

I think you’d be able to appreciate GL2 if you delved into GL1 a little more. Working across platforms with GLX and WGL is annoying. It’s difficult scaling your app all the way from NV10 to NV30 and beyond with all the different extensions out there. And then scaling across different IHVs. The point of an API is to abstract the hardware in such a way that you don’t have to worry about which IHV’s card is in the user’s computer.

Working with textures, display lists, and pbuffers across contexts is difficult to get right and fast.

Shaders will get longer, which is where glslang will become much more useful. The driver needs to be able to decide how to break the shader down into passes. Like Tim Sweeney says here: http://www.beyond3d.com/reviews/ati/r350/index.php?p=fb
And others have stated the same opinion.

I was also for unity, but over time I came to the realization that a good API should do both: provide unity across hardware and give you the option to expose IHV-specific hardware. Giving you the option is the key here. D3D takes the unity road but fails on the option part. Sometimes the functionality you get back from accessing the hardware directly outweighs the additional work. Take register combiners, for example: they can be combined with fragment shaders, but ARB GL can't do that. It's case by case for each developer when it comes to writing IHV extension code.

GL2 is certainly needed, at least the high-level shading language. Everything is so much better with an HLSL and you're so much more productive. Once you've tried it, you don't want to go back. I've been working on both GL2 stuff and DX9 HLSL over the last month, and for me it's obvious that this is the future. While I prefer the GL2 ideas and glslang over DX9 HLSL, both are way beyond what there was previously.
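
To give a rough idea of what I mean (just a sketch, the uniform and varying names are mine, and glslang details may still change before the final spec): a basic per-pixel diffuse plus texture modulate is about this much code, where the equivalent hand-written ARB assembly would be noticeably longer.

[code]
// --- vertex shader ---
varying vec3 vNormal;
varying vec2 vTexCoord;

void main()
{
    vNormal   = gl_NormalMatrix * gl_Normal;  // normal into eye space
    vTexCoord = gl_MultiTexCoord0.xy;         // pass the texcoord through
    gl_Position = ftransform();               // fixed-function transform
}

// --- fragment shader ---
uniform sampler2D baseMap;  // base texture
uniform vec3 lightDir;      // normalized light direction, eye space

varying vec3 vNormal;
varying vec2 vTexCoord;

void main()
{
    float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);
    gl_FragColor  = texture2D(baseMap, vTexCoord) * diffuse;
}
[/code]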

I doubt you'll see something revolutionary in GL2, as you'll be checking hardware support anyway - even in DX you have to! And that means you'll have to make n+ paths for n vendors, and new options will be moved to extensions anyway, as that's one "+" of GL (you can use new extensions without waiting for a new GL version). If you want to use ONLY mature features, then there is ALMOST no problem (no bump, no light, no Doom 3); otherwise you'll have to say UPGRADE RIGHT NOW to the TNT/RAGE/GF(1/2/3/4) people.
And finally, if we are talking about GL2, I doubt even NV30/R350 will support it (take a look at the docs -> GL 1.4!). The only way out I can imagine is PURELY programmable cards, so that EVERY new extension could be implemented in drivers without needing new hardware (maybe a bit slower) - emulation on the video card, but not on the CPU.

I expect full support on R300/NV30 and so on, maybe even lower. Don't forget that there's a software fallback. The R300 and NV30 should be able to accelerate most normal shaders that don't do a lot of fancy stuff. Going to an HLSL is mainly not about being able to do more stuff, but about being more productive.

I haven't done a lot of coding with the program extensions, but I find that the assembly-like syntax is no big deal once you get used to it, and it's unlikely that you'll feel the productivity increase unless that's what you do all day.

It’s more about look and feel IMO.

The sad part is, all non-programmable GPUs need to be trashed. It's gonna take a long time for the old cards to disappear off the face of the earth.

Let’s burn them!

I expect full support on R300/NV30 and so on, maybe even lower. Don’t forget that there’s a software fallback.

If they fallback to software, then the cards aren’t really supporting it. Vertex programs can get away with running on the CPU without horrible performance penalties (unless you use VAR/VAO/VBO, in which case you lose), but fragment programs will not be able to do so.

The R300 and NV30 should be able to accelerate most normal shaders that don't do a lot of fancy stuff.

And finally, if we are talking about GL2, I doubt even NV30/R350 will support it (take a look at the docs -> GL 1.4!).

I imagine that these will be able to handle most of the per-vertex stuff that you could do with the HLSL (given memory/size constraints). Their per-fragment capacity may be limited, though. The NV30's got more instructions, but you may still run out of temporaries. If the R350's F-buffer allows for unlimited constants and registers, then it can handle pretty much anything you can throw at it.

Besides, the minimum glslang spec (in terms of registers and memory) is weaker than what nVidia and ATi already offer. You can't pass per-vertex colors under glslang; you have to use texture coordinate registers instead. You get higher precision, but most of the time you don't need it for just a per-vertex color, and it takes up a texture coordinate slot too.

Originally posted by V-man:
Let’s burn them!

Originally posted by V-man:
[b]I haven't done a lot of coding with the program extensions, but I find that the assembly-like syntax is no big deal once you get used to it, and it's unlikely that you'll feel the productivity increase unless that's what you do all day.

It’s more about look and feel IMO.[/b]

It really is that big a deal. Trust me, I've been doing HLSL stuff in both DX9 and GL2 at work for the last 5 weeks, and it really does wonders for productivity. Today I did three effects in RenderMonkey, with most of the time spent coding the shaders. Had I not used an HLSL, I can guarantee that I would not have finished even one of those today; I would probably be half done with the first. HLSLs are really that good.

Originally posted by Korval:
If they fallback to software, then the cards aren’t really supporting it. Vertex programs can get away with running on the CPU without horrible performance penalties (unless you use VAR/VAO/VBO, in which case you lose), but fragment programs will not be able to do so.

The point was that you don't need to wait until you have everything in hardware before the API is exposed and usable - you may just not have everything in hardware at first. Just having an HLSL, even without additional features, is well worth it on its own. As such, users of the R300 and NV30 may begin using glslang as soon as the spec is final and drivers are released. You may have to wait longer, though, if you're going to use something really fancy.

Originally posted by Korval:
You can't pass per-vertex colors under glslang; you have to use texture coordinate registers instead. You get higher precision, but most of the time you don't need it for just a per-vertex color, and it takes up a texture coordinate slot too.

Per-vertex colors were certainly supported the last time I checked.
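
Something like this is what I have in mind (a minimal sketch, assuming the built-ins stay as in the current spec draft): the color attribute comes in through gl_Color in the vertex shader, goes out through gl_FrontColor, and shows up interpolated as gl_Color in the fragment shader, so no texture coordinate slot is used.

[code]
// --- vertex shader ---
void main()
{
    gl_FrontColor = gl_Color;     // forward the per-vertex color
    gl_Position   = ftransform();
}

// --- fragment shader ---
void main()
{
    gl_FragColor = gl_Color;      // interpolated per-vertex color
}
[/code]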