Newsletter

Ok, the spec is out, the dust has settled. Today is Sunday; I’ve had a good coffee and read the paper.

Now the only thing missing is a new OpenGL Pipeline newsletter that sums up the recent changes to GL in human-readable form, explains the intentions, and lets us peek into the future of the industry’s foundation for high-performance graphics.

CatDog

Newsletter and a good cup of java can’t be beat.

In the meantime there’s a nice presentation here:
http://origin-developer.nvidia.com/object/nvision08-opengl.html

The ARB is still on vacation after their laborious two-year work on 3.0. Or at least after their six-month corrective dance to get something out. It’s tiring in crunch mode!

Expect a Newsletter when the ARB grows a pair.

The working group is busy with the next spec revision which has been underway for some weeks now.

The working group is busy with the next spec revision

Yay! An informal status update!

Still, during the year-long run of OpenGL 3.0 “Longs Peak”, the ARB managed to crank out four newsletters. Sure, a newsletter might take up someone’s valuable time, but the information contained within (ok, it’s really a PR thing) is gold.

In this day and age of parallel processing, I can’t see why someone can’t spin off a group to do it. How about the group that decided to keep calling what could have been 2.2, 3.0? Actually, scratch that… they maintained code compatibility but broke hardware compatibility, so bumping the major version number at least clarifies the situation a bit.

Speaking of parallel efforts (and harping on recent history), the ARB should have had the foresight to develop that version 2.2 (deprecations!) alongside a forward-looking API (LP). I would have whole-heartedly supported that. Just because LP was voted down last Oct-Dec-Jan doesn’t mean the effort couldn’t have continued resolving issues for re-assessment in the future. Now, I assume, a major API rework would have to start again from scratch.

Anyway, back on topic, the newsletter doesn’t have to be about upcoming unannounced API changes (or backpedaling). It can be about current 2.1 issues or neato things about 3.0 we’d be interested in. Hell, it could even publish community articles, or better, professional articles in the tone of whitepaper overviews (render/optimization/GP techniques).

I won’t bother to ask about the progress of the next spec revision, but how about progress on the newsletter? Is anyone actively working on that?

The thing is this.

We don’t need a newsletter. Newsletters were spiffy, with good formatting and everything, but they took time to write. What we need is information. That can just be a text file. It doesn’t even have to be formatted like the old ARB minutes; just an info dump on what’s going on.

All it needs to do is tell us what features the ARB is committed to delivering for the next spec. Though I suppose the big problem with that is that the ARB doesn’t seem to actually understand what the word “commitment” means, so it may not be terribly meaningful.

What we need is an API that’s not complete crap. Maybe we should all ask Microsoft to design a cross-platform API in the spirit of OpenGL. It wouldn’t be as good as we expected GL3 to be, but it would still be better than the current/future [censored] that we will get from the ARB.

Jan.

Maybe we should all ask Microsoft to design a cross-platform API in the spirit of OpenGL.

There is precious little about D3D that is intrinsically bound to a platform. The API itself is mostly quite platform independent.

There’s a little bit, like the window handling, but it would be possible to make a mostly-compatible Direct3D implementation on top of, say, Gallium3D.

You’d have to make different versions of functions like CreateDevice for different windowing systems, but then, that’s not really any different than OpenGL.
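The idea above can be sketched roughly. Every type and function name here is hypothetical, invented only to illustrate the point: only the device-creation entry points would be per-windowing-system, while everything behind them could share one platform-independent interface.

```cpp
#include <memory>
#include <string>

// Hypothetical sketch; none of these names come from real D3D or Gallium3D.
// A common, platform-independent device interface:
struct IDevice {
    virtual ~IDevice() = default;
    virtual std::string backendName() const = 0;
};

struct X11Device : IDevice {
    std::string backendName() const override { return "x11"; }
};

struct Win32Device : IDevice {
    std::string backendName() const override { return "win32"; }
};

// Per-windowing-system creation functions (the CreateDevice analogues);
// past this point the API would look the same everywhere.
std::unique_ptr<IDevice> CreateDeviceX11(/* Display*, Window, ... */) {
    return std::make_unique<X11Device>();
}
std::unique_ptr<IDevice> CreateDeviceWin32(/* HWND, ... */) {
    return std::make_unique<Win32Device>();
}
```

That split is no different in principle from GLX vs. WGL vs. AGL sitting in front of a common GL.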

All it needs to do is tell us what features the ARB is committed to delivering for the next spec.

I’m bored with all this ‘spec’ stuff. What’s entertaining in the OpenGL world is how the ARB stumbles around. Take a bunch of good game developers (judging by the reputation of their employers), a bunch of big product developers (CAD, modelers, whatever), and fancy IHV developers, put them in a room, and you get… 3.0. Assuming they had reasonable goals (an SE person might even dare to say ‘requirements’) and reasonable processes in place for dealing with issues as they arose (who knows), it’s almost comical that they couldn’t bumble their way into Longs Peak.

Just toss me a newsletter so I can avoid ranting on this forum while I save up for a GL3.0-capable video card. I might have been happier had I been able to actually use 3.0.

Can we get those 2006-2007 ARB meeting minutes for more of the story on why LP failed? If it went all the way back to the 3DLabs GL 2.0 proposal, I’d buy the book if someone wrote it!

Can we get those 2006-2007 ARB meeting minutes for more of the story on why LP failed?

Barthold made a post that provides the official ARB position as to why it failed. I seriously doubt that you’ll be getting a different answer from official channels.

And I would point out that last month, the C++ Standardization Committee voted out a Committee Draft of the C++0x Specification. They had more issues, and more contentious issues, to deal with than the ARB has ever dealt with in its 15-year history. They only meet as a group 2-3 times a year, with most communication taking place through e-mail and so forth. Their voting members are spread out across the entire planet, and number more than 40, from dozens of different, competing organizations.

And yet they were able to compromise and pull it off. Why? Because they made a commitment. They made a deliberate decision to succeed rather than fail; they said that failure was not an option and they would vote out a spec come hell or high water.

For the ARB, failure is an option. Possibly the only option.

Well, I see no reason not to hold out hope for a newsletter, though I’d agree that it need not be all that fancy - just the skinny on current events and such… something to quell the natives :wink:

Barthold made a post that provides the official ARB position as to why it failed. I seriously doubt that you’ll be getting a different answer from official channels.

I would love to get the unofficial, leaked, behind the scenes gossip! Like I said, it would make a great book. Either as a historical perspective of the “great” competitor to D3D as well as the grand beginnings of cross-platform 3D APIs, or as a case study in high competition specification organizations. People love reading about failure as much as successes, and with GL we’ve got both.

something to quell the natives

Well, they did say they were going to do it. Quote: We will get back on a regular publication schedule.

Is it too much to even ask for more info on this “promise”? I realize that with the ARB, what they show in presentations is not, in fact, the facts…

Why are you even bothering asking these questions? It’s completely pointless. They don’t give a flying fig what any of us wants. I don’t even understand why I keep revisiting this thread - I must be on the verge of a breakdown. Either that or I have too much time on my hands waiting for my GLSL shaders to build on the same hardware/drivers they’ve built on for the last 1000 times.

Either that or I have too much time on my hands waiting for my GLSL shaders to build on the same hardware/drivers they’ve built on for the last 1000 times.

I’m not trying to be an ass, but am honestly curious. Why don’t you use Cg? From what little I’ve seen, it compiles shaders into different assembly routines depending on the hardware you tell it to target. Why doesn’t everybody use Cg, for that matter? Is it still slow to bind its generated assembly to a shader object, or is it lacking in some other way?

You must compile your Cg shaders to GLSL anyway, which adds additional compilation time. Without that, you get stuck with Shader Model 2.0 on ATI. There is no other option, to my knowledge.

I’m working on a simple solution to the shader cache and to Cg generating SM2.0 code for ATi:
For nVidia hardware, make heavy use of semantics (C0, ATTR0, etc.). The game/engine keeps a cache-file database with keys being filename/compiler-args/file-date. The data is the cgc-compiled nV or ARB asm.
For ATi shaders that cgc manages to compile into ARB asm, also use the above method.
For ATi shaders that don’t compile into ARB asm, strip the semantics (fixing up the GLSL code to conform to the standard) but remember them. Set attribute indices according to the semantics; make varying names match according to the semantics; get the locations of uniforms along with the C0, C1, … locations they would have had as ARB asm (where the semantics would have placed them). Ultimately, no code is cached, only layout/indices.
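The cache lookup described above might be sketched like this. The structure and names are assumptions for illustration only: the key is filename/compiler-args/file-date, and the data is the cgc-compiled assembly text.

```cpp
#include <map>
#include <string>
#include <tuple>

// Hypothetical cache key, as described above: a hit requires the same
// source file, the same cgc arguments, and an unchanged file date.
struct CacheKey {
    std::string filename;
    std::string compilerArgs;
    long long fileDate;  // e.g. the file's mtime

    bool operator<(const CacheKey& o) const {
        return std::tie(filename, compilerArgs, fileDate)
             < std::tie(o.filename, o.compilerArgs, o.fileDate);
    }
};

// The cached data is the cgc-compiled nV or ARB assembly text.
using ShaderCache = std::map<CacheKey, std::string>;

// Returns the cached assembly, or nullptr meaning "run cgc and insert".
const std::string* findCompiled(const ShaderCache& cache, const CacheKey& key) {
    auto it = cache.find(key);
    return it == cache.end() ? nullptr : &it->second;
}
```

A changed file date or different compiler args simply misses the cache and falls through to a fresh cgc run.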

For all targets, send attributes and uniforms to your known index/offset locations, as if it were nV/ARB-asm code (upload all FS uniforms in bulk, then all VS uniforms in bulk). For nV and ARB asm, those uniforms are sent via a single glProgramLocalParameters4fvEXT() call. For real GLSL, the engine automatically uploads sections of the passed array/struct via glUniform*.
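The bulk-upload idea can be sketched as packing uniforms into one contiguous array of float4 registers at the offsets their C0, C1, … semantics imply. The names here are illustrative, not from a real engine, and the actual GL call appears only in a comment since it needs a live context:

```cpp
#include <cstring>
#include <vector>

// Illustrative sketch: a block of float4 "constant registers" mirroring
// the C0, C1, ... layout, filled on the CPU and uploaded in one call.
struct UniformBlock {
    std::vector<float> regs;  // 4 floats per register

    explicit UniformBlock(std::size_t count) : regs(count * 4, 0.0f) {}

    // Write a float4 into register Cn.
    void set(std::size_t n, const float v[4]) {
        std::memcpy(&regs[n * 4], v, 4 * sizeof(float));
    }

    const float* data() const { return regs.data(); }
    std::size_t registerCount() const { return regs.size() / 4; }
};

// In the ARB-asm path this whole block would then go up at once, e.g.:
//   glProgramLocalParameters4fvEXT(GL_FRAGMENT_PROGRAM_ARB, 0,
//                                  block.registerCount(), block.data());
// while the real-GLSL path would walk the same array and issue
// glUniform* calls per section instead.
```

The point is that the engine always writes to fixed register offsets; only the final upload step differs per target.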

This way, you keep nV and SM2.0 users happy with the fastest shader management (loading, binding, setting args) possible, dev times are much faster, and you only need to test your GLSL syntax occasionally on an ATi card.

Doesn’t sound like a simple solution to me, at all.

Why are you even bothering asking these questions?

For the same reason you keep finding yourself coming back to this forum: Entertainment!

Never hurts to ask and force someone on the ARB to respond. Better than having to wait ~8 months for a single word about progress (or the lack thereof).

With Cg, you can compile to and load from “object” code. This can’t be true object code until somebody supports an OpenGL precompiled binary. (It could be a vendor extension to start.)

We support this precompiled mode, though we don’t typically need to enable it. Other asset loading dwarfs shader compile times.