OpenGL 2.0 - Pure!

Are we gonna get the “pure” OGL 2 or not? If not, we really should. :frowning:

I feel we really need it because the API is very big, it has multiple ways to achieve the same thing, and some stuff in the API is present just for backward compatibility. How will someone wishing to learn OpenGL find out the best way to, say, render triangles? Should it be display lists? Vertex arrays? Compiled vertex arrays? Or VBOs? Or maybe the über buffer? Or maybe a mix of these? I feel that established IHVs do not want to lose their advantage of providing full backward compatibility; it would be tough for any new IHV to implement full OpenGL drivers. Am I paranoid?
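For example, here's the same triangle two of those ways. Just a minimal sketch, assuming a GL 1.5 context where VBOs are core (on older drivers you'd need the ARB_vertex_buffer_object suffixed names):

```c
#include <GL/gl.h>

/* Path 1: display list wrapping immediate mode (the oldest way) */
void draw_with_display_list(void)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glEnd();
    glEndList();
    glCallList(list);
}

/* Path 2: vertex buffer object (the newest way) */
void draw_with_vbo(void)
{
    static const GLfloat verts[] = {
         0.0f,  1.0f, 0.0f,
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
    };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);  /* offset 0 into the bound VBO */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```

Both are perfectly legal GL, and nothing in the spec tells a newcomer which one the driver actually runs fast.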

A pure OpenGL 2.0 specification must be put forward that discourages the use of old features, with any backward compatibility layered on top of it. Apps written using the pure GL could be ported easily across platforms, even to cellphones, though some features might not be accelerated; even on such limited platforms it might soon be possible to support the whole pure OGL2 API. The API should also make vertex and fragment programs abstract, i.e. you should be able to pipe the output of one program into another program or into a buffer. This kind of scheme would easily accommodate new functionality like a topology processor or custom data flow between the various types of shader units. The old OpenGL state model will no longer reflect the underlying hardware with each new generation, though it is very handy for a lot of basic rendering; the old fixed-function stuff should be layered on top of the generalized programmable and memory-access API.
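Just to make the piping idea concrete, here's a purely hypothetical sketch; every name in it is invented, none of this is in any spec or proposal:

```c
/* HYPOTHETICAL pure-GL data-flow API: all identifiers invented */
GLuint vp  = glpCreateProgram(GLP_VERTEX_STAGE);    /* invented */
GLuint fp  = glpCreateProgram(GLP_FRAGMENT_STAGE);  /* invented */
GLuint buf = glpCreateBuffer(GLP_GENERIC_BUFFER);   /* invented */

glpConnect(vp, fp);   /* pipe vertex output into the fragment stage */
glpConnect(vp, buf);  /* or capture a stage's output in a buffer... */
glpConnect(buf, fp);  /* ...and feed it back in as custom data flow */
```

A new unit like a topology processor would then just be one more node you can connect, instead of a pile of new entry points.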

We need a clean and pure OpenGL API.

Unfortunately it seems that the ARB has discarded the idea of a ‘pure’ spec, and OpenGL 2.0 will be yet another incremental update of the API. At this point I doubt we will ever see a ‘pure’ version of the API.

Technically, the original proposal was to evolve the spec incrementally, add stuff, and eventually create a pure version. That could still be done; however, I have heard in a lot of places that they are not going to. It’s too bad, too, because a lot of the things proposed for the GL2 spec were really nice (it seemed more generalized than current GL).

I don’t think it would be easy to layer backwards compatibility on top of a pure GL2.0. You might be able to do some things, especially on PC chips, but I have trouble seeing how the flexibility inherent in GL2’s original design would go over on cellphones. For that case, that’s what OpenGL ES is for (which seems to have surprisingly high specs from what I’ve read).

Hm… GL2 for cellphones might be overkill right now.

Does anyone know why the ARB chose to drop the “GL2.0 pure” idea?

How can they NOT give us a simple, elegant API covering the flow of data between programs, data access/management, and the programming language?

Originally posted by krychek:
Does anyone know why the ARB chose to drop the “GL2.0 pure” idea?
This is only my opinion, but I believe that throwing away all the money the biggest players had spent building their OpenGL drivers, while at the same time lowering the barriers to entry into the OpenGL market, would not have been appreciated by those same players.

That’s the best I can come up with too… and it’s sad if that’s the reason.

Check out the following slides about DX Next and the new driver model: geometry shaders, a pre-tessellation vertex shader, a unified shader model with automatic load balancing (shouldn’t that be up to the hardware?), and somewhat general memory access.

http://download.microsoft.com/download/1…_WINHEC2004.ppt

http://download.microsoft.com/download/1…_WINHEC2004.ppt

According to the slides, Longhorn drivers need to implement only one interface for DirectX; the previous interfaces are supported by the OS, which layers them over the IHV driver. This is what I was suggesting. :stuck_out_tongue:
Slide extract (nested points indented):

• Core API implementation is very thin
• Layers built on top, and individually switchable
  • Debug helper/validation layer
  • Just-in-time switch-to-software rasterizer for debugging shaders
  • Hiding resource limits (multi-pass?)
  • Multicast to hardware and ref for debugging
  • Record/playback of frames for easy bug repro
• Layers can ship with SDK
• Continuous improvements based on developer feedback

These would be powerful tools, and very helpful for development, because the same tool works across different hardware. Developing with OpenGL just won’t be as easy unless the ARB works together and comes up with comparable common tools and layered backward compatibility that can be used across all (next-gen) hardware.
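For what it's worth, you can hand-roll a crude version of that validation layer over GL today with a glGetError wrapper. A minimal sketch, nowhere near a switchable driver-level layer, but it shows the idea:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Wrap a GL call and report any error with file/line in debug builds. */
#ifdef DEBUG
#define GL_CHECK(call) \
    do { \
        call; \
        GLenum gl_check_err = glGetError(); \
        if (gl_check_err != GL_NO_ERROR) \
            fprintf(stderr, "%s:%d: %s -> GL error 0x%04X\n", \
                    __FILE__, __LINE__, #call, gl_check_err); \
    } while (0)
#else
#define GL_CHECK(call) call  /* compiles away in release builds */
#endif

/* usage: GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo)); */
```

But that's per-app duct tape; the point of the DX layers is that everyone gets the same tool on every driver.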

It might be an interesting (and less contentious) experiment to try placing backward-compatibility APIs in “#ifndef GL_PURE” blocks for the 2.0 header. It shouldn’t affect binary compatibility, it would provide some real-world feedback on how usable a pure subset would be, and it could easily be sidestepped if developers found themselves needing the old APIs.
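Roughly like this in the header; GL_PURE is of course a made-up macro name here, and the exact split of entry points is just for illustration:

```c
/* gl2.h sketch: legacy entry points compiled out under GL_PURE */
#ifndef GL_PURE
/* immediate mode, display lists, fixed-function state, ... */
GLAPI void APIENTRY glBegin(GLenum mode);
GLAPI void APIENTRY glEnd(void);
GLAPI void APIENTRY glVertex3f(GLfloat x, GLfloat y, GLfloat z);
GLAPI void APIENTRY glNewList(GLuint list, GLenum mode);
#endif /* !GL_PURE */

/* the pure subset stays unconditional */
GLAPI void APIENTRY glDrawArrays(GLenum mode, GLint first, GLsizei count);
```

Define GL_PURE in your build and the compiler tells you exactly where you still depend on the legacy API.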

I haven’t given up on Pure yet. The latest meeting notes indicate that the ARB are already thinking about GL3 (for uberbuffers), by which time ISVs will have had much more experience with GLSlang and might be more receptive to a trimmed-down subset.

Yes, I hope they figure that one out too. It’s somewhat interesting that some of the DX features are moving towards what some of the GL features look like (DX is getting arbitrary buffers), and they are going high-level-only for the shading language.

They are also coming up with some stuff of their own: reusable stream output (sooo sweet), geometry processing, bitwise shader operations (for packing etc.), and more generic memory management. (Does anyone know if GL has considered ANY of these things?)

The only major difference I see at this point is that DX is a bit more of a straightforward interface in some cases (shaders, and fewer options for drawing), and they are going insane integrating tools to make things easier. Of course, I don’t really know anything about GL performance tools and such (any pointers to resources?).