
View Full Version : OpenGL 2.0 news



KRONOS
03-24-2004, 05:35 AM
The NVIDIA developer site has been updated with material from GDC concerning OpenGL, including OpenGL 2.0 and GLSL.

It seems there will be no über-buffers in GL 2.0. :confused:

paulc
03-24-2004, 06:29 AM
There's also the Melody tool - LOD generation and normal-map creation all in one. Looks quite handy, although it only imports 3ds files at the moment.

Corrail
03-24-2004, 07:06 AM
Originally posted by KRONOS:
It seems there will be no über-buffers in GL 2.0. :confused:

Just read it... :(
This doesn't make sense. VBO and PBO are included but superbuffers are not?

nrg
03-24-2004, 08:31 AM
Originally posted by KRONOS:
The NVIDIA developer site has been updated with material from GDC concerning OpenGL, including OpenGL 2.0 and GLSL.

It seems there will be no über-buffers in GL 2.0. :confused:

Hmm... what does that mean, after all? GL2 will have PBOs (and VBOs), but not über-buffers... I thought PBOs were part of über-buffers?

Somebody please explain... cass, maybe? :)

Korval
03-24-2004, 10:08 AM
GL2 will have PBOs (and VBOs), but not über-buffers... I thought PBOs were part of über-buffers?

Effectively, PBO is a part of superbuffers, but only a small part.

Basically, what this means is that the ARB can't decide on big extensions (like superbuffers), so they're giving us little pieces of them instead. This only helps prove what has been evident for the past few years: the ARB isn't a very functional way to define graphics standards.

Won
03-26-2004, 07:10 AM
According to the 3Dlabs Shading Language seminar (good complete talk, but no real surprises), they'll be producing a spec at Siggraph 2004. Also, I expect to read some interesting stuff in the March ARB meeting. Apparently, they finished reviewing the minutes so it should be posted fairly soon.

Superbuffers (I don't think it's really called über-buffers anymore) has many pieces. It's not reasonable to expect the whole thing to be accepted at once with unanimous vendor adoption. We'll probably only get something that abstracts memory on the card, so you'll get things like render-to-vertex-array, render-to-slice-of-3D-texture, and all the offscreen rendering buffers (accessible without a context switch) you can shake a stick at. This doesn't mean it'll necessarily provide fast, asynchronous pixel transfer like PBO.

The ARB is a great way to adopt big extensions. Would you rather a single vendor came up with something? Or have OpenGL just follow Direct3D's lead? Superbuffers has taken a long time because it is complicated and represents a big "architectural churn." The ARB is slow and careful because it doesn't want to paint itself into a corner with some half-baked extension that has the burden of support.

-Won

nystep
03-26-2004, 08:50 AM
Don't worry... we will have über-buffers. They just meant über-buffers won't be integrated into the core API; they'll still be available as an ARB extension.

Korval
03-26-2004, 11:11 AM
The ARB is slow and careful because it doesn't want to paint itself into a corner with some half-baked extension that has the burden of support.

Better to try and fail than to sit around and do nothing. If the extension turns out to be wrong-headed, it can always be replaced by another.

D3D may not have all the features implemented in the best possible way. But it does have them.

I liked OpenGL's progress when NVIDIA would just create new extensions, but write the spec in such a way that others could implement them to expose the hardware appropriately. Look at point sprites or occlusion queries. Yes, the ARB eventually got their say on them, but ATI adopted them long beforehand. And the same goes for ATI's float-texture extension (for NV40 hardware that can actually handle it).

This is how extensions should be created and adopted. Some vendor exposes hardware functionality, but writes the spec with an eye to someone else implementing it (or, at least, without a proprietary nature to it). Then they see how well that works, in terms of the quality of the extension (encapsulating and abstracting the hardware, etc.). Eventually, a few others pick it up, and the ARB can promote the extension (perhaps with changes) to ARB status.

Something similar happened with VBO functionality. NVIDIA created VAR, but it wasn't that good an abstraction: a functional extension, but there were better ways to abstract the concept. ATI made VAO, which was a much nicer abstraction but required a whole new interface for binding vertex pointers. The ARB took the two, merged the good parts, and created VBO. In the meantime, people were inconvenienced by having two different APIs for two platforms, but the functionality was there, which was the most important thing.

Superbuffers should have followed this path, rather than languishing for years behind closed doors.

Won
03-26-2004, 11:51 AM
Korval, this is true. I certainly would have preferred some intermediate EXT or IHV solution for superbuffers, but isn't this what we've been getting? We've gotten pieces like the NV_RENDER_TO_TEXTURE_RECTANGLE extension.

It's not as if a single vendor was sitting on this capability for a long time and got hung up by ARB politics. It's a great deal of work to implement superbuffers, since it entails a huge refactoring of lots of OpenGL into separate storage and function axes. That can be a pretty difficult software burden, considering the hardware does not necessarily look like this right now. Do GPUs deal with frame buffers, textures, and vertex arrays uniformly? Probably not. If the driver author has to make them all appear to be the same thing, then that is a lot of work. And no IHV is going to do it if they think they will have to do it again.

Granted, in the long run something like superbuffers is likely to simplify things on both ends. However, the ARB is also concerned about making a smooth transition.

The ARB is much faster than it used to be. They're also rewriting their by-laws to be more IP-friendly, so I would expect it to become even more efficient. I manage to forgive them for any slowness on their part.

-Won

cass
03-26-2004, 11:57 AM
Just to clarify, these were results from a straw poll of ARB members.

PBO is a simple, logical extension of VBO, and it is available in the latest NVIDIA drivers (at least for registered developers...).

I don't know the rationale behind the poll results for über-buffers.

I actually expected the ARB meeting notes to be on opengl.org by now, but I guess the website troubles have delayed that. Once they're up, you will be able to read the details for yourself.

Thanks -
Cass

MZ
03-26-2004, 02:56 PM
Originally posted by KRONOS:
The NVIDIA developer site has been updated with material from GDC concerning OpenGL, including OpenGL 2.0 and GLSL.

So, GL 1.6 has been renamed to "GL 2.0"?? :eek:

From examining the material they are going to include, the proposed API revision looks just as incremental as 1.3, 1.4, and 1.5 were. The only things the original OpenGL 2.0 and the "OpenGL 2.0 Update" have in common are GLSL in the core and the "2.0" name. The original GL 2.0 was quite a lot more than that.

This leads me to the following guess:
The whole "GL 2.0 initiative" has essentially been killed, but since considerable hype occurred in the past, they decided to reuse the "2.0" moniker in order to prevent the failure message "OGL 2.0 is dead" from spreading through the world...

(In anticipation of the obvious response:)
No, I don't think it's true that "all the important stuff from the original OGL 2.0 is (or even will be) exposed in other forms anyway, so why care".

Korval
03-26-2004, 03:38 PM
The whole "GL 2.0 initiative" has essentially been killed, but since considerable hype occurred in the past, they decided to reuse the "2.0" moniker in order to prevent the failure message "OGL 2.0 is dead" from spreading through the world...

GL 2.0, in its original form, has effectively been dead for at least a good year, if not more. Just take a look at the ARB meeting minutes to see that it has, for all intents and purposes, gone nowhere.

Like I said earlier, the ARB has problems deciding on big stuff.


No, I don't think it's true that "all the important stuff from the original OGL 2.0 is (or even will be) exposed in other forms anyway, so why care".

I'm sorry you feel that way, but it is true. The most important of the 2.0 changes was glslang itself. Superbuffers and VBO represent the other functionality that was truly important. The rest would have been a nice API update, or other interesting features, but they are ultimately not necessary. Interesting, perhaps, but not truly vital.

Zengar
03-27-2004, 12:30 AM
Originally posted by cass:

PBO is a simple logical extension of VBO, and it
is available in the latest NVIDIA drivers.
(At least for registered developers...)
But you never posted the specs and actually, I can't see it in the GLSL drivers... :(

crystall
03-27-2004, 04:44 AM
Originally posted by Korval:
I'm sorry you feel that way, but it is true. The most important of the 2.0 changes was glslang itself. Superbuffers and VBO represent the other functionality that was truly important. The rest would have been a nice API update, or other interesting features, but they are ultimately not necessary. Interesting, perhaps, but not truly vital.

I considered the whole OpenGL 2.0 "pure" thing vital. As it is, the API is bloated, contains redundant and outdated functionality, and has lost most of its simplicity.

davepermen
03-27-2004, 07:52 PM
yeah. the pure thing was a great thought. set a new standard, not add on top of the old. that's what differs between gl and dx.

i would love to see a new opengl, having only what is needed today. that would mean roughly no state changes anymore, a lot of the old pixel-manipulation things dropped, only textured triangles with shaders in buffers left :D more or less.

some sort of glslim

cass
03-28-2004, 09:01 AM
Originally posted by Zengar:
But you never posted the specs and actually, I can't see it in the GLSL drivers... :(

Zengar,

Sorry - they'll be available Real Soon Now, but there's really not much more to them than the powerpoint slides state. They're just like VBO, except for pixel transfers.

Thanks -
Cass

V-man
03-28-2004, 06:13 PM
Originally posted by davepermen:
i would love to see a new opengl, having only what is needed today. that would mean roughly no state changes anymore, a lot of the old pixel-manipulation things dropped, only textured triangles with shaders in buffers left :D more or less.

some sort of glslim

#1 I'd say it is more important to have a good shading language, and finally GLSL is here.
GLSL should offer a lot of features, even if it's not in silicon.

#2 There are already a lot of interesting extensions that aren't core and aren't offered by most vendors. Some of them are EXT, others are ARB!

#3 If experimental extensions are to be released, they should be available everywhere. Call it ARBX if you wish.

davepermen
03-29-2004, 05:17 AM
i'm not talking about having all the new stuff, that's no big problem. i'm talking about dropping all the old stuff. how many pixel-storage thingies are still needed? how many people use the index buffer, and other such things?

if you draw the line at all you normally need with the features of the current gl (with all sorts of extensions), you could strip off a lot of old things that are deprecated now.

cass
03-29-2004, 11:31 AM
I'm very skeptical about dropping support for anything in OpenGL core.

Having a stable core has been one of the things that has kept OpenGL viable and attractive.

Keeping support for the old stuff costs less and less with each passing year in terms of hardware and driver support, and applications written a decade ago still "just work".

If we feel that OpenGL really needs a radical re-design, then it probably deserves a new name too. Calling it OpenGL 2.0 implies that OpenGL 1.x should die, and I don't think the case has been made for that at all.

crystall
03-30-2004, 12:14 PM
Originally posted by cass:
I'm very skeptical about dropping support for anything in OpenGL core.

Having a stable core has been one of the things that has kept OpenGL viable and attractive.

Keeping support for the old stuff costs less and less with each passing year in terms of hardware and driver support, and applications written a decade ago still "just work".

If we feel that OpenGL really needs a radical re-design, then it probably deserves a new name too. Calling it OpenGL 2.0 implies that OpenGL 1.x should die, and I don't think the case has been made for that at all.

That's what OpenGL 2.0 "pure" was for. The original proposals suggested a full-fledged version which supported both 1.x and 2.0 functionality, and a slimmed-down version with only 2.0 functionality. That was supposed to make the transition from legacy code to OpenGL 2.0 code smooth.

cass
03-30-2004, 12:36 PM
Originally posted by crystall:
That's what OpenGL 2.0 "pure" was for. The original proposals suggested a full-fledged version which supported both 1.x and 2.0 functionality, and a slimmed-down version with only 2.0 functionality. That was supposed to make the transition from legacy code to OpenGL 2.0 code smooth.

Crystall,

I understood what the intent of OpenGL 2.0 "pure" was. I just think it was a bad idea.

There is an immense volume of OpenGL code out there, and it's unreasonable to expect it to all get transitioned. There's no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.

I'm sure I sound like an old codger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.

Fully refactoring a solid, working API just doesn't help anything. What if people did this with the C stdlib?

Thanks -
Cass

Ostsol
03-30-2004, 06:20 PM
Originally posted by cass:
I'm sure I sound like an old codger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.

Fully refactoring a solid, working API just doesn't help anything. What if people did this with the C stdlib?

Thanks -
Cass

Just curious, but as an "old codger" and OpenGL veteran, what do you think of the current state of the API? Do you think that the current (1.5) spec is at all cluttered, or beginning to lose any of the elegance that OpenGL might have had?

Enbar
03-30-2004, 07:24 PM
From what I've seen, the complexity difference between OpenGL and D3D drivers is immense. Some of that advantage is because Microsoft handles a layer between the IHV drivers and the applications. That Microsoft layer doesn't account for everything, though, because Apple handles a similar layer on the Mac, and the Mac's OpenGL drivers, while simpler than the Windows OpenGL drivers, are still much more complex than the D3D drivers I've seen. I believe the main reason for all this added complexity is that OpenGL keeps all the legacy extensions around. I do see the advantage that Cass points out, but I believe that eventually all this old baggage will stifle progress. All this complexity also adds barriers to entry for new IHVs.

To explain how much more complicated OpenGL is than D3D, let's look at drawing a triangle. D3D has one way to do this. OpenGL has immediate mode, display lists, vertex arrays, compiled arrays, VBO, VAR, and VAO (did I miss any?).

Cass also states that supporting the old paths is fairly easy going forward. From my point of view (I might be wrong here), that doesn't seem to be the case. Either hardware needs to keep legacy transistors around to support the old fixed-function states (as the NV30 evidently did), or code needs to be added to the driver to handle the old stuff (read: fixed-function shaders), as the R300 evidently did. I can imagine programming new programmable hardware for things like all the tex_env_combine and related extensions would not be a trivial amount of work.

I understand why keeping backward compatibility is important. I think the best solution would be to create a new "pure" version of OpenGL; then a wrapper library could be added on top of that to support all the old functionality. This would certainly introduce a performance hit for legacy code, but I think it would lead to more stable core functionality. It would also lower the barrier to entry for new drivers. I've noticed that reviews on hardware sites of any card not from ATI or NVIDIA in the last two years have made note of how the OpenGL drivers were not as optimized and/or stable as the D3D drivers. Maybe creating a new "pure" core OpenGL spec and then just deriving the old functionality from it would fix this.

That said, I don't think it will happen. Such a big change just doesn't match the expectations I've formed of the ARB over the past few years. Such a change would also take away some of the advantage NVIDIA and ATI have in the market today.

cass
03-30-2004, 07:35 PM
Originally posted by Ostsol:
Just curious, but as an "old codger" and OpenGL veteran, what do you think of the current state of the API? Do you think that the current (1.5) spec is at all cluttered, or beginning to lose any of the elegance that OpenGL might have had?

Hi Ostsol,

That's a good question. Though I think you can apply some objective measures, this is often largely a question of personal preference.

People like Kurt Akeley are professional designers. They come up with designs that last a long time. I didn't realize how much effort goes into this until I worked with Kurt. His process is very methodical and very thorough. He comes up with a really nice extension API that follows the conventions of the existing API, is simple but elegant, and minimal but sufficient. Then he challenges one of his base assumptions and reworks the design to see how it affects things.

I don't have the patience to be a designer like Kurt, but it's no accident that OpenGL is as elegant as it is. Even by subjective assessment, most developers think OpenGL is a clean API.

The OpenGL core, that is.

Here's where I have my own personal bias. When adding functionality to the API as a vendor extension, or as an EXT, I think there's a relatively low bar for aesthetics. Get something in there, do the best you can, and figure out how you really should have done it (or IF you really should have done it).

When something reaches ARB level, there should be hand-wringing about the style of the API and whether it is consistent with OpenGL convention, whether it is elegant, simple, minimal, intuitive.

As a simple example, I was against the GLhandle model for objects defined by the new GLSL extensions. The reason is not that the handle approach was inherently bad, it was that it is inconsistent with every other form of object in OpenGL and that makes it less intuitive to OpenGL developers.

So to get back to your question, I am very happy with the OpenGL core as it stands today. It has some quirks, but nothing hideous. It's got some unpleasant extension APIs - some of which came from NVIDIA - that served (and still serve) their purpose but will never be part of the core.

The thing I most worry about with OpenGL is that in our rush to add new features to the core, we don't go through the learning process of having vendor extensions and we don't take the experience from those extensions and design core revisions that are as tight and consistent and elegant as they could be. This is a big danger, I think.

If you throw in a bunch of junk that was untested - by time, I mean - or an otherwise bad idea (that you didn't know was bad at the time), that reputation won't hold. OpenGL has been the "ANSI C" of real-time graphics APIs because it had a solid core.

I feel like OpenGL can move forward indefinitely without casting off any functionality from OpenGL 1.0.

Much of the future of graphics programming will revolve around language design. For better or worse, I (personally) think that belongs in the software world, where software vendors can be responsive to the needs of software developers.

OpenGL has this funny - and unfortunate - model now, where if functionality is not provided via an OpenGL implementation (driver), then it's not provided at all. There's no OpenGL equivalent to D3DX, and we should really ask ourselves as OpenGL developers, why not?

I actually think the lack of a *standard* suite of OpenGL software utility libraries that work with any compliant implementation of OpenGL is the biggest issue facing OpenGL developers today.

Well - there's lots of rambling. These are my personal thoughts - not those of NVIDIA. They're not intended to offend anyone. And of course, you're free to disagree. It's a free internet. :)

Thanks -
Cass

davepermen
03-30-2004, 07:35 PM
Originally posted by cass:
I understood what the intent of OpenGL 2.0 "pure" was. I just think it was a bad idea.

having an api which now presents tons of different legacy ways to do something is a good idea then? it's not about dropping support. it's about dropping old PROGRAMMING INTERFACES.



There is an immense volume of OpenGL code out there, and it's unreasonable to expect it to all get transitioned. There's no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.

and this old code would not be hurt at all. because you could still have the old opengl, and you'd still have an opengl.dll which can handle it all.



I'm sure I sound like an old codger, but this strategy has been very successful for OpenGL at a pretty minimal long-term cost.

sometimes a cleanup is a good thing. opengl is not perfect. i have to teach some people today that the array lock isn't useful anymore for performance.. people find it because of legacy apps and try to use that stuff.



Fully refactoring a solid, working API just doesn't help anything. What if people did this with the C stdlib?
uhm. anyone still uses that old piece of **** ?

c++ shows a great example: provide the old one for backward compatibility, and provide a new one, in a different namespace, in different headers, cleaned and split from the old. and that's what i'm talking about.

#include <gl2>

=> you can use glslang, vbo, pbo, rt, and all the fancy new stuff. but you drop the old 8-bit stuff, a lot of the old buffer stuff, simply the way the old worked (because the new stuff built on top of it is sometimes rather ugly, simply to fit in somehow).

and if you still need it, #include <gl/gl.h> and done.

there's currently nothing as slim, as straightforward, and as obvious in how to use as the newest dx. and this will continue to be so.

you're being shortsighted if you don't take the possibilities into account. yes, gl was great without big changes for over 10 years. but hw never evolved this fast before, and gl never had to change this much in how it should be coded for.

Corrail
03-30-2004, 10:12 PM
I see the problem with OpenGL right now, backward compatibility and so on. But I think that if OpenGL keeps moving in this direction it will run into problems. There are about 350 different extensions in the registry. A driver supports about 70 (that's an estimate; correct me if it's wrong). 70 extensions... and a lot of them affect each other. The main problem, I think, is that you have to take care of all these extensions and avoid collisions. If OpenGL/IHVs keep adding extensions to their drivers, I think the whole thing will get really confusing.

I also think that the best way to solve this is to design a new API and use it in parallel with OpenGL 1.x, as davepermen said. Defining a new core from scratch surely won't be done in a few weeks, but I think the ARB really should think about defining a new core.

cass
03-30-2004, 10:49 PM
Originally posted by Corrail:
I see the problem with OpenGL right now, backward compatibility and so on. But I think that if OpenGL keeps moving in this direction it will run into problems. There are about 350 different extensions in the registry. A driver supports about 70 (that's an estimate; correct me if it's wrong). 70 extensions... and a lot of them affect each other. The main problem, I think, is that you have to take care of all these extensions and avoid collisions. If OpenGL/IHVs keep adding extensions to their drivers, I think the whole thing will get really confusing.

I also think that the best way to solve this is to design a new API and use it in parallel with OpenGL 1.x, as davepermen said. Defining a new core from scratch surely won't be done in a few weeks, but I think the ARB really should think about defining a new core.

Confusing to whom? OpenGL is really not confusing. Perhaps you could argue that extension specs are, but that's more because they're written as mods to a base, or extensions to an extension. The ideas they describe are usually quite simple.

The extensions I dislike most are the ones that are difficult to understand all by themselves. It is very rare that an extension has some strange interaction with the OpenGL core that I don't understand. The texture_env_* extensions were kind of that way.

Anyway, it's important to state the problem that you're trying to solve. If it's to make drivers simpler, I'd argue that that's not a problem. Most of the numerous extensions supported are like GL_EXT_texture_object, which is an extension that a program written against OpenGL 1.0 over 10 years ago would use.

Current NVIDIA drivers export over 100 extensions, but many of these are extensions that were rolled into the core so supporting them is free.

I guess the question is, what do you object to? That there are features in the API that you don't want to use (but don't cost anything being there)? Or do you want to change the API fundamentally? My feeling is that if it's the latter, then you should define a new API. Don't bother calling it OpenGL, because it won't be OpenGL anymore. A new API should fully start from scratch - and there's value to that. After all, that's what Microsoft did with Direct3D.

Thanks -
Cass

davepermen
03-31-2004, 02:25 AM
That there are features in the API that you don't want to use (but don't cost anything being there)?

dunno, but having drivers that happen to take seconds to boot opengl the first time doesn't tell me it's free and doesn't cost.

there is tons of legacy in opengl now, and this should get cleaned up. there is no need to call it something different, because it would still be an open graphics library, and would still try to use the best of opengl itself. but there is a lot of 'dead code', stuff that no one uses that way anymore.

but it's fine. you like it that way.. i can accept it, i just won't really understand it. there is no reason. i don't want to see your home, dude.. "i haven't had to clean it up in the last 10 years. i don't need most of this anymore, but as long as it doesn't get in MY way, who cares?"

Corrail
03-31-2004, 02:30 AM
I think the problem won't be between extensions and the OpenGL core. I'm personally afraid of a driver which provides hundreds of extensions where I have to take care of the interactions between all of them.

davepermen
03-31-2004, 02:47 AM
and i don't like using devil to load images when all i want is a simple load() which loads jpg, tga, png, and bmp, and a save(), and instead i have to use an api which has several tens of functions and ways to call them, and has dlls that in the end fill about half a mb to one mb of my downloadable zipfile.

i prefer an api that evolves and, from time to time, defines a milestone, at which you get a full compilation that never gets touched again and keeps enough to support all legacy apps. and then you start fresh, with a clean api, and learn what was really used, and in what way, the time before, and only provide that, and restart extending.

krychek
03-31-2004, 11:14 AM
A redesigned, simpler, and more elegant API without the current heap of extensions would be easier for new programmers to adopt and much simpler to program against. And, as pointed out, it would also be easier for IHVs to develop good drivers for.

Shouldn't a new API be designed (if not now, at least after a few more generations)? Because it looks like the API will have to expose lots of functionality like primitive processors, and if the new ATI slides are to be believed, you can have a vertex shader both before and after a primitive shader. How can such things be elegantly exposed with the current API?
Support for the current features should come in the form of a wrapper, and use of the old API should be discouraged.

JD
03-31-2004, 12:06 PM
I think duplicated extensions should be taken out of IHVs' GL spec docs to reduce confusion. Make separate docs for the old, duplicated extensions.

pkaler
03-31-2004, 12:44 PM
Originally posted by davepermen:

but it's fine. you like it that way.. i can accept it, i just won't really understand it. there is no reason. i don't want to see your home, dude.. "i haven't had to clean it up in the last 10 years. i don't need most of this anymore, but as long as it doesn't get in MY way, who cares?"

There is no need to be condescending.

There are functions in the API that aren't used in game dev world that are used in the CAD world extensively. Just because it is not useful to you, doesn't mean others don't find the functionality useful.

I could go for a more formal deprecation protocol for extensions. It could be as simple as an ARB vote followed by moving the extension to a deprecated list in the extension registry.

MZ
03-31-2004, 01:01 PM
A new API should fully start from scratch - and there's value to that. After all, that's what Microsoft did with Direct3D.

It's not the first time I've seen that argument (scaring us with the D3D example), and it's not fair. You are comparing two extremes.

DirectX underwent an utterly insane, cyclical process of being rewritten from scratch nearly every year, involving a sort of regular reinventing of the wheel, which always ended up angular anyway.

OpenGL, on the other hand, has just been denied its FIRST major update after 12 years of existence - after it has managed to gain some very real fat.

That's the difference: doing a rewrite every year is drastically different from doing one every 10 years.

Sure, the stdlib is even older than OGL. However, as your company likes to remind us occasionally, the speed of progress in graphics hardware is extraordinary... :cool:

I'm relatively new to OpenGL so I don't know, but it seems to me that the IrisGL -> OpenGL transition could be considered a major API update (aside from the 'openness' and '.org' stuff). It must have been _much_ more radical than OpenGL 2.0, which redesigns only a small part of GL 1.x and retains full backward compatibility with it.

Another example: OpenGL ES. It really does drop features from the core (again, GL 2.0 does not), yet it retains the spirit of OpenGL, so there was no need to invent a new name for it.


As a simple example, I was against the GLhandle model for objects defined by the new GLSL extensions. The reason is not that the handle approach was inherently bad, it was that it is inconsistent with every other form of object in OpenGL and that makes it less intuitive to OpenGL developers.

Introducing GLhandle and the new object stuff made sense ONLY if you did it once for all types of objects in GL: shaders, textures, images, arrays, buffers, framebuffers, etc. OpenGL 2.0 was an excellent opportunity to provide completely new stuff (like GLSL) and at the same time rethink a few old things (which did need it), all in one consistent way.

Since ARB rejected it as a whole, I see introducing GLhandle as completely pointless.

And the last thing:

Regarding the recent "awakening" of the ARB and its noticing the "slow atrophy of OpenGL developers/apps to DX": I'd like to say that killing OpenGL 2.0 was flushing down the toilet the one thing that could have been an opportunity to regain some of the lost ground.

Korval
03-31-2004, 04:44 PM
There's a pretty clear distinction between what ought to be in a scene graph and what ought to be elsewhere. Clearly, object-level concepts live outside of OpenGL. And various other things.

This is all well, good, and reasonable. However, OpenGL has evolved into a state where a good portion of its functionality can be implemented in itself.

Take classic vertex arrays, for example. I'm pretty sure that most drivers, on glDraw* calls, simply copy out the data into VBO buffers (or their low-level equivalent), doing appropriate data conversion as needed for fast performance. Well, we can do most of that. As such, it shouldn't be in OpenGL. The same goes for immediate mode.

Display Lists don't fit this description because of performance loss. A layer on top of OpenGL that reads and stores GL commands isn't anywhere near as fast as a hardware-based solution defined by the driver.

Given these two examples, one would say that a clean OpenGL is one that includes only features that cannot be implemented in itself, or those where doing so would represent a significant performance burden upon the functionality. This calls for an explicit dual-layered architecture, D3DX-style. That is, you have the low-level API which consists of clean functionality as defined above. Then, you have a glu-esque set of functions that implement the rest of the stuff as needed.

The current problem is that there is no explicit division at present. This creates confusion on the developer's part as to which API to use.


Regarding recent "awakening" of ARB and noticing the "slow atrophy of OpenGL developers/apps to DX". I'd like to say that killing OpenGL 2.0 was flushing down the toilet the thing that could be an opportunity to regain some of the lost ground.

I wouldn't go that far. Having a clean API is nice, but a necessary first step toward any major improvement is either ending the reliance on OpenGL32.dll (and therefore having core functionality without getting function pointers) or having an ARB-approved and maintained extension loading library freely available for all purposes. Either of these would be a perfectly fine way to take a gigantic step toward an improved, cleaner OpenGL.

Even if the API never becomes clean (which, btw, only goes to prove that the ARB is ineffective at making big decisions and can only handle incremental changes), there is still plenty of work that can be done to make OpenGL more competitive with D3D. Having an ARB-approved and maintained SDK with tools and example code, for instance.

V-man
03-31-2004, 06:05 PM
Originally posted by davepermen:
dunno, but having drivers that happen to take seconds to boot opengl the first time don't tell me it's for free and doesn't cost.

That's not a good measure of driver complexity, but let's assume GL drivers take twice as long to write when written from scratch.

How come driver developers are not here complaining about this? How come I haven't seen anything about this issue in the ARB meeting notes?

HANDLE vs ID :
=====================
The spec gives some hints about design decisions - the case for app-assigned IDs versus GL-assigned handles for the shading spec. Read it if you wish.

It's not consistent with old GL, but ever since I started using GL, I've wondered why the decision was made to allow users to assign any ID we like to display lists and textures.

To sum up my point, newer extensions should be as good as possible.

Jon Leech (oddhack)
04-02-2004, 02:05 PM
Originally posted by V-man:
That's not a good measure of driver complexity, but let's assume gl drivers take twice as long to write when written from scratch.

How come driver developers are not here complaining about this? How come I haven't seen anything about this issue in the ARB meeting notes?

I intentionally keep the meeting notes at a pretty high level of abstraction. There is well-grounded concern that people will misinterpret offhand individual comments in the minutes as representing corporate positions (trade journals are particularly prone to this; if they can find any way to interpret some such comment as a "war" or "chaos" or other headline-worthy term, they will - which is why we started putting a disclaimer on the minutes recently). The alternative would be to pass the minutes through editing by every member's corporate legal department, by which time there would be nothing left, and it would take far longer than it does today to post them.

People in ARB discussions do often object to proposed new functionality on the grounds that it's hard to implement in their drivers. Unfortunately, they are rarely able to provide details, for concern it will reveal secrets about their drivers and/or hardware. So such comments have to be taken on faith.

HANDLE vs ID :
=====================
The spec gives some hints about design decisions - the case for app-assigned IDs versus GL-assigned handles for the shading spec. Read it if you wish.

It's not consistent with old GL, but ever since I started using GL, I've wondered why the decision was made to allow users to assign any ID we like to display lists and textures.

This is useful when capturing OpenGL command streams for replay in debuggers, among other things.

crystall
04-02-2004, 04:01 PM
Originally posted by Korval:
That is, you have the low-level API which consists of clean functionality as defined above. Then, you have a glu-esque set of functions that implement the rest of the stuff as needed.

I am doing something like that with my software implementation. I've got a minimal core which contains only fundamental functionality: there are no transformations, only one way to send primitives, etc. The rest of the GL is implemented in a wrapper on top of that. Naturally the core must be flexible and completely programmable in order to support the rest.

Stephen_H
04-02-2004, 07:06 PM
After reading through all the above posts... I have just one thing to add. I would like to see some formal way for extensions to be deprecated, but not removed from the API.

Most new programmers learning the API get caught up in the pile of extensions and duplicated functionality that all appear to do similar things, and aren't sure which to use. I'm not asking for functionality to be removed here, just a deprecation tag attached to various extensions/functionality, with a reference to the newer replacement functionality. Eventually, in maybe another 10 or so years, the functionality might get removed...

One of the areas where I think OpenGL is seriously dragging behind DX is the availability of tools/utilities. DX comes with a lot of tools to help get you up and writing your game really quickly, while in OpenGL you spend a lot of time writing them yourself or researching 3rd-party tools. Things like loading images, models, model exporters, GL extension libs, shader/effects frameworks... I really wish there were some ARB-sanctioned extension loader that everyone could use and standardize upon, for example.

barthold
04-02-2004, 07:09 PM
Hi Cass,


Originally posted by cass:

I understood what the intent of OpenGL 2.0 "pure" was. I just think it was a bad idea.

There is an immense volume of OpenGL code out there, and it's unreasonable to expect it to all get transitioned. There's no need to rewrite perfectly good code either, just because someone thinks that some new API style is better.
Maybe I simply misunderstood your statement Cass, but here's what the idea behind "Pure" OpenGL 2.0 is.

The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers that are willing to start from scratch, for whatever reason. One of the obvious uses for such an API is the embedded market, where hardware and software (driver) complexity is a prime concern, and where 3D graphics intense applications just now start to emerge. You only get the chance to do such a cleanup maybe once in a decade or so, if you're lucky.

At the same time as a "Pure" OpenGL 2.0, we would provide the full OpenGL 2.0 API in all its glory, fully backwards compatible, to serve the needs of all the other developers out there.

Barthold

Jon Leech (oddhack)
04-03-2004, 12:54 AM
Originally posted by barthold:
The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers that are willing to start from scratch, for whatever reason. One of the obvious uses for such an API is the embedded market...

Well, we have already created an API for the embedded market based on OpenGL 1.x that breaks backwards compatibility - OpenGL ES 1.0. So to the extent that Neil busted his behind attracting so much of the cell phone (and more recently, handheld and console) industry into Khronos, I'd say 3Dlabs has been extremely successful in doing this - even if the resulting API doesn't look very much like the "OpenGL 2.0" white papers you guys were circulating a few years ago :-)

Jon (still unclear on why 3Dlabs is so interested in the mobile market when you've never had products for it - my pet theory is that you hope to escape the ATI/NVIDIA juggernaut by turning into a downmarket IP core provider ala Imagination - but I suppose we'll find out someday :confused: )

Ob1
04-03-2004, 10:42 AM
I feel a bit out of place posting here, as I'm brand new to openGL and I'm surely not as experienced as other people who posted here.

I just wanted to point out that, in general terms, the main purpose when developing software should be to make it as effective and as simple as possible. While I feel there is no doubt about OpenGL's effectiveness, 12 years of incremental add-ons to the core have made it not as simple as it used to be.

I don't think it takes a software genius to understand that this trend, if still acceptable now, will most likely lead to bigger problems as time passes. That's why I really hope the API undergoes some radical redesign sooner or later, while leaving it to more competent people to think about how to do that.

Good work, pals!
Byez =8)!
Ob1

crystall
04-03-2004, 04:42 PM
Originally posted by barthold:
The idea is to provide an API based on OpenGL 1.x that is allowed to break backwards compatibility. This would allow for a thorough cleanup, if desired. This pure API would be aimed at developers that are willing to start from scratch, for whatever reason.

And it would have been very cool. I started learning OpenGL roughly two years ago. When I saw the first docs speaking of OpenGL 2.0, which claimed pure programmable functionality and uniform handling of all objects (textures, vertices, etc.), I thought that learning OpenGL 1.x wasn't worth it; better to wait for 2.0. It looked pretty much like a container which would hold all future features. Naturally things changed and I ended up with 1.x anyway, but consider other people approaching OpenGL for the first time. The 1.x spec is enormous, and there is plenty of redundant fixed functionality in it which is not useful, since you end up with vertex/fragment shaders anyway - so why bother? It just makes OpenGL harder to learn for beginners and harder to implement for new players in the gfx chip/driver market.

{T5K}DudeMiester
04-04-2004, 08:35 PM
Well you're talking a lot about people from my end of the spectrum (i.e. total n00bs), so I'll give my opinion.

First, I like OGL because it's elegant, and I want to learn it. However, I took one look at the extension registry, and I was like "Holy Sh1t!" I heard about OGL 2.0, and I decided that the only reasonable way to learn OGL was to wait for 2.0. Of course, since the apparent original focus of OGL 2.0 has been smothered in ARB politics, I'm left with a very large amount of info to sort through to figure out what's important and what isn't. D3D is looking very attractive now. I think I'll still try to learn OGL, but without a fresh new version 2.0 it's going to be a serious uphill battle.

I only hope that someone has the courage to put OGL on a treadmill and burn off some of the flab. There are no compatibility issues so long as you keep OGL 2.0 in a new driver (OpenGL2_32.dll and OpenGL2_64.dll) and header files (GL2.h), and leave the legacy crap where it is now. Old programs that use the bloated (hmm... makes Microsoft products look simple) OGL 1.x just continue using the older version. Of course, that version is fixed now, and driver writers at the IHVs can focus on the new driver, which, because it's simpler, they can optimise more.

And will someone do something about Microsoft so that they will support a new OGL version, so no one has to use the get-extension commands. Like sue them for abuse of their monopoly, because they are hindering OGL to support D3D. I'm sure you would be very successful.

JD
04-05-2004, 05:58 AM
I wouldn't worry too much about the extensions. After you spend time with them you'll find them much more sensible. You can trim out a lot of those EXT extensions so the list gets smaller. I find that both NV and ATI share lots of the same extensions, which makes things easier. So if I were you, I would look up the ARB extensions first, then the EXT ones for things not in ARB, and lastly the IHV ones. You'll find most functionality in ARB extensions, as many EXT ones have been moved there in new gfx drivers.

mw
04-05-2004, 03:22 PM
Well, since everyone seems to have an opinion on this, I'll give my two cents too :^). Why is it that some people seem to dislike the huge amount of extensions? Nobody forces one to use them ... I feel that there is nothing wrong with staying with well defined (and documented) core functionality and use extensions as needed, since these are a great way of adding vendor specific functionality without "polluting" the core.

{T5K}DudeMiester
04-05-2004, 10:52 PM
I never said I didn't like them, just that every now and then it's necessary to do a little spring cleaning, so to speak. This is definitely a good time to do that. I suppose it would be OK though if they separated the registry into useless crap (legacy stuff that exists only for backwards compatibility) and useful crap (extensions that developers should care about).

I think the extensions are an excellent way to allow graphics to grow at its own pace, rather than the pace dictated by, say, Microsoft.

Korval
04-05-2004, 11:47 PM
Why is it that some people seem to dislike the huge amount of extensions?

Well, let's see. Would you rather use:

1: A horribly cluttered API that has 5 ways of doing everything, and the performance therein is all implementation dependent so you can't entirely know how to use it correctly without guess-and-test, or

2: A clean API that has precisely one way of doing something.

The rational choice, barring prior experience with one or the other, or some lack of functionality in one of them, is 2.

That's the problem that OpenGL faces with D3D: D3D is the cleaner API. While this may not affect the old guard who grew up with OpenGL and have been using it for 3+ years, it does affect any neophytes who may want to use it. Nowadays, the only real thing, besides the absolute best possible performance (and that's with a good driver), that OpenGL offers over D3D is functioning on non-Windows machines. It's pretty sad when the competing API can only boast about being compatible across multiple platforms.

maxuser
04-06-2004, 08:00 AM
4 quick points...

1. First of all, if as a GL newbie you're starting in the extensions registry, you're starting in the wrong place. Try the Red Book.

2. The problem isn't that there are too many extensions, it's that I don't know which systems I can expect to support them. Although the specs are typically thorough, I've never had luck finding documentation on which chipsets/drivers support which extensions. It's great that the latest CATALYST drivers support SmoothVision 2.1, but I need specs that are developer friendly, not just marketing friendly. And it's certainly not giving away sensitive information to say whether that corresponds to ATI text fragment shaders, ARB fragment programs, or GLSL fragment shaders, since as a developer I'm going to find out anyway, and pretty early on in development. So I have to check as many systems as I can get my hands on in order to see which extensions are supported by which systems. As a developer I gladly take on the responsibility of determining whether an extension's support on a given chipset with a given driver is adequate for my app's purposes, but a list of nominally supported extensions for each chipset/driver should be widely available information that each developer shouldn't have to perform an archaeological dig to uncover. Apple, since they're in a convenient position to do so with their limited hardware/driver configurations, has such a list at http://developer.apple.com/opengl/extensions.html, but I'm not aware of a Windows or Linux list. And if those lists exist, the only copy shouldn't be on Joe Programmer's web site, but should be posted on opengl.org or by individual IHVs. And if opengl.org or IHVs do publish that info, please give me a link and I'll gladly shut up about it. :)
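On the code side, even the runtime check for an extension is easy to get subtly wrong. Here's a sketch of a careful check; the string would normally come from glGetString(GL_EXTENSIONS), and the helper name is my own:

```c
#include <string.h>

/* Check for an extension token in a space-separated extension string.
 * Note the careful token matching: a naive strstr() would report
 * GL_ARB_shadow as present when only GL_ARB_shadow_ambient is listed. */
int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;

    while ((p = strstr(p, name)) != NULL) {
        /* Accept only a match bounded by start-of-string or a space on
         * the left, and a space or end-of-string on the right. */
        int starts_ok = (p == extlist) || (p[-1] == ' ');
        int ends_ok   = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;
        p += len;   /* false hit: keep scanning */
    }
    return 0;
}
```

This only tells you an extension is nominally exported, of course - whether it's fast or well implemented on a given chipset is exactly the information that's missing.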

3. GL is analogous to ANSI C in that there is more standardized functionality than most people will ever use. This is a good thing. Although I can't remember the last time I used vsnprintf, I sleep well knowing that it's there. And although I don't plan to use it, I defend any developer's right to do so.

4. GL is analogous to ANSI C in that it's overdue for a backward-compatible yet forward-looking replacement. This is a bad thing. C got its replacement in C++ (complete with a new stdlib), but pure GL2 is no longer on the immediate drawing board. Along with a streamlined core, it would be nice to have higher levels of abstraction provided in a separate but standard library sanctioned by the ARB. I like abstraction not because I'm lazy, but because it provides more room for optimizations in the driver or on silicon, and leaves me to solve more interesting problems (OK, and sometimes because I'm lazy). For instance, without a mesh primitive, geometry optimizations are relegated to the world of triangles. There are countless abstractions for a collection of triangles, and it would be nice to have a standard set of those collections for a variety of purposes, just like STL does for C++ with vector, list, map, etc. Wouldn't it be great to have a higher-level library for triangle collections like Mesh, IndexedFaceSet, BSPTree and OctTree; or image loaders like JPG, PNG and TIFF? Sure these libraries already exist, but the great thing about standardization is that everyone speaks the same language when talking about these features; I can talk to any C++ programmer very succinctly about std::vector, but when it comes to something like a "triangle mesh" in OpenGL, there's a lot more explaining to do. (Is it implemented with a single VBO? What does the data structure look like? What operations does it support? Does it support LOD selection? How much of it can you typically see at once?) Since the universe of OpenGL apps is inherently *less* diverse than the universe of C/C++ apps (aside from non-C bindings for GL, the former universe is a subset of the latter), it's interesting that the more diverse universe has seen more standardization through ANSI and ISO committees than OpenGL through the ARB. 
But I'm not blaming the ARB, since it's just a collection of largely corporate interests. Developers, myself included, need to have more solidarity in pushing for standards, or even proposing them. The silent majority in the OpenGL community is the developers. There's a reason ATI, NVIDIA and 3DLabs spend inordinate sums of money wooing developers: we write the apps that make their products look good. We need to exercise our voice in active and constructive ways. Whether it's Carmack or Stroustrup, all revolutions can point to individuals that made a stand and pushed their movement forward, while others did little more than focus on their complacent discontent with the status quo. So which are you, a revolutionary or a malcontent?

V-man
04-06-2004, 08:01 AM
Originally posted by {T5K}DudeMiester:
I never said I didn't like them, just that every now and then it's necessary to do a little spring cleaning, so to speak. This is definitely a good time to do that. I suppose it would be OK though if they separated the registry into useless crap (legacy stuff that exists only for backwards compatibility) and useful crap (extensions that developers should care about).

I think the extensions are an excellent way to allow graphics to grow at its own pace, rather than the pace dictated by, say, Microsoft.

If you want to learn about what's what, then you should get advice, read books (the extension guide), read tutorials... not technical documents.


That's the problem that OpenGL faces with D3D. It is the cleaner API.

Not much cleaner. For example, they have version numbers for their vertex and pixel (fragment) shaders, but there are differences between the versions; there is no backwards compatibility.
I'm assuming it will be the same with HLSL.

Anyway, no one said GL is THE most perfect API. GL2 won't be either, if anyone was waiting for that.

Corrail
04-06-2004, 08:15 AM
...but I'm not aware of a Windows or Linux list.
Try http://www.delphi3d.net/hardware/index.php

maxuser
04-06-2004, 08:28 AM
Originally posted by Corrail:


...but I'm not aware of a Windows or Linux list.
Try http://www.delphi3d.net/hardware/index.php

Thanks for the link! I'll shut up now, but only about that point. :)

mw
04-06-2004, 10:50 AM
A horribly cluttered API that has 5 ways of doing everything, and the performance therein is all implementation dependent so you can't entirely know how to use it correctly without guess-and-test

I think the OpenGL core is very clear and well documented - and having several ways of (for example) sending vertex data, each one useful under certain circumstances (well, OK, I won't cry if compiled vertex arrays vanish), is an advantage. Maybe it could be cleaned up, but I don't think there are really that many outdated functions in it.

However, I won't blame OpenGL if several hardware vendors propose similar functionality with different extensions - and I don't think this behaviour would change with OpenGL 2.0, since everyone wants to sell their products and give them impressive-sounding names. After all, different hardware will expose different functionality and performance under different circumstances, will it not? If something is useful enough, the ARB will grab it at some point, and if I look only at the OpenGL core and ARB functionality, I think I have a nice and clean API.

pkaler
04-06-2004, 11:28 AM
Originally posted by maxuser:
4. GL is analogous to ANSI C in that it's overdue for a backward-compatible yet forward-looking replacement. This is a bad thing. C got its replacement in C++ (complete with a new stdlib)

BS, FUD, uninformed alert.

C99 is the most recent version of C, standardized in, you guessed it, 1999.

C++ is good. C++ is not a replacement for C. For some projects C is the better choice, for some projects C++ is the better choice.

Read Herb Sutter's article in this month's C/C++ Users Journal. He outlines the work currently being done on the next versions of both C and C++.

Let me quote Stroustrup as well. "Remember that proof by analogy is fraud."

My personal view is that the core should not be touched but extensions should be deprecated. The PDF for the OpenGL 1.5 spec is 333 pages. That's fine. That's manageable and concepts are pretty easy to find.

maxuser
04-06-2004, 12:38 PM
Experience is all about recognizing patterns and analogies with past experiences and applying those lessons to the present where applicable. OpenGL has accomplished this through the work of the ARB. OpenGL 1.0 was replete with individual pieces of state that the user must manage and coordinate with other related pieces of state. Every version of OpenGL since then has introduced increasingly object-oriented state management: texture objects, VBO's, GLSL program objects.

This is precisely the direct analogy I'm making with the change in mainstream programming from C to C++, just as OpenGL itself has evolved from entirely state-based to more object-based. And it's not a fraudulent analogy, it's a general programming trend. (Another aspect of the analogy is "thicker" standard library support, but I'll leave that for another time...) Sure, loads of people still prefer C over C++, and I support their right to do so; but I think it's generally a good thing that developers have largely migrated to a language that natively supports the notion of objects, i.e. well-defined collections of data with well-defined operations on those data.

That's all I want for OpenGL, a new or revised incarnation (largely what GL2 was intended to be) that still supports the old state-based ways, but has core facilities for thinking, designing and implementing graphics apps in an object-oriented way. The analogy with C/C++ is certainly broken in many ways and like any analogy it can be taken too far. For instance, GL has evolved a notion of objects where C never has, so GL is clearly on the right path whereas C has always been steadfast about not strongly coupling data with functionality at a fundamental level.

If anyone is fearful, uncertain or doubtful about GL's future as a result of my previous post, I urge you to read it again with an open mind. That certainly wasn't my intent.

P.S. PK, I can get you a great discount on a brand new abacus if you're interested in the latest version of an outdated tool. ;)

crystall
04-06-2004, 03:29 PM
Originally posted by maxuser:
This is precisely the direct analogy I'm making with the change in mainstream programming from C to C++, just as OpenGL itself has evolved from entirely state-based to more object-based.

Your analogy is correct IMHO, but you are missing an important point. A lot of OO languages have emerged after C++, like Java/Ruby/Objective-C/whatever. And among all of those, C++ is certainly no longer the most popular; yeah, it is still widely used, but it is getting dropped in favour of 'cleaner' languages. Why? Because it is awkward to use, because it is a hybrid between a functional language and an OO one, because keeping compatibility with C burdens it, etc. And that's exactly the problem OpenGL 1.x has: the way you use it has changed towards programmability and object orientation. So what's the reason for keeping the legacy inside the new spec? It just makes it awkward to learn and to implement, it bloats the implementation, and it requires profiling, because performance is not consistent across the different methods used to achieve the same results.

t0y
04-06-2004, 06:30 PM
Originally posted by maxuser:
1. First of all, if as a GL newbie you're starting in the extensions registry, you're starting in the wrong place. Try the Red Book.
You mean the original revision of the book, right? Just kidding. :D

Anyway, it'd be cool to have a brand new opengl core but I like the way GL is "cluttered". It's easy for newbies and it's easy to roll out quick tests.

The mess in GL is directly related to its ideal platform as a test-bed for new and specific functionality (IHV extensions). The ARB is too slow defining ARB extensions that provide general interfaces for different hardware. But that is changing.. :)

mw
04-06-2004, 07:00 PM
Well, if OpenGL did not expose a clean procedural API, it would be much harder to integrate into different programming languages (and operating systems), since the underlying class system would have to be exported as well (which proves, in my opinion, that there are cases where a procedural approach is superior to an object-oriented one - just look at ODE, which is a great example of object-oriented design, but whose exposed API is "only" procedural).

How this is achieved internally is not important - an OpenGL implementation can be (and most probably is) object oriented as hell; as long as the exposed API is procedural, it will be no problem to integrate it anywhere (and probably build another set of classes, however organized, on top of it). To debate whether OpenGL should be "object oriented" is therefore meaningless, IMHO.

I think it would be more useful if you said exactly what "useless legacy" you want removed from the core specification. 8-bit modes were mentioned somewhere (but why not keep them exposed in software - there will always be some embedded devices which may make use of them, and usable code has existed since the days of SGI's OpenGL 1.1 software implementation). Compiled vertex arrays are something I could live without (but keeping the spec won't hurt anyone). OpenGL lighting was mentioned (I'd hate to see that go - it's really easy to set up simple scenarios with it, which is quite important for people starting to learn OpenGL). Evaluators are (for example) useful for building terrains with a small memory footprint. What is it you'd like to see vanish forever?

marco_dup1
04-07-2004, 12:53 PM
Originally posted by crystall:
Your analogy is correct IMHO but you are missing an important point. A lot of OO languages have emerged after C++ like Java/Ruby/Objective-C/whatever. And among all of those C++ is certainly not anymore the most popular, yeah it is still widely used, but it is getting dropped in favour of 'cleaner' languages. Why? Because it is awkward to use, because it is a hybrid between a functional language and an OO one, because keeping compatibility with C burdens it, etc...

C++ is functional? You mean procedural. Is this bad? OO is only a design pattern. In Python, for example, all things are objects, but there are also functions (which are objects too). I don't believe there is only one way to do all things.

Dirk
04-07-2004, 12:57 PM
Originally posted by V-man:
HANDLE vs ID :
=====================
The spec gives some hints about design dicisions. The case for app assigned ID versus GL assigned handle for the shading spec. Read it if you wish.

It's not consistent with old GL, but ever since I started using GL, I wondered why the dicision was made to allow users to assign any ID we like to display lists and textures.
Because you can only create OpenGL object IDs in the thread that has the active OpenGL context. If you are in a multi-threaded environment, only one thread can have the context bound, and that's usually not the thread that the application works in. For these situations it's much nicer to be able to manage the ID handling totally separately from OpenGL in your own manager. Been there, doing that, pulling my hair out about how to support the new model (not really, but it is more painful than the old way).

Probably some other systems have unique object IDs anyway and could use those as OpenGL IDs, or ID numbers could contain meaning or be derivable from other information about the object. Not a big deal, but a lot of OpenGL has been designed to make the developer's life easier. And it shows: OpenGL (core) really is very elegant, simple and flexible.
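A minimal sketch of the kind of app-side ID manager I mean, assuming C11 atomics (the function name is made up, and in a real app you'd reserve ranges rather than single IDs):

```c
#include <stdatomic.h>

/* Because GL lets the application pick its own texture/display-list IDs,
 * any thread can reserve one without owning the GL context. The GL thread
 * later passes the reserved ID straight to glBindTexture()/glNewList(). */
static atomic_uint next_id = 1;   /* 0 left free as an "invalid" marker */

unsigned int reserve_id(void)
{
    /* atomic_fetch_add returns the previous value, so each caller gets
     * a unique ID even when called from several worker threads at once. */
    return atomic_fetch_add(&next_id, 1);
}
```

With GL-assigned handles (the GLSL object model), this trick is gone: only the context-owning thread can mint new names.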

That also goes for many things that apparently a bunch of people consider useless, like immediate mode. The great thing about immediate mode is that it allows you to totally freely define where your data is coming from. Whatever format you store in the non-graphics part of your software, you can directly feed it to OpenGL. You can also do crazy stuff like multi-indexing, which is not possible without copying in VertexArrays and related variants.
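To show what I mean by copying: GL vertex arrays allow only ONE index per vertex, so multi-indexed data (separate position and normal indices, as in many model file formats) must first be flattened into duplicated arrays. A sketch of that expansion step, with illustrative names and layout:

```c
/* Expand multi-indexed geometry into flat single-index arrays.
 * pos_idx[i]/nrm_idx[i] select the position and normal for corner i;
 * the output arrays duplicate shared data so one index suffices. */
void expand_multi_indexed(const float pos[][3], const float nrm[][3],
                          const int pos_idx[], const int nrm_idx[],
                          int count, float out_pos[][3], float out_nrm[][3])
{
    for (int i = 0; i < count; i++) {
        for (int c = 0; c < 3; c++) {
            out_pos[i][c] = pos[pos_idx[i]][c];   /* duplicated as needed */
            out_nrm[i][c] = nrm[nrm_idx[i]][c];
        }
    }
}
```

Immediate mode sidesteps this entirely: you just call glNormal3fv(nrm[nrm_idx[i]]) and glVertex3fv(pos[pos_idx[i]]) per corner, straight from your own data structures.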

Yes, you can implement all the different geometry specification methods using each other. But going from immediate mode to VBO can be a lot of work for the application programmer, of whom we (hopefully) have many more than driver developers. So from an economic point of view it's better for the driver developers to spend time on this. I'm aware that driver developer time is a scarce resource, but if cass says supporting (at least most of) the old extensions is not a big deal, who am I to argue? Not to mention backwards compatibility, and software that was written 10 years ago and still works...

The success of OpenGL is partly due to its flexibility and simplicity, which let people from whatever background use it easily for whatever purpose. Let's not throw that away.

Just my .02$

Dirk

Dirk
04-07-2004, 01:13 PM
Originally posted by PK:
I could go for a more formal deprecation protocol for extensions. It could be as simple as an ARB vote followed by moving the extension to a deprecated list in the extension registry. I'd support that idea. Have the ARB vote, move it out of the official list, and add a comment at the top of the specification saying that it's deprecated and replaced by extension XXX. Check out the RFC system; that seems to work quite nicely.

It might also be an idea to check with the ARB members which extensions nobody supports anymore. Those could safely be removed.

A little spring cleanup wouldn't hurt, just be gentle. :)

Dirk

V-man
04-07-2004, 06:15 PM
Originally posted by dirk:
Because you can only create OpenGL object ids in the thread that has the active OpenGL context. If you are in a multi-threaded environment, only one thread can have the context bound, and that's usually not the thread that the application works in. For these situations it's much nicer to be able to manage the id handling totally separate from OpenGL in your own manager. Been there, doing that, pulling my hair about how to support the new model (not really, but it is more painful than the old way). Sorry, I didn't understand why, in a multithreaded environment, you would want to manage the IDs yourself.


OpenGL (core) really is very elegant, simple and flexible. Agreed, but I'm not the one who wants to cut anything out. You should direct that at daveperman.

I, for one, want every possible concept out there to be converted to an extension and then put into the core, starting with GLSL (after a couple of revisions).


You can also do crazy stuff like multi-indexing, which is not possible without copying in VertexArrays and related variants. Tell me what your GPU is and I will tell you it can't handle multi-indexing.

All GPUs are hard-wired for certain vertex formats. Think of the driver as your guide dog.
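In practice that means the app (or the driver, behind your back) has to flatten the data first. A minimal sketch of that copy step (the helper name is made up): every (position index, normal index) pair gets expanded into one flat vertex in the single-indexed layout that vertex arrays expect. A real converter would also deduplicate identical pairs instead of blindly copying.

```c
/* Flatten a multi-indexed mesh into the single-indexed, fixed-format
   layout that vertex arrays / glDrawElements require: one copy per
   corner. This is the copying cost multi-indexing avoids in
   immediate mode. */
void flatten(float (*pos)[3], float (*nrm)[3],
             int *pi, int *ni, int count,
             float (*out)[6])
{
    for (int i = 0; i < count; ++i) {
        out[i][0] = pos[pi[i]][0];
        out[i][1] = pos[pi[i]][1];
        out[i][2] = pos[pi[i]][2];
        out[i][3] = nrm[ni[i]][0];
        out[i][4] = nrm[ni[i]][1];
        out[i][5] = nrm[ni[i]][2];
    }
}
```

After flattening, the out array can be handed to glVertexPointer/glNormalPointer (or uploaded into a VBO) with a trivial 0..count-1 index list.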