A quote from Mark J. Kilgard, Principal System Software Engineer, nVidia



marcClintDion
07-02-2013, 09:41 PM
... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.

The truth is that modern OpenGL implementations are highly tuned at processing immediate mode; there are many simple situations where immediate mode is more convenient and less overhead than configuring and using vertex arrays with buffer objects.

http://www.slideshare.net/Mark_Kilgard/using-vertex-bufferobjectswell

//==================================================================================

This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the '90s, and he has been doing so on behalf of one of the two biggest names in gaming hardware. Given what that man said as a representative of nVidia, I feel that it is safe to assume that there will be no functionality dropped from OpenGL anytime in the near future, so far as nVidia hardware and drivers are concerned. Now, I may be going out on a limb here by saying this, but I suspect that AMD/ATI will be holding fast to this as well. My logic is as follows: despite the lack of a public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.
One may also conclude from this that many other features of the OpenGL API that people are now afraid to use will not be going anywhere, nor should they.

Issues will arise for people who want to branch into mobile development if they are not careful with certain aspects of the more "dated" API functions, but it's also very likely that much of what is currently available in the broad OpenGL API will become increasingly available on handhelds, as their GPUs and driver models become more sophisticated. On desktops, OpenGL is almost fully backwards compatible going back 15 years. This is true for ATI and nVidia, and even Intel has been following this model as best it can with its little purse-sized computers.

thokra
07-03-2013, 01:29 AM
I feel that it is safe to assume that there will be no functionality dropped from OpenGL anytime in the near future

It has already been dropped from core OpenGL. The only reason that the old stuff is still around is GL_ARB_compatibility. This allows vendors to still support all the features in a single driver.


My logic is as follows: despite the lack of a public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.

The actual safe bet is to simply use recent features. Although there's no indication as to when or whether major vendors will finally drop the old stuff, I personally hope they're eventually going to. On Linux, Intel does not expose GL_ARB_compatibility when you create a GL 3.1 context - IIRC it's the same for Apple and Mac OS X.
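
To illustrate, explicitly requesting such a core context is a couple of lines with whatever windowing toolkit you use - a rough sketch with GLFW 3 (the toolkit choice and version numbers here are just mine for the example):

#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Ask for a core profile - no GL_ARB_compatibility, no immediate mode. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow *win = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    /* ... render loop ... */

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}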


will become increasingly available on handhelds

GLES2 has no fixed-function pipeline and no immediate mode. Neither does GLES3.

marcClintDion
07-03-2013, 06:18 AM
Personally, I'd like to see all of the indie developers that don't have huge budgets or teams have all the tools they need to succeed with their visions. More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently offer to people. The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU. It takes years for the GPU manufacturers to have things working consistently with regard to one another. It's been this way since the beginning of GPUs.

I can't imagine why someone would want features stripped out of an API just because this person does not care to use them. Personally I'm going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer • nVidia - Mark J. Kilgard. Those functions belong in there, and what MJK says on this matter is likely the position of the entire development team at nVidia. I can't imagine why people would push to remove these features when one of the lead programmers for a long-standing major GPU manufacturer is saying that this should not happen.
Wait, yes I can imagine why...

I know of one person who is pushing for this, and I also know that this same person is selling a book on the newer APIs. He likely views the tons of free open source material that's available to everyone as his direct competitor. He wants people to pay him instead of being able to learn for free.

thokra
07-03-2013, 07:14 AM
More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently offer to people.

No, more functions mean a bloated specification, more effort to implement that specification, and more effort to test and optimize it.


The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU.

And who is responsible for making implementations behave as they should? That's right: guys like MJK. Driver quality has always been an OpenGL problem - and you know why new features don't get well tested? In part, because people like you, who are relentlessly clinging to legacy stuff, just won't implement stuff using new features and thus cannot find bugs to report. Of course, even if you report bugs, there's no guarantee they will be fixed, especially if you're a hobbyist or indie developer. And even my company, which has good relations with both NVidia and AMD and is at the top of its field, probably won't have a shot - then again, we're relying heavily on legacy code. A displeasing, but currently unchangeable, fact.


It takes years for the GPU manufacturers to have things working consistently with regard to one another. It's been this way since the beginning of GPUs.

Again, they can only fix bugs that are found. A conformance test suite would help, but the ARB, and subsequently NVidia and AMD, don't dedicate time and money to developing such a thing. Anyway, an implementation is a black box for an OpenGL developer, and we rely on vendors to do their job right. If they always did, your argument couldn't even be brought up.


I can't imagine why someone would want features stripped out of an API just because this person does not care to use them.

Well, how about this for a reason: the ARB itself decided to do so - so the decision was carried by NVidia and AMD. We've had this topic many times here on the forums, and the conclusion has always been that legacy code paths might be as fast as, or faster than, certain core GL code paths - simply because the legacy stuff has been developed for decades and has reached a highly optimized state. That doesn't mean it's good.


Personally I'm going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer

In daily business, API refactoring, deprecation and removal are common - at least in a code base that has existed for over a decade. The reason is simple: decisions made at the time of conception might have made sense back then. If those reasons don't exist anymore and using the API is cumbersome, not future-proof, or prone to errors, it should be revamped.

Immediate mode is such an example IMHO. It was OK at the beginning but could have been replaced with vertex arrays and VBOs fairly early. In general, sending a bunch of vertex attributes over the bus every time you render something is simply idiotic - especially if we're talking about complex models of which there might be hundreds or thousands per frame. BTW, MJK says the same thing. The example he uses, a rectangle (or more generally "rendering primitives with just a few vertices"), is only valid for simple prototyping IMHO. Probably every rendering engine out there encapsulates state and logic for simple primitives in appropriate data structures, so uploading a unit quad to a VBO at application start-up isn't really a problem once you've written the code.
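
To put that in perspective, the start-up code in question is a handful of lines - a rough sketch (names are made up; error checking and VAO setup omitted):

/* One-time setup: a unit quad as a triangle strip in a buffer object. */
static const GLfloat quadVerts[] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};

GLuint quadVbo;
glGenBuffers(1, &quadVbo);
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVerts), quadVerts, GL_STATIC_DRAW);

/* Per frame: bind and draw - no per-vertex calls travelling over the bus. */
glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);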

The convenience argument is simply not good enough to defend immediate mode. The debugging argument is kinda ok - however, if you know what you're doing and have some experience, VBOs and VAOs are not hard to debug either. The performance argument is simply not valid. You cannot compare code paths which have not been tweaked and tested roughly the same amount.

EDIT: BTW, nowadays, when scenes consist of hundreds of thousands to millions of polygons per frame, wanting to keep immediate mode around, among other things, for a few simple primitives is simply hilarious. The same goes for fixed-function lighting - if someone's too incompetent to come up with a simple Gouraud shader when desired, they should just give up on OpenGL altogether.
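
For the record, such a Gouraud shader really is only a handful of lines - a rough sketch of the vertex stage, written out here as a C string for glShaderSource() (the uniform and attribute names are invented):

static const char *gouraudVS =
    "uniform mat4 u_mvp;\n"
    "uniform mat3 u_normalMatrix;\n"
    "uniform vec3 u_lightDir;       /* normalized, eye space */\n"
    "uniform vec3 u_diffuse;\n"
    "attribute vec3 a_position;\n"
    "attribute vec3 a_normal;\n"
    "varying vec3 v_color;\n"
    "void main() {\n"
    "    vec3 n = normalize(u_normalMatrix * a_normal);\n"
    "    v_color = u_diffuse * max(dot(n, u_lightDir), 0.0);\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";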

marcClintDion
07-03-2013, 08:56 PM
In part, because people like you, who are relentlessly clinging to legacy stuff.

I don't use legacy code. I was defending the rights of people who do. It's interesting that right after you made this demeaning comment about me, you went on to say that the people you work for are still using legacy code. So basically you are saying that I have the same point of view as the people who you are subservient to.

One could logically assume that since you don't want legacy code being used where you work and since it still is being used there, that you do not have the sway or power that you would have us believe. You said:

And even my company, which has good relations with both NVidia and AMD...

You were stretching things a bit here. It is not your company; they hired you. And just like all the other people that you have demeaned and belittled, such as the nVidia and ATI engineers, those people that gave you a job actually do know better than you, despite your belief to the contrary.

thokra
07-04-2013, 03:26 AM
you made this demeaning comment about me

It wasn't meant to be demeaning. Granted, it might have sounded a little harsh. Still, that doesn't make it untrue.


I don't use legacy code. I was defending the rights of people who do.

But the people who do shouldn't do so anymore, if possible. If they're constrained by other business-related factors, I'm the last person to accuse them of not going the extra mile. Still, a core driver and a legacy driver would be a much better solution IMHO. People who still need, or - even if that doesn't make any sense to me - want to rely on legacy GL could do so with a legacy driver. However, I'm perfectly aware that it would put quite a burden on the guys at NVIDIA and AMD. Thinking about it, if all vendors actually agreed on simply dropping support starting on day X, what are people going to do? Rewrite their whole rendering code in Direct3D because they're pissed off about the disappearance of legacy support? I don't think so. Breaking backwards compat is never a fun thing, but sometimes I think it's necessary to take software to a higher level.


So basically you are saying that I have the same point of view as the people who you are subservient to. One could logically assume that since you don't want legacy code being used where you work and since it still is being used there, that you do not have the sway or power that you would have us believe.

Nope, any technical novelty is pretty much embraced in principle around here. It's just the lack of time, or the fear of alienating customers, that keeps us from implementing them. Still, if I were asked to take a stand, I would take the same position as above - even to the people I'm subservient to. The fact is, I know that there's no room for improving this at the moment, and yes, I'm in no position to demand we rewrite our whole rasterization code. However, that doesn't mean it wouldn't be a good idea.


and just like all the other people that you have demeaned and belittled such as the nVidia and ATI engineers, those people that gave you a job actually do know better than you despite your belief to the contrary.

Now that's just funny. Where did I demean any engineer? Does disagreeing equal demeaning now? I didn't state anything that isn't true - if you disagree, feel free to have at me. And the people that hired me gave me a job in part because I have a pretty solid understanding of modern OpenGL. And the fact that I call it "my company" is simply a testament to my liking my job and identifying with my employer - not because I believe it is actually my company. How could anyone misunderstand that?

mhagain
07-04-2013, 03:44 PM
It's important to remember that Kilgard is viewing the world through NVIDIA-coloured glasses; of course NVIDIA would like it best if everyone wrote programs that worked best on their hardware (and the fact that they have a highly tuned immediate mode implementation going back to the last century means that this is one area they would support the continued use of) but that's not necessarily in the best interests of either developers or consumers. His technical credentials may well be impeccable, but he's still biased.

For a fairly good idea of the kind of driver complexities that can arise from continued support of immediate mode, have a read of this: http://web.cecs.pdx.edu/~idr/publications/ddc2006-opengl_immediate_mode.pdf. The actual direct topic of the document is not really relevant, and some of the points it raises (particularly wrt glMapBuffer, "array state containers" and instancing) are now outdated, but it does a great job of describing many of the weird corner cases and abuses that drivers need to deal with (and must support flawlessly because the GL spec requires it) when implementing immediate mode. Never mind consistent support of GL 4.x features; GL 1.x on its own is a nightmare landscape of bear-traps and unexploded landmines.

This is exactly the problem that deprecation/removal sets out to solve. I don't know about you, but I'd certainly prefer if driver writers spent their time working on the stuff that really matters for a modern application rather than dealing with this kind of rubbish.

It's incredibly disingenuous to imply that drawing without immediate mode falls into the category of "new, experimental API functions" - vertex arrays have been available in core OpenGL since version 1.1 (1997!) and as an extension before that; VBOs have been in core since 1.5 (2003!) and likewise as an extension before that. I hope you didn't mean to give that implication, but it sure read that way.

Regarding dropping of other (or even all) legacy functionality, this is one of those theoretical objections that frequently come up but that don't even exist in the real world. I can say that with extreme confidence because a working real-world model of discarding legacy functionality (and even of completely throwing out the old API and redesigning a new one from scratch) already exists, is used, is popular and is proven to work in the field. It's called Direct3D (the fact that Direct3D drivers can be orders of magnitude more stable than OpenGL drivers just supports the assertion that this approach works). Seriously - this is a solved problem - you're just wasting your own time raising it as an objection.

Everybody wants OpenGL to evolve and improve, but clinging on to old rubbish that hinders that evolution and improvement is not the way to go about it. OpenGL didn't lose the API war through shenanigans; it lost it through design short-sightedness, through letting the hardware get ahead of the core API's capabilities, through squabbling in committees, through not giving developers features that they needed, and through fragmentation due to multiple vendor-specific extensions for doing the same thing. Wanting to retain legacy features at the expense of moving things forward (especially at a time when its position could be strengthened again as Microsoft seem to be completely losing the plot with the two most recent evolutions of D3D) isn't being helpful.

marcClintDion
07-04-2013, 08:53 PM
EDIT: "sarcasm has been removed, now this post is mostly gone"

This "war" has been almost completely one-sided and it has been Microsoft behaving this way. Well, Microsoft and people in forums bickering about which API is better.

OpenGL is not going anywhere and it's only getting better as everyone's drivers become more robust and diverse.

Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc.... Just to name a few big hitters who are all firmly in the scene.

marcClintDion
07-04-2013, 10:57 PM
OpenGL ES 2.0 marked the first step towards the tomb of OpenGL if such a thing is even possible.

I am not concerned about myself as a developer; I have no problem at all with VBOs, VAOs, index buffers, FBOs, or even building a unique shader for every model I build. My run-time only uses these things.
I am not at all concerned about having to put together a custom matrix math library, I've already done that.

I am concerned about all the aspiring indie-developers that show up here hoping to have a quick easy start-up system that will bring them years ahead of the game. There are a lot of kids out there and even stay-at-home dads who want to do this, and now they have an extra 2-3 years of learning curve to deal with. This goes against the entire spirit of the free-to-learn open source community which has libraries upon libraries of free research material available for download.

Being able to access fixed-function in GLSL shaders is what makes OpenGL the best choice for beginners. To say otherwise is absurd. This feature puts shader programming into the hands of children - some of the more gifted ones, anyway. Most of the people that show up here will not be able to do all these things on their own if OpenGL is gutted any further.

For people who are just starting out, having not only to learn to use a matrix math library but also to implement that library by hand is absurd. Combine this with having to learn all the various subtleties of passing variables and matrices to the GPU, and things can soon become overwhelming for people who are new to all this.

There are a lot of people in this world who want to make a game, and many of these indie games will enrich our lives. As more and more features are stripped from the OpenGL API, this dream will fall further out of reach for many people. Not only will we have lost variety, which is something that nurtures and encourages creativity, but we will also have lost the treasure trove of information that has been amassed over the past 15 years.

I am concerned about all the people who are not going to have 5 years of doing things the easy way before they have to jump into the deep end and learn to do it all themselves in a more efficient manner.

If you want an API that is constantly being gutted and rebuilt, then go over to DirectX. Microsoft will love it; you'll be helping them blackball people into buying the latest operating system they are selling.

So far as "modern" OpenGL goes. It is incredibly absurd to pack a cross-hairs model, which only consists of two or three line segments into a VBO with indices when immediate mode can be set up to do this almost instantly and with much overhead. The set-up alone makes this impractical. The run-time code overhead makes doing this impracticable.

Also, in the case of drawing bounding box outlines for visualizing and diagnosing collision detection algorithms, immediate mode is the only proper choice. Anything else would be bug-prone, overdone fluff.

Mark J. Kilgard was right when he said that people need to be educated on the proper uses of these easy-to-use and powerful tools; people should not be told that they are wrong to use them.

This is like telling someone that they are a backwards hillbilly because they happen to own a hand saw. Electric saws may be the choice for most situations, but they are not necessarily the best choice for every situation.

GClements
07-05-2013, 01:39 AM
I am concerned about all the aspiring indie-developers that show up here hoping to have a quick easy start-up system that will bring them years ahead of the game.

If you want a quick easy start-up system, you use an off-the-shelf engine such as Unreal, Unity, etc.



There are a lot of kids out there and even stay-at-home dads who want to do this, and now they have an extra 2-3 years of learning curve to deal with.

More like an extra 2-3 weeks. If it takes you longer than that to transition from compatibility to core, you aren't ready to be making commercial games (note: "independent" doesn't mean "amateur").



Accessing fixed-function variables in GLSL shaders is what makes OpenGL the best choice for beginners. To say otherwise is absurd.

Being able to access fixed-function state from a shader just means that, on the client side, you use a separate function for each variable (glLightfv(), glMaterialfv(), glMatrixMode() and friends) rather than glUniform() for everything.

The main advantage of the compatibility variables is the ability to have most of your client-side code work the same way with or without shaders, so it's easier to write code which uses shaders where available but still works with 1.x.
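
As a rough illustration of that difference (the uniform name and variables here are invented):

/* Compatibility path: the shader uses gl_ModelViewProjectionMatrix, which is
   fed implicitly by the matrix stack. */
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelview);

/* Core path: the shader declares "uniform mat4 u_mvp;" and the client feeds
   it explicitly. */
glUniformMatrix4fv(glGetUniformLocation(prog, "u_mvp"), 1, GL_FALSE, mvp);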



For people who are just starting out, having not only to learn to use a matrix math library but also to implement that library by hand is absurd. Combine this with having to learn all the various subtleties of passing variables and matrices

If you understand matrix math and can program, you can already implement most of the library and it shouldn't take more than a few hours (the actual matrices for rotation, scaling, perspective etc are all given in the online manual pages). The only bit that's even slightly complex is matrix inversion, which is only required for the normal matrix (assuming that your modelview matrix isn't orthogonal) and gluUnProject().
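
For instance, the perspective matrix straight off the gluPerspective() manual page is about a dozen lines - a sketch, written column-major as OpenGL expects:

#include <math.h>
#include <string.h>

/* Equivalent of gluPerspective(), written into a column-major float[16].
   fovy is in degrees. */
void perspective(float m[16], float fovy, float aspect, float zNear, float zFar)
{
    float f = 1.0f / tanf(fovy * 3.14159265f / 360.0f);   /* cot(fovy / 2) */
    memset(m, 0, 16 * sizeof(float));
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}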

One of the main reasons why the matrix functions were deprecated was that they were largely pointless. For most real programs (i.e. not red-book examples), you need the matrices client-side for e.g. collision (using the OpenGL functions then extracting the matrices with glGetDoublev(GL_MODELVIEW_MATRIX) etc is somewhere between bad and horrendous in terms of performance). So you end up writing your own matrix functions anyhow (and not necessarily the same ones which OpenGL uses, e.g. rotation matrices are more likely to be generated from quaternions than from either Euler angles or axis-and-angle).

Someone who can't do this much for themselves is going to spend up to a week posting on the forums effectively asking for personalised tuition on everything from animation to parsing file formats to physics before realising that making a game is a few orders of magnitude more complex than they bargained for, and promptly giving up.

marcClintDion
07-05-2013, 02:08 AM
Personally, I'm a firm believer that if we encourage new people to use the most painless, easiest-to-use-and-configure API features available, they will have the best chance of succeeding, especially if they don't need fancy-pants methods.

The exception is unavoidable situations like the OpenGL ES 2.0 spec, which severed the incredibly valuable link between the old and the new. Those devices support both, but those machines have such tight memory and bandwidth constraints that this unfortunate situation is understandable and necessary.
Yes, it is absolutely absurd to expect mobile devices to have 100 MB+ driver packages that would allow for a robust and fully featured OpenGL environment... for now!

In the future this is likely going to happen, and they will soon all be able to give the Dynamic Duo of desktop machines a run for their money in the sheer diversity of API combinations available.

Please don't get me wrong here! Bang, exclamation point. In no way, shape, or form should any present or future development be done on 'immediate mode'. That would be like carpenters investing time and money into developing new types of screws. There would be no point.

The nVidia documents that strongly indicate that no legacy features will be removed also state that no future consideration will be given to them. They have already been optimized and tested and refined. They will take up nobody else's time. There is no concern that research time and effort are being wasted on that stuff. They are not reinventing the wheel over and over again with legacy code. That legacy code ran on machines that are nothing but pocket watches compared to machines today. There is no way that stuff is running slower now than it did on crappy old machines.

Legacy has not caught up in sheer raw, large-scale performance - so what? Why use a car in a situation where a bicycle will do? Legacy will not go anywhere unless it specifically conflicts with modern functions. Why should it? It takes me 5 minutes to download the absurdly large driver packages.

If you want to eliminate bugs from your code, the best way to do it is to always test your software on as many GPUs as possible, as often as possible. I keep two old junk laptops on hand for this very purpose. One has a very old, and very weak, x1150 mobile Radeon that my friend's girlfriend spilled juice on. The drivers for that machine are buggy to begin with. I know that if it runs on that machine then it will run on almost any computer that is newer than 5 years old.

I also keep a mobile Intel GPU machine that was made right when Intel finally caught up with ATI/nVidia shader model 2.0/3.0 hardware. I know that if it works on this then it will work on everything, without any fear of bugs creeping in on someone else's computer.

I also test using the WINE emulator on a regular basis when I've been making substantial changes using features that behave differently under different circumstances.

Using newer features such as floating point textures is a poop-field so far as truly cross-platform goes. What works beautifully on some cards cannot be implemented properly on another made by someone else.

To resolve this issue, we all have to work together to build a cheat-sheet that has input from hundreds of people and has all been tested on hundreds of machine configurations. Either that, or we have to wait for the various manufacturers to play catch-up with one another. We can wait for them to do it, or we can do it ourselves. Then people will be able to use it safely and reliably, and the GPU manufacturers will have clear, documented evidence that will help them eliminate bugs in their drivers and circuits.

OpenGL 4.0+ currently has a big problem, since the five-stage shader pipeline and all the accompanying features have a lot of kinks to be worked out. Most people that come here do not have huge teams of software designers and testers at their disposal to get this working consistently across many platforms.
We have to do this ourselves, or there will once again be huge repositories of bug-ridden code several years from now.

It would be better if we made listings of people's trial-and-error efforts, under many different circumstances.

For instance, "Which newer extensions are giving people problems, and on which machines?"
and also, "Which of the newer extensions are known to work consistently on all available platforms?"

If we create a repository of these basic facts then we will have helped to resolve this issue of GPU manufacturers being unwilling to share results with one another.

Bug testing something as seemingly simple as a floating point texture is out of the reach of most people since there are no reliable threads where people have listed what is working for them and what is not. The old standard of people listing their machine specs has all but disappeared. That's probably a good thing since I used to think these forums were nothing but horrible hardware and API flame-wars. This has changed a lot over the years and places like this have become more civilized and productive... usually.

After not bothering with forums for half a decade, I can honestly say that they are now doing people some good. Arrogant, demeaning attitudes have toned down a lot. This is a good thing. Learning this stuff will do a person no good if they are also learning to act like a condescending, arrogant know-it-all jerk at the same time.

What we do not have is a comprehensive list of what works and what does not. We need a bug list, a cheat-sheet that spans back over 15 years of people's experience with OpenGL, the new and the old - all of it updated, current, and with nearly bug-free solutions, because it's our combined experience with these various API changes and additions over the years.

We have to do this ourselves. If I were to share some known bugs and pitfalls for beginners - ones none of them would likely find written anywhere unless they already knew what to look for - it would be something like the following.

It would start a "benefits and bugs" wiki page that looks something like the following.

//-==================================================================
Section-> (Fixed-function tied to GLSL)
//------------------------------------------------------------------------------------------------------------------------
PRO: Very easy to use. A beginner could write and configure animation and lighting shaders very easily.
//------------------------------------------------------------------------------------------------------------------------
CON: (Not yet possible on mobile devices!)
//------------------------------------------------------------------------------------------------------------------------
CON: Most source code for this style was written when only a few driver models were automatically performing casting. There is a lot of very interesting source code from that era, but even to this day, all of those horrible casting errors are still tripping up a lot of the new players in the GPU arena. Drivers on mobile devices can't handle that much crap being thrown at them.
//------------------------------------------------------------------------------------------------------------------------
PRO: The Dynamic Duo are champs at fixing these problems with very little overhead.
//------------------------------------------------------------------------------------------------------------------------
Specific bug listing A_1: gl_FrontMaterial.shininess will not yield consistent results across many GPUs. Apparently, different manufacturers are using different procedures behind the scenes for this one. Of all the common ones, it's the only one I've found to be unreliable across hardware and platforms.
//------------------------------------------------------------------------------------------------------------------------
CON: Any time a fixed-function material or lighting variable is used in GLSL, all possible fixed-function material and lighting parameters available to GLSL will be added to the compiled shader, even if they are not all being used. This is not as bad as it sounds; this method was working reasonably fast back in the Radeon 9800/nVidia FX days, so it's not going to slow down something made in the last few years. It's not practical for mobile devices yet, but it will not trip up a modern machine in the least - not so far as most people go in their first several years. There are bigger fish to fry.
//------------------------------------------------------------------------------------------------------------------------
PRO: Learning to pass in your own uniform variables is an easy enough optimization to consider once you've finally gotten your feet wet and you are not feeling so overwhelmed.

//-=================================================================================================================

If the manufacturers will not give us a modern, up-to-date, fully backwards-compatible bug repository, then we will have to do it ourselves.

Once a format for a wiki like this is decided upon, these bits of accumulated 'wisdom' can be posted to a wiki so people browsing the free repositories are not constantly stepping in poop.

Just think back to when you had various successes and problems with all the different methods over the years. Give the good and the bad: how did using a feature make your life easier as a beginner? How did you overcome the pitfalls you ran into? Things like this, built into a wiki, will make OpenGL a force that will knock people's socks off - but only if it includes everything OpenGL from beginning to end, 15 years of backwards compatibility that should become rock-solid stable and easy to learn.

If I see any of it, I'll cut and paste it to a file that will eventually turn into a posting that can be attached to all the links of legacy open source code. Then that stuff won't be broken anymore: people will have instructions on how to fix it all when they use it.

All those older pages should not be thrown away, they could be made productive and useful again, with little effort on our parts.
//-----------------------------------------------------------------------------------------------------------------------------------------------
Tip: For a lot of GPUs, even today, 1 and 1.0 are not the same thing in GLSL! Don't rely on the driver to fix that for you. Certainly don't expect the shader to always work if you ignore this just because it happens to work on your computer.

GClements
07-05-2013, 03:06 AM
Yes, it is absolutely absurd to expect mobile devices to have 100 MB+ driver packages that would allow for a robust and fully featured OpenGL environment... for now!
Capacity isn't the issue here. The issue is that mobile devices don't have any legacy code to run, so there's no need for them to support the legacy API.



The nVidia documents that strongly indicate that no legacy features will be removed also state that no future consideration will be given to them.

Herein lies the problem. While the legacy API as a whole may not disappear, you are increasingly going to be faced with an either-or choice. You can use the legacy API or you can use the new features, but not both.

Apple have already said that new features will not be added to the compatibility profile context, so if you want to use them you need to use a core profile context, where immediate mode and the fixed function pipeline don't exist.

Another issue is that interactions between newer features and the legacy API are frequently resolved in ways which make use of both impractical. E.g. newer features may have state which can't be pushed and popped, so frameworks which rely upon objects' render methods restoring the state preclude the use of newer features. Newer features may not be usable inside display lists (e.g. instanced rendering is prohibited).

marcClintDion
07-05-2013, 09:23 AM
Capacity isn't the issue here. The issue is that mobile devices don't have any legacy code to run, so there's no need for them to support the legacy API.

Every single last mobile device supports OpenGL 1.1. Almost every device in use has this general capability; people have even been using this style of code to program homebrew apps for the Wii for many years now. Not everybody is interested in making games that attempt to look like big-budget CG movies. This misinformation is exactly like the flaming arguments that have people fighting over whether DirectX is better than OpenGL or whether AMD is better than nVidia. People fight over whether C# is better than C++.

Now this nonsense has turned into an OpenGL vs. OpenGL bicker fest. There is room for all of it and to teach people otherwise is wrong.

mhagain
07-05-2013, 04:16 PM
Every single last mobile device supports OpenGL 1.1.

No, they don't. Mobile devices use OpenGL ES, which is a completely different API that just happens to be modelled on OpenGL. ES 1.0 and 1.1 are not comparable to OpenGL 1.1, no matter what you may wish to believe.

You made the same mistake earlier on when you said:


Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc....

This is misinformation. This is FALSE. This is damaging misinformation that is as bad as the infamous Wolfire blog post because all it does is serve to perpetuate lies. You're undermining your own argument because you're showing yourself up as someone who's prepared to use lies as a prop for that argument. If you want your position on this to be taken seriously you really need to stop doing that now.

GClements
07-05-2013, 07:36 PM
Mobile devices use OpenGL ES, which is a completely different API that just happens to be modelled on OpenGL.
Also:

No version of OpenGL ES supports glBegin/glEnd; vertex arrays must be used.
OpenGL ES doesn't have a compatibility profile. 1.x has a fixed-function pipeline, 2.x has shaders, and never the twain shall meet (i.e. you can't mix the two).

WebGL is based upon OpenGL ES 2.x, i.e. it has no fixed-function pipeline. Additionally, it doesn't support client-side arrays (anything which can use a buffer object in desktop or embedded OpenGL must use a buffer object in WebGL).

marcClintDion
07-05-2013, 09:47 PM
You're undermining your own argument because you're showing yourself up as someone who's prepared to use lies as a prop for that argument.

This is the kind of person that you have proven yourself to be. You have just lied to everyone here by intentionally misquoting what I said.

You said

OpenGL didn't lose the API war through shenanigans; it lost it through design short-sightedness

Then I responded


OpenGL is not going anywhere and it's only getting better as everyone's drivers become more robust and diverse.

Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc.... Just to name a few big hitters who are all firmly in the scene.

You said that OpenGL lost a "war" and I responded that OpenGL is not going anywhere.
I did not say anything about those devices supporting immediate mode.


Now I'm going to address your lying by quoting you


You're undermining your own argument because you're showing yourself up as someone who's prepared to use lies as a prop for that argument. If you want your position on this to be taken seriously you really need to stop doing that now.

thokra
07-08-2013, 04:59 AM
This is the kind of person that you have proven yourself to be. You have just lied to everyone here by intentionally misquoting what I said.

You clearly stated that


Every single last mobile device supports OpenGL 1.1

which is not just partially incorrect - it is outright wrong. Yet you don't make any effort to show that you're not sure about it, like prefacing it with something like "I think". Well, I think everyone would take that as an intentional statement. An intentional statement which conveys false information is, by definition, a lie. Thanks for playing.


I did not say anything about those devices supporting immediate mode.

Do you have any transitive thinking capabilities? Ok, let's work this out. You said OpenGL, which you consider to be used on the following platforms


Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc....

isn't going anywhere. Looking at the list, I see at least 3 mobile platforms in there - don't know what subset of OpenGL the famous "etc." uses but what the hell.

Now, since you mentioned at least 3 mobile platforms and you stated that


Every single last mobile device supports OpenGL 1.1

So you imply that immediate mode is supported on those devices, due to the fact that it's an OpenGL 1.1 core feature. This directly contradicts you saying


I did not say anything about those devices supporting immediate mode.


Now this nonsense has turned into an OpenGL vs. OpenGL bicker fest. There is room for all of it and to teach people otherwise is wrong.

No, it hasn't. Everyone participating here is unsupportive of your claims. Normally no one should give a brownie about such idle ramblings, but I for one regularly get pissed off at people trying to head back to the early-to-mid '90s. All you get is more code and higher complexity, which, to quote Bjarne Stroustrup, simply leads to "more bugs". If you really love legacy OpenGL, you have to let it go, man.

bobvodka
07-09-2013, 08:44 AM
Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc.... Just to name a few big hitters who are all firmly in the scene.

The PS3 does NOT use OpenGL.
There is an OpenGL|ES wrapper but no one in their right mind touches it because it's too slow.

I do wish people would stop repeating this incorrectly...

kRogue
07-11-2013, 01:25 PM
At the risk of... something, I am going to add my 2 cents.

Here goes, on the deprecation stuff:

Removal of immediate mode is in general a good thing; the only losers are those getting started with OpenGL... the removal just makes getting started with OpenGL more of a pain now.
Removal of the fixed-function pipeline is borderline. The basic thinking here is that, chances are, a vendor's implementation of the fixed-function pipeline will be better than doing it via shaders. Additionally, for a large number of situations the fixed-function pipeline gets the job done. On the other hand, the interface for multi-texturing in the fixed-function pipeline is quite awful, so I am glad to see it gone; in addition, all the state associated with the fixed-function pipeline was a pain too.
Removal of QUAD primitive types was, IMO, a mistake. One can simulate it with a geometry shader, but that seems awfully silly. As a side note, OpenGL ES3 does NOT have geometry shaders.
Removal of client side arrays (i.e. non-buffer object backed index and vertex buffers) was IMO a mistake as well. The use case of vertex and index data changing from frame to frame got ickier. With client side arrays, the GL implementation did the right thing for you. Now we play, as Dark Photon has called it, buffer object Ouija board for streaming the data. As a side note, OpenGL ES2 and ES3 DO allow for client side arrays.
glLineWidth... this was weird. It was marked as deprecated but it is not removed. I am grateful it was not removed, but well......
Removal of display lists was not done correctly in my opinion. My reasoning is simple. With a display list, one could define and queue up rendering sequences easily. In an ideal world, the GL implementation did magicks to optimize it. That was great functionality. What did suck was how those commands (display lists) were defined; what would be *great* is a replacement for them.

mhagain
07-11-2013, 02:43 PM
In general I agree with kRogue's comments here, but do differ on a few points.

Immediate mode served a purpose other than just as an easy entry-point for learning. It was great for rapid prototyping and proof-of-concept work. Even in a scenario where the more traditional immediate mode is removed, I would have liked to have seen glBegin/glArrayElement/glEnd retained (immediate-mode indexing - yayyy!)

FFP has been emulated by the driver via shaders on almost all hardware for close on 10 years. Some elements of FFP, however, remained useful (I'm thinking primarily of fog here), and removing them just made the exponential shader explosion even worse. ARB assembly programs had the right idea.

Quads should have stayed.

Client-side arrays should have stayed (but via glVertexAttribPointer with the old glEnableClientState removed). I'm detecting a bit of "D3D envy" in the removal of these (and speaking of "D3D envy" it's tragicomic that in 2013 OpenGL still doesn't have a dynamic buffer object updating API as good as D3D's - no more driver hints! - give us explicitly requested behaviour that you're guaranteed to get instead, please - D3D has had this problem solved since 1999, for crying out loud, there's no need for modern GL to be so over-cautious about it).

Some hardware doesn't accelerate lines > 1 wide. Deprecating but not removing seems both a concession to that hardware and a cop-out for hardware that does.

Display lists were far too complex in the old API, with lots of weird edge cases and fiddly rules about what can and cannot be put into them (also refer to the document I posted on the previous page for some lovely examples of interaction between display lists and immediate mode). Agreed that a clean replacement for them would be nice.

GClements
07-12-2013, 03:35 AM
Removal of QUAD primitive types was, IMO, a mistake. One can simulate it with a geometry shader, but that seems awfully silly. As a side note, OpenGL ES3 does NOT have geometry shaders.

The main problem with quads was that their tessellation into triangles was unspecified, i.e. which diagonal was used for the split. It seems like it would have been simple enough to just specify the behaviour, although this might have had political ramifications (i.e. unless all drivers behaved the same way, any given choice would make some existing drivers "correct" and others "incorrect").
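
In index terms, the whole ambiguity is just this (a sketch; vertices 0-3 are the quad corners in order, and client-side indices are shown only for brevity):

/* One GL_QUADS face 0-1-2-3 has two possible triangulations, and the spec
   never said which one the implementation picks: */
static const GLuint split02[] = { 0, 1, 2,   0, 2, 3 };   /* diagonal 0-2 */
static const GLuint split13[] = { 0, 1, 3,   1, 2, 3 };   /* diagonal 1-3 */

/* Doing it yourself at least makes the choice explicit: */
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, split02);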



Removal of client side arrays (i.e. non-buffer object backed index and vertex buffers) was IMO a mistake as well. The use case of vertex and index data changing from frame to frame got ickier. With client side arrays, the GL implementation did the right thing for you.

The main problem with client-side arrays is that the implementation has to assume the least efficient scenario, i.e. that the entire contents of all arrays changes between every draw call. Forcing the use of buffers requires the user to specify the behaviour explicitly, rather than simply adopting the path of least effort (which is also the path of least efficiency).

GClements
07-12-2013, 04:09 AM
Display lists were far too complex in the old API, with lots of weird edge cases and fiddly rules about what can and cannot be put into them
Display lists are actually very simple. If you understand what they are, along with OpenGL's client-server model (which allows the two to be separated by a network connection), you can usually figure out whether or not a command can be put into one.

As a general principle, any feature which requires synchronisation between client state and server state can't go in a display list, as that would require glCallList() to be "executed" on both the client and server so that the states don't become desynchronised. E.g. glBindBuffer() can't go into a display list as the client needs to know whether a buffer is bound in order to determine whether "pointer" arguments to glVertexPointer() etc are pointers to data which should be sent with each draw command, or offsets into server-side buffer objects.
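
A short sketch of that dependency - the very same argument means two different things depending on client-side state (variable names are placeholders):

glBindBuffer(GL_ARRAY_BUFFER, 0);
glVertexPointer(3, GL_FLOAT, 0, vertices);     /* pointer into client memory     */

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);    /* byte offset into the bound VBO */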

The fact that a display list can contain mismatched glBegin/glEnd commands, partial immediate-mode vertex specifications, etc is a consequence of display lists being nothing more than recording and playback of the command stream. Contrary to what might be inferred from "GL_COMPILE", implementations typically don't attempt to optimise the contents of a display list, they just store the commands verbatim.

A notable exception is that some OpenGL 1.0 implementations worked around the single-texture limitation (glGenTextures, glBindTexture etc were added in 1.1) by optimising display lists containing glTexImage() commands. But that's ancient history.

kRogue
07-12-2013, 05:23 AM
The main problem with client-side arrays is that the implementation has to assume the least efficient scenario, i.e. that the entire contents of all arrays changes between every draw call. Forcing the use of buffers requires the user to specify the behaviour explicitly, rather than simply adopting the path of least effort (which is also the path of least efficiency).


Indeed, a GL implementation essentially needs to "flush" the used vertices and indices from client-side memory to the GPU at each draw call, so yes, it sucks. However, there was no need to throw the baby out with the bath water. There were several ways out - for example, a set of additional calls giving the GL implementation "promises" that the client-side data would not change until the pointer was changed. The catch is that this can end up walking a similar road as the current buffer object Ouija board we have now. Right now, streaming vertex data is... a pain. Greater hilarity is in order for when memory is unified. The current GL interface for buffer object data is an embarrassment when one sees that all one wants is to write the data into memory with the promise that one is not writing into that memory while the GPU is using it.
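
For anyone who hasn't played, the Ouija board in question looks roughly like this - one of several possible incantations, with sizes and names made up:

/* Per frame: orphan the old storage, then map without synchronization so the
   driver (hopefully) hands back fresh memory instead of stalling. */
glBindBuffer(GL_ARRAY_BUFFER, streamVbo);
glBufferData(GL_ARRAY_BUFFER, STREAM_BUFFER_SIZE, NULL, GL_STREAM_DRAW);  /* orphan */
void *dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytesThisFrame,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(dst, vertexData, bytesThisFrame);
glUnmapBuffer(GL_ARRAY_BUFFER);
/* ...then glDrawArrays()/glDrawElements() as usual. */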



The main problem with quads was that their tessellation into triangles was unspecified, i.e. which diagonal was used for the split. It seems like it would have been simple enough to just specify the behaviour, although this might have had political ramifications (i.e. unless all drivers behaved the same way, any given choice would make some existing drivers "correct" and others "incorrect").


Or just provide a query for what the behavior is and leave it undefined. There are plenty of bits in the GL specification that are undefined and left to the vendor's discretion - for example, whether or not a pixel is rasterized when a triangle edge passes exactly through its center; all one has is that neighboring triangles sharing an edge shall not rasterize the same pixels twice.

kyle_
07-18-2013, 07:27 AM
The main problem with client-side arrays is that the implementation has to assume the least efficient scenario, i.e. that the entire contents of all arrays changes between every draw call.

No they don't. If they can sniff changes (or lack of them) they can be efficient (or at least, more efficient).
NV does such optimization on Windows iirc.

hlewin
07-22-2013, 12:55 AM
One big problem with client-side arrays in their current form is that their overall size is unknown to the GL implementation. One can easily cause access violations using them, or - even worse - just pull data from random memory locations without anything special happening immediately, because there is no possibility of bounds checking (not to say that GL implementations should check the buffer bounds). So, especially for embedded devices, it was the right decision not to support them IMHO, as the principal requirement that a GL call never crash the program but return with an error in the case of bad operation parameters may not be possible to achieve if they are in use - this goes a little beyond my knowledge of different architectures. For WebGL, the security holes that would potentially be introduced with them are dramatic - who knows what data would follow in memory after some typed array in JavaScript...

mhagain
07-22-2013, 02:01 AM
One big problem with client-side arrays in their current form is that their overall size is unknown to the GL implementation. One can easily cause access violations using them, or - even worse - just pull data from random memory locations without anything special happening immediately, because there is no possibility of bounds checking (not to say that GL implementations should check the buffer bounds). So, especially for embedded devices, it was the right decision not to support them IMHO, as the principal requirement that a GL call never crash the program but return with an error in the case of bad operation parameters may not be possible to achieve if they are in use - this goes a little beyond my knowledge of different architectures. For WebGL, the security holes that would potentially be introduced with them are dramatic - who knows what data would follow in memory after some typed array in JavaScript...

True, but the same applies to any GL call that requires a client-side memory pointer (or returns one that may be written to): glBufferData, glTexImage, glMapBuffer, etc.

thokra
07-22-2013, 02:17 AM
... glVertexAttribPointer by specifying values that aren't correct even with VBOs. In conjunction with a corresponding draw call, you can easily go out of bounds of the buffer object's data store ... but that's what GL_ARB_robustness is for. Even if the draw call specifies the correct values for the set of primitives to be rendered, incorrectly setup arrays can still lead to access violations - and vice versa.

In general, at any time you operate on memory with accesses that aren't explicitly checked, caught and handled if incorrect, you can get into trouble.

hlewin
07-22-2013, 04:09 AM
True, but the same applies to any GL call that requires a client-side memory pointer (or returns one that may be written to): glBufferData, glTexImage, glMapBuffer, etc.
It's true that those can be fed with wrong data and cause crashes too, but with those arrays the errors are much harder to find. I can remember spending some time debugging old code that made massive use of TexCoordPointers. Due to some code change, the arrays didn't get disabled after that passage was done. This resulted in a situation where the program crashed an indefinite amount of time later, when other (correct) drawing code eventually tried to draw arrays that were large enough to exceed the old texcoord pointer's segment border or dedicated memory block or whatever, while quite a number of small arrays were drawn successfully in between.

Alfonse Reinheart
07-22-2013, 04:41 AM
True, but the same applies to any GL call that requires a client-side memory pointer (or returns one that may be written to): glBufferData, glTexImage, glMapBuffer, etc.

That's not true. The size of memory that is passed to glBufferData or glBufferSubData is an explicit parameter. Yes, the user can get it wrong, but at least the size is there, as opposed to glVertexAttribPointer, where there is no size at all. The size of the buffer for a pixel transfer is implicitly specified based on a complex function of the parameters to the pixel transfer call (and a few global parameters). With mapping, again the size is either explicitly given or is implicit.

And most important of all, all of these functions will be done with the pointer upon their return (well, except for mapping, but that's a special case). So any modifications to client memory are local modifications. Everything you need to know to verify that you have provided sufficient memory should be right there.

glVertexAttribPointer's boundaries are defined only by the eventual draw call, which may be an indexed rendering call that could fetch from quite simply anywhere. And the fetching is non-local; indeed, the place where the render call happens could be very far away from where the initial setup happened.

This all leads to it being nearly impossible to verify via inspection that any particular use of client-side arrays is safe. You will have to trace through a lot of non-local code to make sure.
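
To put the two side by side (sizes and variable names invented):

/* The upload call states its extent explicitly: 1024 bytes, right there. */
glBufferData(GL_ARRAY_BUFFER, 1024, positions, GL_STATIC_DRAW);

/* The attribute-pointer call states no extent at all; how far past the pointer
   the GL ends up reading is decided later, by whatever indices the eventual
   glDrawElements() call happens to fetch. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, positions);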

thokra
07-22-2013, 06:01 AM
glVertexAttribPointer's boundaries are defined only by the eventual draw call

I think this wording is misleading. Yes, I know what you mean, but you don't bound glVertexAttribPointer - the lower bound for fetching is established by glVertexAttribPointer. You can offset the lower bound but that doesn't change the fact that ultimately the lower bound is specified by the offset passed to glVertexAttribPointer. The upper bound is defined as a function of both the values specified by glVertexAttribPointer and the draw call. Your statement sounds as if a draw call could make up for an incorrectly specified offset in glVertexAttribPointer, for instance by specifying a negative first argument (which leads to undefined behavior) or by specifying negative indices (which will simply be converted to an unsigned integral type).

hlewin
07-22-2013, 06:25 AM
The upper bound is defined as a function of both the values specified by glVertexAttribPointer and the draw call.
Right. And that bound is (at least to some extend) given if the data resides in a BufferObject. But this makes things just a little better 'cause the the upper part of the buffer may still not be intended to draw. Having an opportunity to give the upper bound explicitely would likely ease error-tracking as the range of indices that can accidently be drawn without triggering a segv gets narrowed down.
On the other hand - if the attributes reside in a buffer, a draw call may be deferred to a later point in time in another thread, which makes tracking more difficult. But at least there is an implied upper bound, which makes it easier for the GL implementation to return with an appropriate error code (I guess signals/exceptions would have to be caught otherwise - I seldom run crashing code outside the debugger, so I don't have much experience with the default behaviour in such situations).

thokra
07-22-2013, 06:50 AM
And that bound is (at least to some extend) given if the data resides in a BufferObject.

The bound is specified for client-side arrays and vertex buffer objects. Only the place from where you fetch is different, i.e. system-memory or VRAM.


But this makes things just a little better

It doesn't make anything better - at all. The upper bound can still be completely invalid - again, in both usage scenarios.


Having the opportunity to give the upper bound explicitly would likely ease error-tracking, as it narrows down the range of indices that can accidentally be drawn without triggering a segfault.

But you have the opportunity. It's all right there. The problem is: you also have the opportunity to screw everything up. But that's inherently a problem of unchecked random memory access - it's nothing specific to OpenGL. That's why they specified GL_ARB_robustness (http://www.opengl.org/registry/specs/ARB/robustness.txt). When enabled, you at least avoid program termination, but the fetched values are still undefined. One advantage is that you can probably see that fetching goes wrong, and for which objects in your scene it goes wrong. That narrows down the choices when searching for the error.


I seldom run crashing code outside the debugger, so I don't have much experience with the default behaviour in such situations

I actually don't understand that sentence. The default behavior of a crashing application is a crash. That's actually not the worst that could happen. The worst would be a running application with bogus memory accesses and inconsistent state that runs in perpetuity.

hlewin
07-22-2013, 07:09 AM
The bound is specified for client-side arrays and vertex buffer objects. Only the place from where you fetch is different, i.e. system-memory or VRAM.
I guess you got me wrong there. When calling VertexAttribPointer there is no upper bound given in that API call - if it refers to a buffer, then an upper bound is implied by the buffer size and the offset and stride parameters.
I have a quite specific scenario in mind: a VertexAttribPointer gets bound (be it to client memory or a GL buffer), enabled, and for whatever reason does not get disabled. Another scenario is an attrib pointer getting bound, enabled, and an out-of-range index emitted (which is technically the first scenario once the bug becomes apparent). It is not that
values are still undefined makes me happy in that case. The implementation should return immediately with an appropriate error code then. The point was to narrow down when this is possible and what could be done to improve the error-spotting ability of the GL implementation, to make debugging GL applications easier - security considerations aside: whether any of the driver's threads run with elevated permissions is beyond my knowledge.

thokra
07-22-2013, 08:02 AM
then an upper bound is implied by the buffer size and the offset and stride parameters.

That goes for client-side arrays as well. There is always an implicit upper and lower bound. No matter where you allocate memory. The problem is moving past them explicitly with glVertexAttribPointer and draw calls.

For fun's sake I just checked a very wrong piece of code with a Radeon HD 7970 and Catalyst 13.4 on Linux. You can do the craziest crap, like render a billion points way out of bounds of the underlying data store, and it will neither terminate, nor simply render nothing, nor give you any hint via debug output. It will invoke the current shader program as expected and draw, with a somewhat random pattern, a square in the xz plane. That's cool.
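Something along these lines (a sketch of such a deliberately wrong draw, not the exact code used; sizes and the attribute index are invented):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 64 * sizeof(GLfloat), NULL, GL_STATIC_DRAW);  /* tiny data store */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
glDrawArrays(GL_POINTS, 0, 1000000000);               /* a billion points, far beyond the store */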

Alfonse Reinheart
07-22-2013, 08:32 AM
That goes for client-side arrays as well. There is always an implicit upper and lower bound. No matter where you allocate memory. The problem is moving past them explicitly with glVertexAttribPointer and draw calls.

But OpenGL can verify the size of storage for buffer objects. It can detect when you're trying to fetch outside of the bound range and, given robust access, allow it to return an innocuous value rather than arbitrary memory or potentially crash.

You can't do that with an arbitrary client-side pointer, because the range information simply isn't there. All it has is the lower bound, not the upper.

thokra
07-22-2013, 09:04 AM
You can't do that with an arbitrary client-side pointer, because the range information simply isn't there. All it has is the lower bound, not the upper.

Yes, that's true.


The implementation should return immediately with an appropriate error code then.

Do you mean in the context of robustness or in general?

hlewin
07-23-2013, 04:55 AM
Do you mean in the context of robustness or in general?
I mean the general case. Something like


BindBuffer...
BufferData...
VertexAttribPointer...
ArrayElement(1000000000)

should return with an error instead of rendering crap. Notice the buffer: this is the case where such a check is easily possible - and for WebGL such bounds checks are a MUST imho. A segfault/access violation typically occurs only when the access leaves the memory dedicated to the whole process, which with WebGL may be the whole web browser process. It would be worth a try writing a WebGL shader that uses this to search for the data of the website the shader is running on.
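When a buffer is bound, the implied bound can also be recovered by hand; a sketch of the kind of check I mean (assuming the attribute data is tightly packed vec3 floats in the currently bound GL_ARRAY_BUFFER, and "index" is the index about to be emitted):

GLint bufferSize = 0;
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize);
GLuint vertexCount = (GLuint)bufferSize / (3 * sizeof(GLfloat));   /* implied upper bound */
if (index >= vertexCount)
    fprintf(stderr, "index %u out of range, buffer only holds %u vertices\n", index, vertexCount);

Only if the check passes would one go on and call ArrayElement; this is essentially the bound an implementation (or WebGL) could enforce on its own.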

carsten neumann
07-23-2013, 07:26 AM
WebGL checks that indices are valid; see the WebGL spec, section 6.5 (https://www.khronos.org/registry/webgl/specs/1.0/#ATTRIBS_AND_RANGE_CHECKING).

hlewin
07-23-2013, 08:38 AM
That's a good thing.

@thokra: You don't understand the principal argument. If I do NOT know what ARB_robustness is, the chances are quite a bit higher that out-of-range accesses will happen to me. The spec should have been the other way around. I quite like that it is enforced with WebGL.
ARB_robustness is not even mentioned in the wglCreateContextAttribs spec. An ARB_unchecked_... would have made much more sense. And the spec should have required returning with an error code instead of rendering zeroes. That would point out such errors right where they happen.
This may also be read as a critique of the absence of self-contained documentation, but that's another story.

Sorry for taking this thread apart.

thokra
07-23-2013, 08:50 AM
You don't understand the principal argument.

I don't? You want universal bounds-checks, where applicable, without the need to create a robustness context, i.e. the default. Plus, since it should have been the other way around, one would need to create a non-robust context explicitly. That about right?


ARB_robustness is not even mentioned in the wglCreateContextAttribs spec.

It's all right here (http://www.opengl.org/registry/specs/ARB/wgl_create_context_robustness.txt) ...
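In short, robust buffer access is requested at context creation, roughly like this (a sketch; assumes WGL_ARB_create_context and WGL_ARB_create_context_robustness are supported, extension loading and error handling omitted):

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_FLAGS_ARB,         WGL_CONTEXT_ROBUST_ACCESS_BIT_ARB,
    WGL_CONTEXT_RESET_NOTIFICATION_STRATEGY_ARB, WGL_LOSE_CONTEXT_ON_RESET_ARB,
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs);   /* hdc: your device context */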


Sorry for taking this thread apart.

This thread has been pretty much worthless in large part anyway. ;)

hlewin
07-23-2013, 08:58 AM
ARB_robustness seems to do a whole lot more than simply enable bounds-checking on array calls, but besides that, yes. The defaults should imho be settings that make debugging as easy as possible. Squeezing out the last bit of performance during optimization is the right time to look for special context flags.

Alfonse Reinheart
07-23-2013, 11:48 AM
The defaults should imho be settings that make debugging as easy as possible. Squeezing out the last bit of performance during optimization is the right time to look for special context flags.

And that would have meant slowing down every program written before that. Many such programs aren't even supported anymore, so patches to restore their prior performance won't be forthcoming.

If we were talking about the GL 1.1 days, I might agree. But practical needs trump ideals. By default, OpenGL goes for performance because it has always done so.

hlewin
07-23-2013, 12:23 PM
That is really a good point I didn't think about, although I can remember reading assurances that indices were checked and so on back when I first started messing with OpenGL, which was around 2000, I guess. I remember something like "GL calls never crash the system". But that probably wasn't the official spec. Another point is that debugging safety could be enforced by the GL window-handling frameworks that tutorials typically use.

thokra
07-24-2013, 03:31 AM
Another point is that debugging safety could be enforced by the GL window-handling frameworks that tutorials typically use.

Could you elaborate on this? I really don't see what you're going for.

Alfonse Reinheart
07-24-2013, 04:07 AM
I think he's saying that tools like GLFW and FreeGLUT should make robustness the default, forcing you to use a switch to get faster performance.

thokra
07-24-2013, 04:30 AM
I think he's saying that tools like GLFW and FreeGLUT should make robustness the default, forcing you to use a switch to get faster performance.

At least I'm not alone. ;) This, however, would directly contradict your previous suggestion (which I completely agree with). Leave it at non-debug, non-robust as the default. Would be nice to have the option in FreeGLUT though.
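For reference, GLFW 3 already exposes this as a window hint, if I remember the API correctly (a sketch; window size and title are arbitrary):

glfwInit();
glfwWindowHint(GLFW_CONTEXT_ROBUSTNESS, GLFW_LOSE_CONTEXT_ON_RESET);
GLFWwindow *window = glfwCreateWindow(640, 480, "robust context", NULL, NULL);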

hlewin
07-25-2013, 06:02 AM
Right and wrong. Looking at the learning curve, there is
1. Windowing frameworks like GLUT
2. Being happy about being able to create a context by oneself
3. Full performance as the goal

Number 1 could be done if contributors to such frameworks find the time (or people find the time to contribute) - this is related to
Number 2: the easiest way to create a context under Windows is wglCreateContext, which uses "default" flags. And that isn't robust, so Number 1 is unlikely to be robust either.
Only when one reaches Number 3 - using context flags - does one get debugging. Hopefully that feeds back into Number 1.

Brandon J. Van Every
08-01-2013, 02:01 PM
N.B. From an ease of use standpoint I'd dearly like to design my own 3D HW using a sufficiently fast FPGA or some such, and completely ditch NVIDIA, AMD, Intel, and everyone in the industry's concerns. Maybe within the next 20 years we won't need GPUs anymore. Meanwhile, OpenGL is the design-by-committee approach, and the squabbling in this thread is a strategic artifact of that. DirectX is the proprietary approach and it has a somewhat cleaner API. Both are still limited by the consolidation of the IHV playing field however. With only 2 APIs, 3 IHVs, and a complex problem space, the engineering results are inevitably these huge macro crappy things. As seen from the standpoint of someone with more of a RISC aesthetic, that is.

Having said all that, I hope Khronos manages to get rid of as much of OpenGL as possible, because at least it's fewer cases to worry about and do driver development and testing for. Even if the resulting programming model is more difficult for novices.