Thread: A quote from - Mark J. Kilgard Principal System Software Engineer nVidia

  1. #1 - Junior Member Newbie (joined Jun 2013, 25 posts)

    A quote from - Mark J. Kilgard Principal System Software Engineer nVidia

    ... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.

    The truth is that modern OpenGL implementations are highly tuned at processing immediate mode; there are many simple situations where immediate mode is more convenient and less overhead than configuring and using vertex arrays with buffer objects.

    http://www.slideshare.net/Mark_Kilga...ferobjectswell

    //================================================================================

    This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the '90s, and has been doing so on behalf of one of the two biggest names in gaming hardware. Given what he said as a representative of nVidia, I feel it is safe to assume that no functionality will be dropped from OpenGL anytime in the near future, so far as nVidia hardware and drivers are concerned. I may be going out on a limb here by saying this, but I suspect that AMD/ATI will hold fast to this as well. My logic is as follows: despite the lack of any public statement on this matter from ATI representatives, we can safely assume that AMD/ATI is not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.
    One may also conclude from this that many other features of the OpenGL API that people are now afraid to use will not be going anywhere, nor should they.

    Issues will arise for people who want to branch into mobile development if they are not careful with certain aspects of the more "dated" API functions, but it is also very likely that much of what is currently available in the broad OpenGL API will become increasingly available on handhelds as their GPUs and driver models become more sophisticated. On desktops, OpenGL is almost fully backwards compatible going back 15 years. This is true for ATI and nVidia, and even Intel has been following this model as best they can with their little purse-sized computers.
    Last edited by marcClintDion; 07-03-2013 at 12:47 AM.

  2. #2 - Senior Member OpenGL Pro (Germany; joined Apr 2010, 1,099 posts)
    I feel it is safe to assume that no functionality will be dropped from OpenGL anytime in the near future
    It has already been dropped from core OpenGL. The only reason the old stuff is still around is GL_ARB_compatibility, which allows vendors to keep supporting all the features in a single driver.

    My logic is as follows: despite the lack of any public statement on this matter from ATI representatives, we can safely assume that AMD/ATI is not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.
    The actual safe bet is simply to use recent features. Although there's no indication of when or whether the major vendors will finally drop the old stuff, I personally hope they eventually will. On Linux, Intel does not expose GL_ARB_compatibility when you create a GL 3.1 context - IIRC it's the same for Apple and Mac OS X.

    will become increasingly available on handhelds
    GLES2 has no fixed-function pipeline and no immediate mode. Neither does GLES3.
    Last edited by thokra; 07-03-2013 at 01:34 AM.

  3. #3 - Junior Member Newbie (joined Jun 2013, 25 posts)
    Personally, I'd like to see all of the indie developers who don't have huge budgets or teams have all the tools they need to succeed with their visions. More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present. The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU. It takes years for the GPU manufacturers to get things working consistently with one another. It has been this way since the beginning of GPUs.

    I can't imagine why someone would want features stripped out of an API just because they don't care to use them. Personally, I'm going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer • nVidia - Mark J. Kilgard. Those functions belong in there, and what MJK says on this matter is likely the position of the entire development team at nVidia. I can't imagine why people would push to remove these features when one of the lead programmers for a long-standing major GPU manufacturer is saying that this should not happen.
    Wait, yes I can imagine why..

    I know of one person who is pushing for this, and I also know that this same person is selling a book on the newer APIs. He likely views the tons of free open-source material that's available to everyone as his direct competition. He wants people to pay him instead of being able to learn for free.

  4. #4 - Senior Member OpenGL Pro (Germany; joined Apr 2010, 1,099 posts)
    More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present.
    No, more functions mean a bloated specification, more effort to implement that specification, and more effort to test and optimize it.

    The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU.
    And who is responsible for making implementations behave as they should? That's right: guys like MJK. Driver quality has always been an OpenGL problem - and you know why new features don't get well tested? In part, because people like you, who are relentlessly clinging to legacy stuff, just won't implement anything using the new features and thus cannot find bugs to report. Of course, even if you report bugs, there's no guarantee they will be fixed, especially if you're a hobbyist or indie developer. And even my company, which has good relations with both NVidia and AMD and is at the top of its field, probably won't have a shot - then again, we're relying heavily on legacy code ourselves. A displeasing, but currently unchangeable, fact.

    It takes years for the GPU manufacturers to get things working consistently with one another. It has been this way since the beginning of GPUs.
    Again, they can only fix bugs that are found. A conformance test suite would help, but the ARB, and subsequently NVidia and AMD, don't dedicate the time and money to develop such a thing. Anyway, an implementation is a black box to an OpenGL developer, and we rely on the vendors to do their job right. If they always did, your argument couldn't even be brought up.

    I can't imagine why someone would want features stripped out of an API just because they don't care to use them.
    Well, how about this for a reason: the ARB itself decided to do so - a decision that was carried by NVidia and AMD. We've had this topic many times here on the forums, and the conclusion has always been that legacy code paths might be as fast as, or faster than, certain core GL code paths - simply because the legacy stuff has been developed for decades and has reached a highly optimized state. That doesn't mean it's good.

    Personally I'm going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer
    In daily business, API refactoring, deprecation and removal are common - at least in a code base that has existed for over a decade. The reason is simple: decisions that were made at the time of conception might have made sense then. If those reasons no longer apply and using the API is cumbersome, not future-proof, or prone to errors, it should be revamped.

    Immediate mode is such an example, IMHO. It was OK at the beginning, but it could have been replaced with vertex arrays and VBOs fairly early on. In general, sending a bunch of vertex attributes over the bus every time you render something is simply idiotic - especially if we're talking about complex models of which there might be hundreds or thousands per frame. BTW, MJK says the same thing. The example he uses, a rectangle (or more generally "rendering primitives with just a few vertices"), is only valid for simple prototyping IMHO. Probably every rendering engine out there encapsulates state and logic for simple primitives in appropriate data structures, so uploading a unit quad to a VBO at application start-up isn't really a problem once you've written the code.
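    To make that concrete, here is a minimal sketch of the "upload a unit quad once at start-up, reuse it every frame" pattern. It assumes a context where the GL 1.5/2.0 buffer-object and generic-attribute entry points are available (e.g. loaded through GLEW), that a core profile would additionally have a VAO bound, and the function names are just placeholders:

    Code:
    #include <GL/glew.h>   /* or any loader that provides the GL 1.5/2.0 entry points */

    /* Unit quad as two triangles; uploaded once, drawn many times. */
    static const GLfloat unit_quad[] = {
        0.0f, 0.0f,   1.0f, 0.0f,   1.0f, 1.0f,   /* first triangle  */
        0.0f, 0.0f,   1.0f, 1.0f,   0.0f, 1.0f    /* second triangle */
    };

    GLuint create_unit_quad_vbo(void)            /* call once at init */
    {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(unit_quad), unit_quad, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return vbo;
    }

    void draw_unit_quad(GLuint vbo)              /* call every frame */
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(0);            /* attribute 0 = 2D position */
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glDisableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }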

    The convenience argument is simply not good enough to defend immediate mode. The debugging argument is kinda OK - however, if you know what you're doing and have some experience, VBOs and VAOs are not hard to debug either. The performance argument is simply not valid: you cannot compare code paths that have not been tweaked and tested to roughly the same extent.

    EDIT: BTW, nowadays, when scenes consist of hundreds of thousands to millions of polygons per frame, wanting to keep immediate mode around for, among other things, a few simple primitives is simply hilarious. The same goes for fixed-function lighting - if someone's too incompetent to come up with a simple Gouraud shader when desired, they should just give up OpenGL altogether.
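    For reference, a "simple Gouraud shader" of the kind mentioned here really is short. Below is a sketch of a per-vertex diffuse lighting pair for a single directional light, written as C string literals ready to hand to glShaderSource(); the uniform and attribute names (u_mvp, u_normal_matrix, u_light_dir, u_diffuse, a_position, a_normal) are made up for the example:

    Code:
    /* Per-vertex (Gouraud) diffuse lighting: the lighting maths runs in the
     * vertex shader and the fragment shader just outputs the interpolated colour. */
    static const char *gouraud_vs =
        "#version 130\n"
        "uniform mat4 u_mvp;\n"
        "uniform mat3 u_normal_matrix;\n"
        "uniform vec3 u_light_dir;   /* normalized, eye space */\n"
        "uniform vec3 u_diffuse;     /* material colour       */\n"
        "in vec3 a_position;\n"
        "in vec3 a_normal;\n"
        "out vec3 v_colour;\n"
        "void main() {\n"
        "    vec3  n  = normalize(u_normal_matrix * a_normal);\n"
        "    float nl = max(dot(n, u_light_dir), 0.0);\n"
        "    v_colour = u_diffuse * nl;\n"
        "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
        "}\n";

    static const char *gouraud_fs =
        "#version 130\n"
        "in vec3 v_colour;\n"
        "out vec4 frag_colour;\n"
        "void main() {\n"
        "    frag_colour = vec4(v_colour, 1.0);\n"
        "}\n";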
    Last edited by thokra; 07-03-2013 at 07:23 AM.

  5. #5 - Junior Member Newbie (joined Jun 2013, 25 posts)
    Quote Originally Posted by thokra View Post
    In part, because people like you, who are relentlessly clinging to legacy stuff.
    I don't use legacy code. I was defending the rights of people who do. It's interesting that right after you made this demeaning comment about me, you went on to say that the people you work for are still using legacy code. So basically you are saying that I have the same point of view as the people you are subservient to.

    One could logically assume that, since you don't want legacy code being used where you work and it still is being used there, you do not have the sway or power that you would have us believe. You said:
    Quote Originally Posted by thokra View Post
    And even my company, which has good relations with both NVidia and AMD...
    you were stretching things a bit here. It is not your company; they hired you. And just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.
    Last edited by marcClintDion; 07-03-2013 at 09:04 PM.

  6. #6 - Senior Member OpenGL Pro (Germany; joined Apr 2010, 1,099 posts)
    you made this demeaning comment about me
    It wasn't meant to be demeaning. Granted, it might have sounded a little harsh. Still, that doesn't make it untrue.

    I don't use legacy code. I was defending the rights of people who do.
    But the people who do shouldn't do so anymore, if possible. If they're constrained by other business-related factors, I'm the last person to accuse them of not going the extra mile. Still, a core driver and a separate legacy driver would be a much better solution IMHO. People who still need, or (even if that doesn't make any sense to me) want, to rely on legacy GL could do so with a legacy driver. However, I'm perfectly aware that this would put quite a burden on the guys at NVIDIA and AMD. Thinking about it, if all vendors actually agreed on simply dropping support starting on day X, what are people going to do? Rewrite their whole rendering code in Direct3D because they're pissed off about the disappearance of legacy support? I don't think so. Breaking backwards compatibility is never a fun thing, but sometimes I think it's necessary to take software to a higher level.

    So basically you are saying that I have the same point of view as the people you are subservient to. One could logically assume that, since you don't want legacy code being used where you work and it still is being used there, you do not have the sway or power that you would have us believe.
    Nope, any technical novelty is pretty much embraced in principle around here. It's just the lack of time, and the fear of alienating customers, that keeps us from implementing them. Still, if I were asked to take a stand, I would take the same position as above - even with the people I'm subservient to. The fact is, I know there's no room for improving this at the moment, and yes, I'm in no position to demand that we rewrite our whole rasterization code. However, that doesn't mean it wouldn't be a good idea.

    and just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.
    Now that's just funny. Where did I demean any engineer? Does disagreeing equal demeaning now? I didn't state anything that isn't true - if you disagree, feel free to have at me. And the people who hired me gave me a job in part because I have a pretty solid understanding of modern OpenGL. The fact that I call it "my company" is simply a testament to my liking my job and identifying with my employer - not to a belief that it is actually my company. How could anyone misunderstand that?

  7. #7 - Senior Member OpenGL Pro (joined Jan 2007, 1,136 posts)
    It's important to remember that Kilgard is viewing the world through NVIDIA-coloured glasses; of course NVIDIA would like it best if everyone wrote programs that worked best on their hardware (and the fact that they have a highly tuned immediate mode implementation going back to the last century means that this is one area they would support the continued use of) but that's not necessarily in the best interests of either developers or consumers. His technical credentials may well be impeccable, but he's still biased.

    For a fairly good idea of the kind of driver complexities that can arise from continued support of immediate mode, have a read of this: http://web.cecs.pdx.edu/~idr/publica...diate_mode.pdf. The document's actual topic is not really relevant here, and some of the points it raises (particularly wrt glMapBuffer, "array state containers" and instancing) are now outdated, but it does a great job of describing many of the weird corner cases and abuses that drivers need to deal with (and must support flawlessly, because the GL spec requires it) when implementing immediate mode. Never mind consistent support of GL 4.x features; GL 1.x on its own is a nightmare landscape of bear traps and unexploded landmines.

    This is exactly the problem that deprecation/removal sets out to solve. I don't know about you, but I'd certainly prefer if driver writers spent their time working on the stuff that really matters for a modern application rather than dealing with this kind of rubbish.

    It's incredibly disingenuous to imply that drawing without immediate mode falls into the category of "new, experimental API functions" - vertex arrays have been available in core OpenGL since version 1.1 (1997!) and as an extension before that, and VBOs have been in core since 1.5 (2003!) and likewise as an extension before that. I hope you didn't mean to give that implication, but it sure read that way.

    Regarding the dropping of other (or even all) legacy functionality: this is one of those theoretical objections that frequently come up but don't actually exist in the real world. I can say that with extreme confidence because a working, real-world model of discarding legacy functionality (and even of completely throwing out the old API and redesigning a new one from scratch) already exists, is used, is popular, and is proven to work in the field. It's called Direct3D (and the fact that Direct3D drivers can be orders of magnitude more stable than OpenGL drivers just supports the assertion that this approach works). Seriously - this is a solved problem - you're just wasting your own time raising it as an objection.

    Everybody wants OpenGL to evolve and improve, but clinging to old rubbish that hinders that evolution and improvement is not the way to go about it. OpenGL didn't lose the API war through shenanigans; it lost it through design short-sightedness, through letting the hardware get ahead of the core API's capabilities, through squabbling in committees, through not giving developers features that they needed, and through fragmentation due to multiple vendor-specific extensions for doing the same thing. Wanting to retain legacy features at the expense of moving things forward (especially at a time when its position could be strengthened again, as Microsoft seems to be completely losing the plot with the two most recent evolutions of D3D) isn't helpful.

  8. #8 - Junior Member Newbie (joined Jun 2013, 25 posts)
    EDIT: "sarcasm has been removed, now this post is mostly gone"

    This "war" has been almost completely one-sided and it has been Microsoft behaving this way. Well, Microsoft and people in forums bickering about which API is better.

    OpenGL is not going anywhere and it's only getting better as everyone's drivers become more robust and diverse.

    Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc.... Just to name a few big hitters who are all firmly in the scene.
    Last edited by marcClintDion; 07-05-2013 at 05:23 AM.

  9. #9 - Junior Member Newbie (joined Jun 2013, 25 posts)
    OpenGL ES 2.0 marked the first step towards the tomb of OpenGL, if such a thing is even possible.

    I am not concerned about myself as a developer. I have no problem at all with VBOs, VAOs, index buffers, FBOs, or even building a unique shader for every model I build; my run-time only uses these things.
    I am not at all concerned about having to put together a custom matrix math library - I've already done that.

    I am concerned about all the aspiring indie developers who show up here hoping to have a quick, easy start-up system that will bring them years ahead of the game. There are a lot of kids out there, and even stay-at-home dads, who want to do this, and now they have an extra 2-3 years of learning curve to deal with. This goes against the entire spirit of the free-to-learn open source community, which has libraries upon libraries of free research material available for download.

    Being able to access the fixed-function state in GLSL shaders is what makes OpenGL the best choice for beginners. To say otherwise is absurd. This feature puts shader programming into the hands of children - some of the more gifted ones, anyway. Most of the people who show up here will not be able to do all these things on their own if OpenGL is gutted any further.

    Expecting people who are just starting out not only to learn to use a matrix math library but also to implement that library by hand is absurd. Combine this with having to learn all the various subtleties of passing variables and matrices to the GPU, and things can soon become overwhelming for people who are new to all this.

    There are a lot of people in this world who want to make a game, and many of these indie games will enrich our lives. As more and more features are stripped from the OpenGL API, this dream will fall further out of reach for many people. Not only will we have lost variety, which is something that nurtures and encourages creativity, but we will also have lost the treasure trove of information that has been amassed over the past 15 years.

    I am concerned about all the people who are not going to have 5 years of doing things the easy way before they have to jump into the deep end and learn to do it all themselves in a more efficient manner.

    If you want an API that is constantly being gutted and rebuilt, then go over to DirectX; Microsoft will love it, and you'll be helping them black-ball people into buying the latest operating system they are selling.

    So far as "modern" OpenGL goes. It is incredibly absurd to pack a cross-hairs model, which only consists of two or three line segments into a VBO with indices when immediate mode can be set up to do this almost instantly and with much overhead. The set-up alone makes this impractical. The run-time code overhead makes doing this impracticable.

    Also, in the case of drawing bounding box outlines for visualizing and diagnosing collision detection algorithms, immediate mode is the only proper choice. Anything else would be bug-prone, over-done fluff.
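    To make the comparison concrete, here is roughly what the two code paths look like for a crosshair made of two line segments; the immediate-mode version assumes a compatibility profile, the buffer-object version assumes the GL 1.5/2.0 entry points are available, and all names are illustrative:

    Code:
    /* Immediate mode: nothing to create, nothing to clean up. */
    void draw_crosshair_immediate(void)
    {
        glBegin(GL_LINES);
            glVertex2f(-0.05f, 0.0f);  glVertex2f(0.05f, 0.0f);   /* horizontal */
            glVertex2f(0.0f, -0.05f);  glVertex2f(0.0f, 0.05f);   /* vertical   */
        glEnd();
    }

    /* Buffer-object path: a one-time upload at start-up, then a short draw call. */
    static const GLfloat crosshair[] = {
        -0.05f, 0.0f,    0.05f, 0.0f,
         0.0f, -0.05f,   0.0f,  0.05f
    };
    static GLuint crosshair_vbo;

    void init_crosshair(void)                    /* once at start-up */
    {
        glGenBuffers(1, &crosshair_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, crosshair_vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(crosshair), crosshair, GL_STATIC_DRAW);
    }

    void draw_crosshair_vbo(void)                /* every frame */
    {
        glBindBuffer(GL_ARRAY_BUFFER, crosshair_vbo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glDrawArrays(GL_LINES, 0, 4);
        glDisableVertexAttribArray(0);
    }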

    Mark J. Kilgard was right when he said that people need to be educated on the proper uses of these easy-to-use, powerful tools; people should not be told that they are wrong to use them.

    This is like telling someone they are a backwards hillbilly because they happen to own a hand saw. Electric saws may be the choice for most situations, but they are not necessarily the best choice for every situation.
    Last edited by marcClintDion; 07-04-2013 at 11:33 PM. Reason: Clarification: sentence re-structure

  10. #10 - Member Regular Contributor (joined Jun 2013, 474 posts)
    Quote Originally Posted by marcClintDion View Post
    I am concerned about all the aspiring indie developers who show up here hoping to have a quick, easy start-up system that will bring them years ahead of the game.
    If you want a quick easy start-up system, you use an off-the-shelf engine such as Unreal, Unity, etc.

    Quote Originally Posted by marcClintDion View Post
    There are a lot of kids out there, and even stay-at-home dads, who want to do this, and now they have an extra 2-3 years of learning curve to deal with.
    More like an extra 2-3 weeks. If it takes you longer than that to transition from compatibility to core, you aren't ready to be making commercial games (note: "independent" doesn't mean "amateur").

    Quote Originally Posted by marcClintDion View Post
    Accessing fixed-function variables in GLSL shaders is what makes OpenGL the best choice for beginners. To say otherwise is absurd.
    Being able to access fixed-function from a shader just means using a separate function for each variable rather than using glUniform() for everything.

    The main advantage of the compatibility variables is the ability to have most of your client-side code work the same way with or without shaders, so it's easier to write code which uses shaders where available but still works with 1.x.
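    As an illustration of the difference being described, here are two minimal vertex shaders, written as C string literals; the first uses the compatibility-profile built-ins that are fed through the fixed-function matrix calls, the second uses an ordinary uniform that you set yourself (the names u_mvp and a_position are made up):

    Code:
    /* Compatibility profile: the matrix arrives via glMatrixMode()/glLoadMatrixf()
     * and shows up in the shader as a built-in. */
    static const char *compat_vs =
        "#version 120\n"
        "void main() {\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";

    /* Core profile: the same matrix is just another uniform,
     * set client-side with glUniformMatrix4fv(mvp_location, 1, GL_FALSE, mvp). */
    static const char *core_vs =
        "#version 130\n"
        "uniform mat4 u_mvp;\n"
        "in vec4 a_position;\n"
        "void main() {\n"
        "    gl_Position = u_mvp * a_position;\n"
        "}\n";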

    Quote Originally Posted by marcClintDion View Post
    Expecting people who are just starting out not only to learn to use a matrix math library but also to implement that library by hand is absurd. Combine this with having to learn all the various subtleties of passing variables and matrices
    If you understand matrix math and can program, you can already implement most of the library, and it shouldn't take more than a few hours (the actual matrices for rotation, scaling, perspective etc. are all given in the online manual pages). The only bit that's even slightly complex is matrix inversion, which is only required for the normal matrix (assuming your modelview matrix isn't orthogonal) and for gluUnProject().
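    As a sketch of how small such a library can be, here is the perspective matrix exactly as given on the gluPerspective manual page, stored column-major the way OpenGL expects; the function name and layout are just one possible choice:

    Code:
    #include <math.h>

    /* out is a 16-element array in column-major order (out[col * 4 + row]),
     * equivalent to what gluPerspective() would multiply onto the current matrix. */
    void perspective(float out[16], float fovy_deg, float aspect,
                     float znear, float zfar)
    {
        const float f = 1.0f / tanf(fovy_deg * 3.14159265f / 360.0f);  /* cot(fovy/2) */
        int i;

        for (i = 0; i < 16; ++i)
            out[i] = 0.0f;

        out[0]  = f / aspect;
        out[5]  = f;
        out[10] = (zfar + znear) / (znear - zfar);
        out[11] = -1.0f;
        out[14] = (2.0f * zfar * znear) / (znear - zfar);
    }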

    One of the main reasons the matrix functions were deprecated is that they were largely pointless. For most real programs (i.e. not red-book examples), you need the matrices client-side anyway, e.g. for collision detection (using the OpenGL functions and then extracting the matrices with glGetDoublev(GL_MODELVIEW_MATRIX) etc. is somewhere between bad and horrendous in terms of performance). So you end up writing your own matrix functions anyhow - and not necessarily the same ones OpenGL uses; for example, rotation matrices are more likely to be generated from quaternions than from either Euler angles or axis-and-angle.
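    For example, the quaternion-to-rotation-matrix conversion mentioned above is only a handful of lines; this sketch assumes a unit quaternion (w, x, y, z) and writes a column-major 4x4 matrix:

    Code:
    /* Expand a unit quaternion into a column-major 4x4 rotation matrix. */
    void quat_to_matrix(float out[16], float w, float x, float y, float z)
    {
        out[0]  = 1.0f - 2.0f * (y * y + z * z);
        out[1]  = 2.0f * (x * y + w * z);
        out[2]  = 2.0f * (x * z - w * y);
        out[3]  = 0.0f;

        out[4]  = 2.0f * (x * y - w * z);
        out[5]  = 1.0f - 2.0f * (x * x + z * z);
        out[6]  = 2.0f * (y * z + w * x);
        out[7]  = 0.0f;

        out[8]  = 2.0f * (x * z + w * y);
        out[9]  = 2.0f * (y * z - w * x);
        out[10] = 1.0f - 2.0f * (x * x + y * y);
        out[11] = 0.0f;

        out[12] = out[13] = out[14] = 0.0f;
        out[15] = 1.0f;
    }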

    Someone who can't do this much for themselves is going to spend up to a week posting on the forums effectively asking for personalised tuition on everything from animation to parsing file formats to physics before realising that making a game is a few orders of magnitude more complex than they bargained for, and promptly giving up.
