
Thread: A quote from Mark J. Kilgard, Principal System Software Engineer, nVidia


  1. #1
     Junior Member, Newbie · Join Date: Jun 2013 · Posts: 25


    ... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.

    The truth is that modern OpenGL implementations are highly tuned at processing immediate mode; there are many simple situations where immediate mode is more convenient and less overhead than configuring and using vertex arrays with buffer objects.

    http://www.slideshare.net/Mark_Kilga...ferobjectswell
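
    To make the trade-off he's describing concrete, here is a minimal sketch of the two paths for a single triangle (plain C against a compatibility-profile context; the coordinates are illustrative only, not taken from the slides):

        /* Immediate mode: no setup at all, one call per vertex. */
        glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();

        /* Buffer-object path: more setup, but the data can live on the GPU. */
        static const GLfloat verts[] = {
            -0.5f, -0.5f,
             0.5f, -0.5f,
             0.0f,  0.5f,
        };
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, (const void *)0);
        glDrawArrays(GL_TRIANGLES, 0, 3);

    For three vertices the convenience gap is obvious; the buffered path only starts paying off as vertex counts grow.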

    //================================================================

    This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the 1990s, on behalf of one of the two biggest names in gaming hardware. Given what he said as a representative of nVidia, I feel it is safe to assume that no functionality will be dropped from OpenGL anytime in the near future, at least as far as nVidia hardware and drivers are concerned. I may be going out on a limb by saying this, but I suspect that AMD/ATI will hold fast to this as well. My logic is as follows: despite the lack of a public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.
    One may also conclude from this that many other features of the OpenGL API that people are now afraid to use will not be going anywhere, nor should they.

    Issues will arise for people who want to branch into mobile development if they are not careful with certain aspects of the more "dated" API functions, but it's also very likely that much of what is currently available in the broad OpenGL API will become increasingly available on handhelds, as their GPUs and driver models become more sophisticated. On desktops, OpenGL is almost fully backwards compatible going back 15 years. This is true for ATI and nVidia, and even Intel has been following this model as best it can with its little purse-sized computers.
    Last edited by marcClintDion; 07-03-2013 at 12:47 AM.

  2. #2
     Senior Member, OpenGL Pro · Join Date: Apr 2010 · Location: Germany · Posts: 1,128
    I feel it is safe to assume that no functionality will be dropped from OpenGL anytime in the near future
    It has already been dropped from core OpenGL. The only reason the old stuff is still around is GL_ARB_compatibility, which allows vendors to keep supporting all the features in a single driver.

    My logic is as follows: despite the lack of a public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly removing features that they currently support and have always supported.
    The actual safe bet is to simply use recent features. Although there's no indication as to when, or whether, major vendors will finally drop the old stuff, I personally hope they eventually will. On Linux, Intel does not expose GL_ARB_compatibility when you create a GL 3.1 context - IIRC it's the same for Apple and Mac OS X.
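
    For anyone who wants to test this themselves, here is one hedged way to request a core-profile context, sketched with GLFW (GLFW isn't mentioned above, and profiles only exist from GL 3.2 onwards; any toolkit that lets you set context attributes works the same way):

        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return 1;
            /* Ask for a 3.2 core-profile context: no GL_ARB_compatibility,
               no legacy entry points such as glBegin/glEnd. */
            glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
            glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
            glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
            glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); /* needed on OS X */
            GLFWwindow *win = glfwCreateWindow(640, 480, "core profile", NULL, NULL);
            /* ... render using core GL only ... */
            glfwDestroyWindow(win);
            glfwTerminate();
            return 0;
        }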

    will become increasingly available on handhelds
    GLES2 has no fixed-function pipeline and no immediate mode. Neither does GLES3.
    Last edited by thokra; 07-03-2013 at 01:34 AM.

  3. #3
     Junior Member, Newbie · Join Date: Jun 2013 · Posts: 25
    Personally, I'd like to see all of the indie developers who don't have huge budgets or teams have all the tools they need to succeed with their visions. More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present. The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU. It takes years for the GPU manufacturers to get things working consistently with regard to one another. It has been this way since the beginning of GPUs.

    I can't imagine why someone would want features stripped out of an API just because that person does not care to use them. Personally, I'm going to continue to agree with the fellow who holds the title of, once again, Principal System Software Engineer at nVidia - Mark J. Kilgard. Those functions belong in there, and what MJK says on this matter is likely the position of the entire development team at nVidia. I can't imagine why people would push to remove these features when one of the lead programmers for a long-standing major GPU manufacturer is saying that this should not happen.
    Wait, yes I can imagine why...

    I know of one person who is pushing for this, and I also know that this same person is selling a book on the newer APIs. He likely views the tons of free open-source material that's available to everyone as his direct competitor. He wants people to pay him instead of being able to learn for free.

  4. #4
     Senior Member, OpenGL Pro · Join Date: Apr 2010 · Location: Germany · Posts: 1,128
    More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present.
    No, more functions mean a bloated specification, more effort to implement that specification, and more effort to test and optimize it.

    The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn't necessarily work as expected on a machine with a different GPU.
    And who is responsible for making implementations behave as they should? That's right: guys like MJK. Driver quality has always been an OpenGL problem - and you know why new features don't get well tested? In part, because people like you, who are relentlessly clinging to legacy stuff, just won't implement anything using new features and thus cannot find bugs to report. Of course, even if you report bugs, there's no guarantee they will be fixed, especially if you're a hobbyist or indie developer. And even my company, which has good relations with both NVidia and AMD and is at the top of its field, probably won't have a shot - then again, we're relying heavily on legacy code. A displeasing, but currently unchangeable, fact.

    It takes years for the GPU manufacturers to get things working consistently with regard to one another. It has been this way since the beginning of GPUs.
    Again, they can only fix bugs that are found. A conformance test suite would help, but the ARB, and subsequently NVidia and AMD, don't dedicate time and money to developing such a thing. Anyway, an implementation is a black box to an OpenGL developer, and we rely on vendors to do their job right. If they always did, your argument couldn't even be brought up.

    I can't imagine why someone would want features stripped out of an API just because that person does not care to use them.
    Well, how about this for a reason: the ARB itself decided to do so - a decision that was carried by NVidia and AMD. We've had this topic many times here on the forums, and the conclusion has always been that legacy code paths might be as fast as, or faster than, certain core GL code paths - simply because the legacy stuff has been developed for decades and has reached a highly optimized state. That doesn't mean it's good.

    Personally, I'm going to continue to agree with the fellow who holds the title of, once again, Principal System Software Engineer
    In daily business, API refactoring, deprecation, and removal are common - at least in a code base that has existed for over a decade. The reason is simple: decisions made at the time of conception might have made sense then. If those reasons no longer exist and using the API is cumbersome, not future-proof, or prone to errors, it should be revamped.

    Immediate mode is such an example, IMHO. It was OK at the beginning, but it could have been replaced with vertex arrays and VBOs fairly early. In general, sending a bunch of vertex attributes over the bus every time you render something is simply idiotic - especially if we're talking about complex models, of which there might be hundreds or thousands per frame. BTW, MJK says the same thing. The example he uses, a rectangle (or more generally "rendering primitives with just a few vertices"), is only valid for simple prototyping, IMHO. Probably every rendering engine out there encapsulates state and logic for simple primitives in appropriate data structures, so uploading a unit quad to a VBO at application start-up isn't really a problem once you've written the code - see the sketch below.
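
    A rough sketch of that start-up pattern, assuming GL 2.0+ generic vertex attributes (the names quad_vbo, create_unit_quad, and draw_unit_quad are made up for illustration):

        static GLuint quad_vbo;

        /* Call once at application start-up: the quad lives on the GPU from here on. */
        static void create_unit_quad(void)
        {
            static const GLfloat quad[] = {
                0.0f, 0.0f,
                1.0f, 0.0f,
                1.0f, 1.0f,
                0.0f, 1.0f,
            };
            glGenBuffers(1, &quad_vbo);
            glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
        }

        /* Call every frame, as often as needed - no vertex data crosses the bus. */
        static void draw_unit_quad(void)
        {
            glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
            glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
        }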

    The convenience argument is simply not good enough to defend immediate mode. The debugging argument is kinda OK - however, if you know what you're doing and have some experience, VBOs and VAOs are not hard to debug either. The performance argument is simply not valid: you cannot compare code paths which have not been tweaked and tested to roughly the same degree.

    EDIT: BTW, nowadays, when scenes consist of hundreds of thousands to millions of polygons per frame, wanting to keep immediate mode around for, among other things, a few simple primitives is simply hilarious. The same goes for fixed-function lighting - if someone's too incompetent to come up with a simple Gouraud shader (see the sketch below) when desired, they should just give up on OpenGL altogether.
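
    To underline how little code that actually is, here is a sketch of a per-vertex (Gouraud) diffuse shader - the uniform and attribute names are made up, and it covers only the diffuse term of what fixed-function lighting computes:

        // Vertex shader: classic Gouraud shading, lighting evaluated per vertex.
        #version 150 core
        uniform mat4 u_mvp;            // modelview-projection matrix
        uniform mat3 u_normal_matrix;  // inverse-transpose of the upper 3x3 modelview
        uniform vec3 u_light_dir;      // normalized direction towards the light, eye space
        uniform vec3 u_diffuse;        // material diffuse colour
        in  vec3 in_position;
        in  vec3 in_normal;
        out vec3 v_colour;
        void main()
        {
            vec3 n      = normalize(u_normal_matrix * in_normal);
            v_colour    = u_diffuse * max(dot(n, u_light_dir), 0.0); // Lambert term
            gl_Position = u_mvp * vec4(in_position, 1.0);
        }

        // Fragment shader: just outputs the interpolated vertex colour.
        #version 150 core
        in  vec3 v_colour;
        out vec4 frag_colour;
        void main() { frag_colour = vec4(v_colour, 1.0); }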
    Last edited by thokra; 07-03-2013 at 07:23 AM.

  5. #5
     Junior Member, Newbie · Join Date: Jun 2013 · Posts: 25
    Quote Originally Posted by thokra
    In part, because people like you, who are relentlessly clinging to legacy stuff.
    I don't use legacy code. I was defending the rights of people who do. It's interesting that right after you made this demeaning comment about me, you went on to say that the people you work for are still using legacy code. So basically you are saying that I have the same point of view as the people you are subservient to.

    One could logically assume that since you don't want legacy code being used where you work, and since it is still being used there, you do not have the sway or power that you would have us believe. You said:
    Quote Originally Posted by thokra
    And even my company, which has good relations with both NVidia and AMD...
    You were stretching things a bit here. It is not your company; they hired you. And just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.
    Last edited by marcClintDion; 07-03-2013 at 09:04 PM.

  6. #6
     Senior Member, OpenGL Pro · Join Date: Apr 2010 · Location: Germany · Posts: 1,128
    you made this demeaning comment about me
    It wasn't meant to be demeaning. Granted, it might have sounded a little harsh. Still, that doesn't make it untrue.

    I don't use legacy code. I was defending the rights of people who do.
    But the people who do shouldn't do so anymore, if possible. If they're constrained by other business-related factors, I'm the last person to accuse them of not going the extra mile. Still, a core driver and a legacy driver would be a much better solution, IMHO. People who still need to rely on legacy GL or, even if that doesn't make any sense to me, want to, could do so with a legacy driver. However, I'm perfectly aware that it would put quite a burden on the guys at NVIDIA and AMD. Thinking about it, if all vendors actually agreed on simply dropping support starting on day X, what are people going to do? Rewrite their whole rendering code in Direct3D because they're pissed off about the disappearance of legacy support? I don't think so. Breaking backwards compatibility is never a fun thing, but sometimes I think it's necessary to take software to a higher level.

    So basically you are saying that I have the same point of view as the people you are subservient to. One could logically assume that since you don't want legacy code being used where you work, and since it is still being used there, you do not have the sway or power that you would have us believe.
    Nope, any technical novelty is pretty much embraced in principle around here. It's just the lack of time, or the fear of alienating customers, that keeps us from implementing them. Still, if I were asked to take a stand, I would take the same position as above - even with the people I'm subservient to. The fact is, I know there's no room for improving this at the moment, and yes, I'm in no position to demand we rewrite our whole rasterization code. However, that doesn't mean it wouldn't be a good idea.

    and just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.
    Now that's just funny. Where did I demean any engineer? Does disagreeing equal demeaning now? I didn't state anything that isn't true - if you disagree, feel free to have at me. And the people who hired me gave me a job in part because I have a pretty solid understanding of modern OpenGL. The fact that I call it "my company" is simply a testament to my liking my job and identifying with my employer - not because I believe it is actually my company. How could anyone misunderstand that?
