Part of the Khronos Group
OpenGL.org

Page 5 of 5
Results 41 to 48 of 48

Thread: A quote from - Mark J. Kilgard Principal System Software Engineer nVidia

  1. #41
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    ARB_robustness seems to do a whole lot more than simply enable bounds-checking on array calls, but besides that, yes. The defaults should, imho, be settings that make debugging as easy as possible. The right time to look for special context flags is when you're squeezing out the last bit of performance during optimization.
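
    For anyone wondering what "robust buffer access" actually buys you, here is a minimal plain-C sketch of the idea. This is not driver code, and the function names are made up for illustration: an unchecked fetch reads whatever memory an out-of-range index points at (undefined behaviour), while a robust fetch returns a defined value such as zero, which is what ARB_robustness permits for out-of-bounds reads.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical illustration of robust vs. non-robust attribute fetches.
     * ARB_robustness allows an out-of-bounds read to return zero (or some
     * value from within the buffer) instead of touching arbitrary memory. */

    /* Unchecked fetch: an out-of-range index is undefined behaviour. */
    static float fetch_unchecked(const float *buf, size_t n, size_t i)
    {
        (void)n;            /* size is ignored; nothing stops a bad read */
        return buf[i];
    }

    /* Robust fetch: out-of-range indices yield a defined 0.0f. */
    static float fetch_robust(const float *buf, size_t n, size_t i)
    {
        return (i < n) ? buf[i] : 0.0f;
    }

    int main(void)
    {
        float verts[3] = { 1.0f, 2.0f, 3.0f };

        assert(fetch_robust(verts, 3, 1) == 2.0f);  /* in range: same result */
        assert(fetch_robust(verts, 3, 7) == 0.0f);  /* out of range: defined */
        /* fetch_unchecked(verts, 3, 7) would be undefined behaviour. */
        assert(fetch_unchecked(verts, 3, 0) == 1.0f);
        return 0;
    }
    ```

    The bounds check is exactly the kind of per-access cost the rest of this thread argues about making (or not making) the default.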

  2. #42
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    The defaults should, imho, be settings that make debugging as easy as possible. The right time to look for special context flags is when you're squeezing out the last bit of performance during optimization.
    And that would have meant slowing down every program written before that. Many such programs aren't even supported anymore, so patches to restore their prior performance won't be forthcoming.

    If we were talking about the GL 1.1 days, I might agree. But practical needs trump ideals. By default, OpenGL goes for performance because it has always done so.

  3. #43
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    That is really a good point I hadn't thought about. Although, I can remember reading assurances that indices were checked and so on, back when I first started messing with OpenGL, which was around 2000, I guess. I remember something like "GL calls never crash the system". But that probably wasn't the official spec. Another point is that debugging safety could be enforced by the GL window-handling frameworks typically used by tutorials etc.

  4. #44
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Another point is that debugging safety could be enforced by the GL window-handling frameworks typically used by tutorials etc.
    Could you elaborate on this? I really don't see what you're going for.

  5. #45
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    I think he's saying that tools like GLFW and FreeGLUT should make robustness the default, forcing you to use a switch to get faster performance.

  6. #46
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    Quote Originally Posted by Alfonse
    I think he's saying that tools like GLFW and FreeGLUT should make robustness the default, forcing you to use a switch to get faster performance.
    At least I'm not alone. This, however, would directly contradict your previous suggestion (which I completely agree with). Leave it at non-debug, non-robust as the default. Would be nice to have the option in FreeGLUT though.

  7. #47
    Junior Member Regular Contributor
    Join Date
    Nov 2012
    Location
    Bremen, Germany
    Posts
    167
    Right and wrong. Looking at the learning curve, there are three stages:
    1. Windowing frameworks like GLUT
    2. Being happy about being able to create a context oneself
    3. Full performance as the goal

    Number 1 could be done if contributors to such frameworks find the time (or people find the time to contribute). This is related to
    Number 2: the easiest way to create a context under Windows is wglCreateContext, which uses "default" flags. Those are not robust, so the frameworks in number 1 are unlikely to be robust either.
    When one reaches number 3 and starts using context flags, one gets debugging. Hopefully that feeds back into number 1.
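
    To make the "default flags" point concrete: opting into debug or robust contexts on Windows means going through wglCreateContextAttribsARB with an attribute list, rather than plain wglCreateContext. Below is a sketch of building that list, using the enum values from the WGL_ARB_create_context and WGL_ARB_create_context_robustness specs; the actual context creation call is commented out, since it needs a live window, a dummy context, and extension-pointer loading.

    ```c
    #include <assert.h>

    /* Enum values from the WGL_ARB_create_context(_robustness) specs. */
    #define WGL_CONTEXT_MAJOR_VERSION_ARB     0x2091
    #define WGL_CONTEXT_MINOR_VERSION_ARB     0x2092
    #define WGL_CONTEXT_FLAGS_ARB             0x2094
    #define WGL_CONTEXT_DEBUG_BIT_ARB         0x0001
    #define WGL_CONTEXT_ROBUST_ACCESS_BIT_ARB 0x0004

    int main(void)
    {
        /* Attribute list for wglCreateContextAttribsARB(), requesting a
         * 3.3 context with the debug and robust-access flags set. Plain
         * wglCreateContext() gives you neither of these flags. */
        const int attribs[] = {
            WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
            WGL_CONTEXT_MINOR_VERSION_ARB, 3,
            WGL_CONTEXT_FLAGS_ARB,
                WGL_CONTEXT_DEBUG_BIT_ARB | WGL_CONTEXT_ROBUST_ACCESS_BIT_ARB,
            0   /* list is zero-terminated */
        };

        /* With a window DC and the extension pointer loaded, you would do:
         * HGLRC ctx = wglCreateContextAttribsARB(hdc, NULL, attribs); */

        assert(attribs[5] == 0x0005);  /* both flag bits are set */
        assert(attribs[sizeof attribs / sizeof attribs[0] - 1] == 0);
        return 0;
    }
    ```

    A framework that wanted robustness by default would only need to inject those two extra bits into the flags it already passes here.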

  8. #48
    Intern Contributor
    Join Date
    Mar 2010
    Location
    Winston-Salem, NC
    Posts
    62
    N.B. From an ease-of-use standpoint I'd dearly like to design my own 3D hardware using a sufficiently fast FPGA or some such, and completely sidestep the concerns of NVIDIA, AMD, Intel, and the rest of the industry. Maybe within the next 20 years we won't need GPUs anymore. Meanwhile, OpenGL is the design-by-committee approach, and the squabbling in this thread is a strategic artifact of that. DirectX is the proprietary approach, and it has a somewhat cleaner API. Both are still limited by the consolidation of the IHV playing field, however. With only two APIs, three IHVs, and a complex problem space, the engineering results are inevitably these huge, crufty macro-designs. As seen from the standpoint of someone with more of a RISC aesthetic, that is.

    Having said all that, I hope Khronos manages to get rid of as much of OpenGL as possible, if only because it means fewer cases to worry about and to do driver development and testing for. Even if the resulting programming model is more difficult for novices.
