Thread: Rendering without Data: Bugs for AMD and NVIDIA

  1. #1
    Senior Member OpenGL Guru

    Rendering without Data: Bugs for AMD and NVIDIA

    According to the OpenGL 3.3 core specification, it should be possible to bind a completely empty VAO (with all attribute arrays disabled) and render with it via the glDraw*Arrays calls. The vertex shader will get default values for any numbered inputs, gl_VertexID will be filled in, etc.

    According to the OpenGL 3.3 compatibility specification, it is not possible to do this. Compatibility specifically says that either attribute 0 or the legacy glVertex array must be enabled, or rendering fails with GL_INVALID_OPERATION.

    Both AMD and NVIDIA get this wrong, though in different ways.

    On AMD, it fails in both core and compatibility. On NVIDIA it succeeds in both core and compatibility.
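
    To make this concrete, here is a minimal sketch of the core-profile case (assuming a 3.3 core context; shader and program creation boilerplate is omitted, and "program" is assumed to have been built from the shader shown in the comment):

    Code:
    /* Assumes a 3.3 core context.  "program" is built from a vertex
     * shader that derives positions purely from gl_VertexID, e.g.:
     *
     *   #version 330 core
     *   void main() {
     *       // full-screen triangle; no attributes, no buffers
     *       vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
     *       gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);
     *   }
     */
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);            /* completely empty VAO        */
    glUseProgram(program);             /* no attribute arrays enabled */
    glDrawArrays(GL_TRIANGLES, 0, 3);  /* legal in 3.3 core; an
                                          INVALID_OPERATION error in
                                          3.3 compatibility           */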

  2. #2
    Senior Member OpenGL Pro Aleksandar

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    So, that is 2:0 for NVIDIA!

    I know that it works on NV, and I'm using it extensively. Although a specification prescribes what should be implemented, I like NVIDIA's pragmatic implementation.

    It's great that you have drawn attention to the fact that AMD doesn't support something it should. I would rather change the Compatibility specification than lose functionality.

  3. #3
    Senior Member OpenGL Guru

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Aleksandar
    I would rather change the Compatibility specification than lose functionality.
    And I might agree, except you can't change the compatibility specification. It's already written; it's done, it exists, and it cannot be changed (outside of making higher version numbers).

    I'm talking about conformance with what the spec actually says, not what we might want to have.

    And NVIDIA allowing it in compatibility is just as bad as AMD disallowing it in core.

    Specifications exist for a reason. Being too permissive is no better than being too restrictive. Both of them are non-conforming.

    Indeed, I would go so far as to say that being too permissive is worse. Being too restrictive means that your code will still work on other implementations, even if it isn't as optimal. Being too permissive means that code will break on them.

    And then people who are used to that "functionality" will demand that a vendor who is conformant to the spec "fix" their implementation.
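
    To illustrate the asymmetry, here is a rough sketch of the defensive, "restrictive" style that runs everywhere (assuming a VAO and program are already bound, and that the shader never actually reads attribute 0):

    Code:
    /* Keep attribute 0 backed by a tiny dummy buffer.  This satisfies
     * the compatibility profile's attribute-0 requirement, and a core
     * implementation simply ignores the unused array. */
    GLuint dummy;
    glGenBuffers(1, &dummy);
    glBindBuffer(GL_ARRAY_BUFFER, dummy);
    glBufferData(GL_ARRAY_BUFFER, 3 * sizeof(float), NULL, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);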

  4. #4
    Senior Member OpenGL Pro Aleksandar's Avatar

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Alfonse Reinheart
    And I might agree, except you can't change the compatibility specification. It's already written; it's done, it exists, and it cannot be changed (outside of making higher version numbers).
    Of course, something already written and published cannot easily be changed, but, as you've said, there will be future releases.


    Quote Originally Posted by Alfonse Reinheart
    And NVIDIA allowing it in compatibility is just as bad as AMD disallowing it in core.
    Strongly disagree!
    The Compatibility profile should support all previous functionality as well as the current (Core) functionality. Do you agree with that?
    So:
    Core supports attributeless rendering AND Compatibility must support all functionality AND Core is a subset of that functionality => Compatibility must support attributeless rendering!

    Quote Originally Posted by Alfonse Reinheart
    Indeed, I would go so far as to say that being too permissive is worse. Being too restrictive means that your code will still work on other implementations, even if it isn't as optimal. Being too permissive means that code will break on them.
    Agreed, if portability is your primary goal.

  5. #5
    Senior Member OpenGL Pro

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Alfonse Reinheart
    And NVIDIA allowing it in compatibility is just as bad as AMD disallowing it in core.
    This cannot be emphasised enough. Allowing functionality in compatibility that should not work is bad, bad, bad, and it undermines NVIDIA's usefulness as a platform to develop on.

    Quote Originally Posted by Aleksandar
    Agreed, if portability is your primary goal.
    ...or if having a reasonable guarantee that your program stands at least a decent chance of running on anything other than NVIDIA is any kind of a goal for you.

  6. #6
    Senior Member OpenGL Guru

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Aleksandar
    Of course, something already written and published cannot easily be changed, but, as you've said, there will be future releases.
    Not any that support 3.3-class hardware. All future releases will focus on the GL 4.x line of hardware and above.

    The OpenGL Specification Version 3.3 will always say what it says now. And therefore, all OpenGL implementations on this hardware should conform to that.

    They might make an extension to expose this, but it's strange to make a compatibility-only extension.

    Quote Originally Posted by Aleksandar
    The Compatibility profile should support all previous functionality as well as the current (Core) functionality. Do you agree with that?
    You're trying to have a different conversation. The conversation you want to have is "what should the specification say?" That's not what I'm talking about. I'm talking about "what does the specification say?" Whether I agree about what the spec ought to say is irrelevant; the spec is what it is and it says what it says.

    What matters is that both AMD and NVIDIA are deficient in this regard. One is too restrictive, the other too permissive. Both answers are wrong.

    Quote Originally Posted by Aleksandar
    Agreed, if portability is your primary goal.
    If you happen to live in that tiny, sequestered bubble called "NVIDIA-OpenGL", fine. Be happy there. But anyone living in the rest of the world must accept the simple reality that their code will be run on non-NVIDIA hardware.

    And that NVIDIA-only world? It's slowly but surely getting smaller.

  7. #7
    Member Regular Contributor

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Alfonse Reinheart
    Indeed, I would go so far as to say that being too permissive is worse. Being too restrictive means that your code will still work on other implementations, even if it isn't as optimal. Being too permissive means that code will break on them.
    Being too restrictive means that code written against the spec (or against a conformant implementation) will break. The situation is pretty much identical.

  8. #8
    Senior Member OpenGL Guru

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Being too restrictive means that code written against the spec (or against a conformant implementation) will break. The situation is pretty much identical.
    True. But specifications aren't implementations. You can't literally write your code against the specification. You can think you have; you can read your code carefully and believe you have done everything according to the spec. But the only way to know is to actually run it, and that requires an implementation. This is why conformance tests are so important.

    If something that ought to work fails, you can chalk it up to a driver bug. If something works as you expect it to, you generally don't question it. That's why permissiveness is more dangerous: it's easy not to know that you're relying on off-spec behavior.

    After all, how many people do you think actually know that 3.3 core allows you to render without buffer objects, while 3.3 compatibility does not? This is an esoteric (though useful in some circumstances) use case, one that's rarely if ever covered in documentation or secondary sources.

    People generally do not read specifications. They read secondary documentation (reference manuals, the Red Book, etc.) or tertiary materials (online tutorials, someone else's code). They know what they've been exposed to. They know what they've been shown. And they know what their implementation tells them works or doesn't work.

    This means that if you write code on a permissive implementation, it may not work on a conformant one. Whereas if you write code on a restrictive implementation, it will still work on a conformant one.

    Both non-conformant implementations are wrong, but only one leads to writing non-portable code.
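
    And if you want to know which behavior your driver actually gives you, the only reliable way is to probe it at runtime. A sketch of such a probe (the function name is mine; it assumes a current context with an empty VAO bound):

    Code:
    /* Returns nonzero if the implementation accepts an attribute-less
     * draw.  Per the 3.3 specs this should be nonzero under core and
     * zero (GL_INVALID_OPERATION) under compatibility. */
    int attributeless_draw_accepted(void)
    {
        while (glGetError() != GL_NO_ERROR)
            ;                              /* clear stale error flags */
        glDrawArrays(GL_POINTS, 0, 1);     /* no enabled arrays       */
        return glGetError() != GL_INVALID_OPERATION;
    }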

  9. #9
    Senior Member OpenGL Pro Aleksandar

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Alfonse Reinheart
    Not any that support 3.3-class hardware. All future releases will focus on the GL 4.x line of hardware and above.
    You know very well that the ARB created a mess with the GL 3.3/4.0 release. All the extensions introduced into the GL 4.1 core are supported by SM4 hardware. The story that new specs won't support "older" hardware is pretty frivolous.

    Quote Originally Posted by Alfonse Reinheart
    You're trying to have a different conversation. The conversation you want to have is "what should the specification say?"
    Exactly! Why should we stick to something written down as if it were Holy Scripture? Even specs can contain errors.

    You didn't disagree with my "logical statement", so it is true (and it should be, if all the premises are correct; it's pure logic).

  10. #10
    Senior Member OpenGL Guru

    Re: Rendering without Data: Bugs for AMD and NVIDIA

    Quote Originally Posted by Aleksandar
    You know very well that the ARB created a mess with the GL 3.3/4.0 release. All the extensions introduced into the GL 4.1 core are supported by SM4 hardware. The story that new specs won't support "older" hardware is pretty frivolous.
    I don't understand the problem. Yes, most of the 4.1 feature set is supported by 3.x hardware. But those features are also available as extensions. There's no need to make a GL 3.4 just to bring a few extensions into core; that's a waste of the ARB's time.

    Quote Originally Posted by Aleksandar
    Why should we stick to something written down as if it were Holy Scripture? Even specs can contain errors.
    Because if you don't stick to what the spec actually says, you have chaos. The purpose of a specification is to give some reasonable assurance of conformance: that everyone who implements it is implementing the same thing, and that they all provide exactly and only the described behavior.

    The purpose of a specification is not to suggest. It is not to imply. It is to state exactly and only what the legal commands are.

    You can want the specification to change. But until it does, non-conformance to it is wrong.

    Quote Originally Posted by Aleksandar
    You didn't disagree with my "logical statement", so it is true
    You fail logic forever. Not disagreeing with a statement does not make it true.

    I said that it was irrelevant for this conversation. The veracity of your statement is not the issue at hand.
