Thread: Official feedback on OpenGL 4.3 thread

  1. #31
    Intern Contributor
    Join Date
    Aug 2009
    Posts
    66
    I would like to see more removal of deprecated functions and other cleaning. We have the core and compatibility profiles now; there is no reason not to remove old cruft from the core profile.

    For people who find that it is more trouble than it's worth: if it is too much work to replace obsolete functions and rebuild the obsolete parts of your algorithms, why adapt the program to a new OpenGL version at all? New features can force a rethink of your architecture just as much.

    I was a bit disappointed when I saw that the OpenGL ES spec has a section of legacy features, because OpenGL ES 2.0 was not backwards compatible; breaking compatibility again would have surprised few and would largely have been expected. Please do not keep backwards compatibility in OpenGL ES. It is simply not necessary, and dropping it allows for a lean specification without a lot of old cruft that makes conforming drivers more work to build.

  2. #32
    Advanced Member Frequent Contributor
    Join Date
    Dec 2007
    Location
    Hungary
    Posts
    985
    Quote Originally Posted by mhagain View Post
    The old API wouldn't let you do that without respecifying the full set of vertex attrib pointers; the new one lets you do it with a single BindVertexBuffer which - because stride and offset are separate state - can be much more efficient.
    That's a good point, I can accept that one. Though still not as flexible as programmable vertex fetching.
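
    To make that concrete, here is a rough sketch (not from either post; the buffer name, stride and offsets are invented) of what the rebind looks like in each API:

    Code :
    /* Old API: pointing the attributes at a different buffer/offset/stride means
       respecifying every pointer. */
    glBindBuffer(GL_ARRAY_BUFFER, lodVbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, lodStride, (const void*)posOffset);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, lodStride, (const void*)nrmOffset);

    /* GL 4.3: the attribute formats stay untouched; only binding point 0 is repointed,
       and the new offset and stride come along with it. */
    glBindVertexBuffer(0, lodVbo, lodBaseOffset, lodStride);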

    Quote Originally Posted by Eosie
    Even though it might not make much sense to you from a theoretical standpoint, the reason the spec's been written like that is that it maps perfectly on the current hardware. There's no other reason. The stride is just part of the vertex buffer binding.

    Maps perfectly to what hardware? It may fit one vendor's and not another's. Lots of extensions map well to some hardware but are pretty inefficient on other hardware. OpenGL tries to match what hardware does as closely as possible, but, as usual, there is no one-size-fits-all design, so assuming that whatever OpenGL supports maps perfectly to any hardware that exposes it is simply naive.
    Disclaimer: This is my personal profile. Whatever I write here is my personal opinion and none of my statements or speculations are anyhow related to my employer and as such should not be treated as accurate or valid and in no case should those be considered to represent the opinions of my employer.
    Technical Blog: http://www.rastergrid.com/blog/

  3. #33
    Member Regular Contributor
    Join Date
    Apr 2009
    Posts
    268
    Quote Originally Posted by aqnuep View Post
    Maps perfectly to what hardware?
    Probably to the Khronos members' hardware; otherwise, I guess the feature would have been vetoed from the core spec.

  4. #34
    Junior Member Regular Contributor
    Join Date
    Jan 2004
    Location
    Czech Republic, EU
    Posts
    190
    Quote Originally Posted by aqnuep View Post
    Maps perfectly to what hardware? May fit one, may not fit another. Lots of extensions map well to some hardware but may be pretty inefficient on other. OpenGL tries to match what hardware does as best as possible, however, as usual, there is no one-fits-all design so just assuming that whatever OpenGL supports maps perfectly to any hardware supporting it is just too naive.
    Of course it doesn't map to all hardware in existence, but it maps exactly to AMD and NVIDIA hardware from the GL3-capable chipsets onwards. I don't really care about the rest.
    (usually just hobbyist) OpenGL driver developer

  5. #35
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,212
    There's also the point that it will make it easier to extend the vertex buffer API going forward (assuming that it maps to, and continues to map to, hardware, of course), which would result in cleaner, more robust drivers with fewer shenanigans going on behind the scenes. It also sets a clear precedent for something like a hypothetical GL_ARB_multi_index_buffers in a hypothetical future version (a similar API could be used) - and that's something I don't think even Alfonse could nitpick over.

  6. #36
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    This assumes that all of your VBO streams are going to be using the same stride or offset, which is not always the case. You may have a different VBO for texcoords as you have for position and normals, and you may only need to change the stride or offset for the position/normals VBO. The old API wouldn't let you do that without respecifying the full set of vertex attrib pointers; the new one lets you do it with a single BindVertexBuffer which - because stride and offset are separate state - can be much more efficient.
    That doesn't explain what this has to do with LODs.

    Furthermore, glDrawElementsBaseVertex already dealt with the "offset" issue quite well; you can just render with different indices, using a base index added to the indices you fetch. No need to make glVertexAttribPointer calls again.
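
    For reference, a rough sketch of that base-vertex path (my own example, not from the post; the counts and offsets are invented) - two meshes packed into one vertex/index buffer pair, drawn without touching the attrib pointers in between:

    Code :
    /* basevertex is added to every fetched index, so the second mesh's indices can
       start at zero even though its vertices sit after mesh A's in the buffer. */
    glDrawElementsBaseVertex(GL_TRIANGLES, meshAIndexCount, GL_UNSIGNED_SHORT, (const void*)0, 0);
    glDrawElementsBaseVertex(GL_TRIANGLES, meshBIndexCount, GL_UNSIGNED_SHORT, (const void*)meshAIndexBytes, meshAVertexCount);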

  7. #37
    Senior Member OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,212
    Quote Originally Posted by Alfonse Reinheart View Post
    That doesn't explain what this has to do with LODs.

    Furthermore, glDrawElementsBaseVertex already dealt with the "offset" issue quite well; you can just render with different indices, using a base index added to the indices you fetch. No need to make glVertexAttribPointer calls again.
    Mental note to self: don't pull random examples out of my ass when the Spanish Inquisition are around.

    See my comment on the previous page for one case where a base index is insufficient.

  8. #38
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    New topic: internalformat_query2.

    There's one minor issue that's effectively a spec bug.

    Internalformat_query2 allows you to query GL_FRAMEBUFFER_RENDERABLE. All well and good. But the spec doesn't really say what it means for a format to have FULL_SUPPORT. It just says:

    Quote Originally Posted by TheSpec
    The support for rendering to the resource via framebuffer attachment is returned in <params>
    What I mean is this: the spec allows GL_FRAMEBUFFER_UNSUPPORTED to be returned when you use a format that isn't supported, or a combination of formats that isn't supported. However, the spec sets aside a number of formats which are not allowed to trigger GL_FRAMEBUFFER_UNSUPPORTED; you can use any combination of these formats and the implementation is required to accept it.

    If I test a format that isn't on OpenGL's required list, and it returns FULL_SUPPORT, does that mean that I can never get UNSUPPORTED if I use it? No matter what? The spec doesn't say. The exact behavior is not detailed, only that it is "supported".

    I think section 9.4.3 should be amended to read:

    Quote Originally Posted by Amended Spec
    Implementations must support framebuffer objects with up to MAX_COLOR_ATTACHMENTS color attachments, a depth attachment, and a stencil attachment. Each color attachment may be in any of the required color formats for textures and renderbuffers described in sections 8.5.1 and 9.2.5. The depth attachment may be in any of the required depth or combined depth+stencil formats described in those sections, and the stencil attachment may be in any of the required combined depth+stencil formats. However, when both depth and stencil attachments are present, implementations are only required to support framebuffer objects where both attachments refer to the same image.

    Any internal format that offers FULL_SUPPORT from the FRAMEBUFFER_RENDERABLE query can be used in non-layered attachments in any combination with other required formats or formats that offer FULL_SUPPORT. Any internal format that offers FULL_SUPPORT from the FRAMEBUFFER_RENDERABLE_LAYERED query may be used in layered attachments in any combination with other required formats or formats that offer FULL_SUPPORT.
    This would give the query some actual teeth, because right now, it's not clear what it means to fully support FRAMEBUFFER_RENDERABLE. Also, it explains what CAVEAT support means: that the format can be used in certain combinations with other formats, but it's not guaranteed to work in combination with all other fully supported formats. NONE obviously means that it can never be renderable no matter what.
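
    For context, this is the sort of query being discussed (my own sketch; the target and format here are arbitrary examples):

    Code :
    GLint support = GL_NONE;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGB9_E5, GL_FRAMEBUFFER_RENDERABLE, 1, &support);
    /* support is now GL_FULL_SUPPORT, GL_CAVEAT_SUPPORT or GL_NONE - but the current spec
       wording doesn't pin down which attachment combinations GL_FULL_SUPPORT guarantees. */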

    Quote Originally Posted by mhagain View Post
    See my comment on the previous page for one case where a base index is insufficient.
    I think there's a basic failure to communicate here. I understand why we want to have a separation between buffer objects and vertex formats. I understand how that's useful.

    I don't understand why we need to have a separation between strides and vertex formats. That's my only problem with the extension: that the stride is with the buffer and not the format.

    Eosie suggests that it's a hardware thing, and I can understand that. However, NVIDIA's bindless graphics API also provides separation between format and buffers (well, GPU addresses, but effectively the same thing: buffer+offset). And yet there, they put the stride in the format.

    So why the discrepancy? I imagine that NVIDIA's bindless accurately describes how their hardware works, more or less.
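
    To make the state split concrete, here is a minimal sketch of the 4.3 API (my own example; the interleaved position+normal layout and the 24-byte stride are invented):

    Code :
    /* The vertex format describes layout within a vertex but carries no stride... */
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);     /* position at relative offset 0  */
    glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 12);    /* normal at relative offset 12   */
    glVertexAttribBinding(0, 0);                           /* both attributes read binding 0 */
    glVertexAttribBinding(1, 0);
    /* ...the stride is only stated when a buffer is attached to the binding point. */
    glBindVertexBuffer(0, vbo, 0, 24);

    In the bindless extension, as noted above, the stride sits with the per-attribute format call instead, which is exactly the discrepancy being questioned.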

  9. #39
    Senior Member OpenGL Pro
    Join Date
    Apr 2010
    Location
    Germany
    Posts
    1,128
    First of all, thank you ARB. The spec seems much clearer in many respects.

    There is ClearBufferSubData, why is there not an analogue for textures? Or is it in the specification and I missed it?
    No, as far as I can tell there isn't. I find the whole extension strange and certainly confusing, at least to a beginner. Try explaining why one should use

    Code :
    glClearBufferData(GL_ARRAY_BUFFER, GL_R32F, GL_RED, GL_FLOAT, 0); /* a null data pointer fills the buffer with zeros */

    to someone trying to initialize a vertex buffer with zeros instead of simply calling glBufferData with an appropriately sized array of zeros. BTW, I hope I set this call up correctly.

    It's good to have a memset() equivalent, both for convenience and for performance when resetting a buffer during rendering (i.e. transferring a single value instead of a complete set of data), but currently I can't imagine many convincing use cases that justify introducing 2 (or 4) new API entry points. If anyone has some, please share.

  10. #40
    Senior Member OpenGL Guru
    Join Date
    May 2009
    Posts
    4,948
    currently I can't imagine many convincing use cases that justify introducing 2 (or 4) new API entry points.
    There needs to be a SubData version for clearing part of a buffer. Sometimes, that's really what you want to do. At the same time, there should be a Data version for clearing the whole thing, without having to query its size.
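
    As a rough sketch of what the two variants give you (my own example; the offset and size are invented):

    Code :
    /* Clear the entire buffer to zero without querying its size first. */
    glClearBufferData(GL_ARRAY_BUFFER, GL_R32F, GL_RED, GL_FLOAT, NULL);
    /* Clear only a 4 KB region starting 1 KB into the buffer. */
    glClearBufferSubData(GL_ARRAY_BUFFER, GL_R32F, 1024, 4096, GL_RED, GL_FLOAT, NULL);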

    Now, why they bothered with the non-DSA versions when several extensions don't provide non-DSA versions... that's a good question.
