OpenGL.org Discussion Forums
Results 1 to 10 of 14

Thread: A new spec means more begging.

  1. #1
Advanced Member Frequent Contributor | Join Date: Apr 2009 | Posts: 592

    A new spec means more begging.

    Naturally, the best time to beg is sooner. My beg list:
    1. Put into core GL_ARB_bindless_texture.
2. Make ARB analogues of GL_NV_shader_buffer_load and GL_NV_shader_buffer_store. I think ARB-ing GL_NV_vertex_buffer_unified_memory would be nice too.
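For readers who haven't used the extension: the bindless workflow replaces texture-unit binding with 64-bit handles. A minimal sketch using the ARB_bindless_texture entry points (assumes a current context exposing the extension and a program with a bindless sampler uniform; error checking omitted):

```c
/* Sketch: ARB_bindless_texture usage. Assumes a GL context with the
 * extension available. */
void use_bindless_texture(GLuint tex, GLuint program, GLint location)
{
    /* Obtain a 64-bit handle for the texture object... */
    GLuint64 handle = glGetTextureHandleARB(tex);

    /* ...make it resident so shaders may sample through it... */
    glMakeTextureHandleResidentARB(handle);

    /* ...and hand it to the shader directly; no texture unit needed. */
    glProgramUniformHandleui64ARB(program, location, handle);
}
```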



    Last edited by kRogue; 07-22-2013 at 10:58 AM. Reason: The difference between 6xx and GeForce 6

  2. #2
Member Regular Contributor malexander | Join Date: Aug 2009 | Location: Ontario | Posts: 319
Quote Originally Posted by kRogue
Put into core GL_ARB_bindless_texture. One thing that is odd: over on the nvidia download page for GL4.4 drivers, it says that GeForce 6 and up can do ARB_bindless_texture. Is that right? I thought that feature was Kepler-only...
It is indeed. But I believe they mean the GeForce 600 series and up, not the old 6000 series (thanks, marketing!).

  3. #3
Advanced Member Frequent Contributor | Join Date: Apr 2009 | Posts: 592
You are right: they wrote 6xx and they mean 6xx (Kepler), not 6xxx. Shudders.

  4. #4
Junior Member Regular Contributor | Join Date: Aug 2006 | Posts: 226
    It would be nice if BindBufferRange could accept any of the buffer targets, not just the 4 it does at the moment (GL_ATOMIC_COUNTER_BUFFER, GL_SHADER_STORAGE_BUFFER, GL_TRANSFORM_FEEDBACK_BUFFER and GL_UNIFORM_BUFFER).
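For comparison, here is the indexed call as it stands for one of those four targets. Note that for GL_UNIFORM_BUFFER the offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT (a sketch with an illustrative size; error checking omitted):

```c
/* Sketch: bind a sub-range of a buffer to indexed binding point 0
 * of GL_UNIFORM_BUFFER. The offset must respect the implementation's
 * GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. */
void bind_ubo_range(GLuint ubo)
{
    GLint align = 0;
    glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &align);

    GLintptr   offset = 0;    /* must be a multiple of 'align' */
    GLsizeiptr size   = 256;  /* illustrative block size in bytes */
    glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo, offset, size);
}
```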

    Regards
    elFarto

  5. #5
Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    It only accepts those four targets because those are the only four indexed targets.

  6. #6
Junior Member Regular Contributor | Join Date: Aug 2006 | Posts: 226
Then add a similar function for non-indexed targets. Although, I can't see a problem with using GL_ELEMENT_ARRAY_BUFFER with an index of 0 and just changing the semantics of BindBuffer to only affect index 0.

    Regards
    elFarto

  7. #7
Senior Member OpenGL Guru | Join Date: May 2009 | Posts: 4,948
    Let's look at every non-indexed target:

GL_ARRAY_BUFFER: It has special interactions with glVertexAttribPointer. The range of the buffer accessed is based on the vertex format and the list of vertices rendered, neither of which is known at bind time. Also, it's already been superseded by glBindVertexBuffer. Notably, that function doesn't take a size for the range, but it does take a stride.

GL_ELEMENT_ARRAY_BUFFER: Same as the previous; the range is defined by a glDraw*Elements call. You don't want to have to re-bind the buffer just to render from different parts of it.

GL_COPY_READ_BUFFER and GL_COPY_WRITE_BUFFER: The glCopyBufferSubData function already takes a range to copy, so there's no point in binding a range on top of that.

GL_PIXEL_UNPACK_BUFFER and GL_PIXEL_PACK_BUFFER: Same problem as the others: the functions that use these buffers (glTex(Sub)Image, glReadPixels) implicitly compute a range to copy from/to. They also take a start byte offset. So again, there's really no point.

GL_TEXTURE_BUFFER: This binding point has absolutely no semantics on it. Besides offering a hint that you intend to use the buffer with a buffer texture, there's no reason to bind to it.

GL_DRAW_INDIRECT_BUFFER and GL_DISPATCH_INDIRECT_BUFFER: Same as the others. The functions that use them already take a range.

So any such range bindings would be pointless. At best, they'd be redundant information. At worst, you'd really just be passing parameters through global state, which, as I understand it, is not considered a good aspect of the OpenGL API.

    There's simply no reason to do it.
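To illustrate the "the function already takes a range" point with the copy targets: glCopyBufferSubData carries both offsets and the size itself, so a bound range would be redundant. A sketch (error checking omitted):

```c
/* Sketch: the copy call names the full range explicitly, so binding
 * a range on GL_COPY_READ/WRITE_BUFFER would add nothing. */
void copy_buffer_region(GLuint src, GLuint dst,
                        GLintptr srcOffset, GLintptr dstOffset,
                        GLsizeiptr size)
{
    glBindBuffer(GL_COPY_READ_BUFFER,  src);
    glBindBuffer(GL_COPY_WRITE_BUFFER, dst);
    glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                        srcOffset, dstOffset, size);
}
```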

  8. #8
Advanced Member Frequent Contributor | Join Date: Apr 2009 | Posts: 592
One thing that was kind of a WTF moment for me was that glBindVertexBuffer takes the stride instead of glVertexAttribFormat; I wouldn't mind a glVertexAttribFormatStride call that takes a stride value taking precedence over the value in glBindVertexBuffer. Oh well.
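For anyone who hasn't seen the split API: the stride does indeed live on the buffer binding, not the format. A sketch of the GL 4.3 / ARB_vertex_attrib_binding style (attribute location, binding index, and layout are illustrative):

```c
/* Sketch: separate attribute format from buffer binding. The stride
 * is a property of glBindVertexBuffer, not of glVertexAttribFormat. */
void setup_position_attrib(GLuint vbo)
{
    const GLuint attrib  = 0;  /* illustrative attribute location */
    const GLuint binding = 0;  /* illustrative binding index */

    /* Format: 3 floats, not normalized, relative offset 0. */
    glVertexAttribFormat(attrib, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexAttribBinding(attrib, binding);

    /* The buffer binding carries the base offset and the stride. */
    glBindVertexBuffer(binding, vbo, 0, 3 * sizeof(float));
    glEnableVertexAttribArray(attrib);
}
```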

  9. #9
Senior Member OpenGL Pro | Join Date: Apr 2010 | Location: Germany | Posts: 1,128
    And as always: standardize DSA across the board. Pretty please. Making functionality in extensions that have been promoted to core depend on GL_EXT_direct_state_access is so not cool.

    As MJK just recently put it here (albeit in a different context pointed to by Alfonse):

    Quote Originally Posted by MJK
    I think the much better solution is to incorporate the existing proven DSA mechanisms into core GL standards. While vendors may argue about this, ISVs are simply using the proven DSA interfaces in their code. In larger part this problem is created by the lack of standardization of DSA interfaces. It creates a lot of frustration for developers when the ARB chooses some functions to provide selector free interfaces (e.g. glProgramUniform* in OpenGL 4.1, etc. but not others).
    Go ARB!
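The glProgramUniform* calls MJK mentions are a good illustration of what "selector-free" means in practice. A sketch contrasting the two styles (names are illustrative):

```c
/* Sketch: selector-based vs selector-free uniform updates.
 * glProgramUniform* (core since GL 4.1) names the program directly,
 * so it doesn't depend on the currently bound program. */
void set_scale(GLuint program, GLint location, float s)
{
    /* Selector-based style: depends on hidden current-program state.
     *   glUseProgram(program);
     *   glUniform1f(location, s);
     */

    /* Selector-free (DSA-style): no hidden selector involved. */
    glProgramUniform1f(program, location, s);
}
```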

  10. #10
Senior Member OpenGL Pro | Join Date: Jan 2007 | Posts: 1,201
Full DSA will never go core; the ARB is just not going to bring deprecated stuff like glMatrixLoadfEXT into a new core GL version.

Some of DSA has already gone core, such as the glProgramUniform calls. Some newer functionality, such as sampler objects, was specified with a DSA API from the outset. Other newer functionality, such as vertex attrib binding, removes the need for DSA.

Right now a cleanup of the texture object API is needed much more badly than DSA; there's stuff in there going back to GL 1.0 and it's a mess. It could be respecified in a DSA manner (or in a manner that removes the need for DSA), and that would be enough.
