A new spec means more begging.

Naturally, the best time to beg is sooner rather than later. My beg list:
1. Put GL_ARB_bindless_texture into core (a quick sketch follows below).
2. Make an ARB analogue of GL_NV_shader_buffer_load and GL_NV_shader_buffer_store… I think that ARB-ing GL_NV_vertex_buffer_unified_memory would be nice too.
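For reference, a minimal sketch of how bindless texturing works under GL_ARB_bindless_texture today, using the extension's actual entry points (tex, prog and loc are assumed to be a valid texture, program and uniform location):

GLuint64 handle = glGetTextureHandleARB(tex);       /* create a bindless handle for the texture */
glMakeTextureHandleResidentARB(handle);             /* the handle must be resident before use   */
glProgramUniformHandleui64ARB(prog, loc, handle);   /* sample it in a shader, no glBindTexture  */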

Put GL_ARB_bindless_texture into core. One thing that is odd: over on the NVIDIA download page for the GL 4.4 drivers, it says that GeForce 6 and up can do ARB_bindless_texture. Is that right? I thought that feature was Kepler-only…

It is indeed. But I believe they mean the GeForce 600 series and up, not the old 6000 series (thanks, marketing!).

You are right: they wrote 6xx, and they mean 6xx (Kepler), not 6xxx. Shudders.

It would be nice if BindBufferRange could accept any of the buffer targets, not just the four it does at the moment (GL_ATOMIC_COUNTER_BUFFER, GL_SHADER_STORAGE_BUFFER, GL_TRANSFORM_FEEDBACK_BUFFER and GL_UNIFORM_BUFFER).

Regards
elFarto

It only accepts those four targets because those are the only four indexed targets.
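For illustration, indexed binding looks like this (ubo and ssbo are assumed to be suitably sized buffers; the binding indices are arbitrary):

glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo, 0, 256);             /* bytes [0, 256) at uniform binding 0 */
glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 2, ssbo, 1024, 4096); /* a sub-range at SSBO binding 2       */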

Then add a similar function for non-indexed targets. Although, I can’t see a problem with using GL_ELEMENT_ARRAY_BUFFER with an index of 0 and just changing the semantics of BindBuffer to only affect index 0.

Regards
elFarto

Let’s look at every non-indexed target:

GL_ARRAY_BUFFER: It has special interactions with glVertexAttribPointer. The range of the buffer accessed is based on the vertex format and the list of vertices rendered, none of which is specified by glVertexAttribPointer itself. Also, it has already been superseded by glBindVertexBuffer; notably, that function doesn’t take a range size, but it does take a stride.

GL_ELEMENT_ARRAY_BUFFER: Same as the previous; the range is defined by a glDraw*Elements call. You don’t want to have to re-bind the buffer just to render from different parts of it.
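That is, the draw call itself already selects the range (assuming count GL_UNSIGNED_SHORT indices starting byteOffset bytes into the bound element buffer):

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, (const void *)byteOffset);  /* the offset picks the range; no rebind */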

GL_COPY_READ_BUFFER and GL_COPY_WRITE_BUFFER: The glCopyBufferSubData function already takes a range to copy, so there’s no point in binding a range on top of that.
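For example (offsets and size assumed):

glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER,
                    readOffset, writeOffset, size);  /* the range is right here in the call */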

GL_PIXEL_UNPACK_BUFFER and GL_PIXEL_PACK_BUFFER: Same problem as the others: the functions that use the buffers (glTex(Sub)Image, glReadPixels) implicitly compute a range to copy from/to. They also take a start byte index. So again, there’s really no point.
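For instance, with a pixel pack buffer bound, the pointer argument is reinterpreted as a byte offset into the buffer:

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);  /* writes into pbo starting at offset 0 */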

GL_TEXTURE_BUFFER: This binding point has absolutely no semantics on it. Besides offering a hint that you intend to use the buffer with a buffer texture, there’s no reason to bind to it.
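Indeed, the actual attachment goes through the texture and takes the buffer name directly; the buffer binding point never enters into it:

glBindTexture(GL_TEXTURE_BUFFER, tex);         /* texture binding, not the buffer binding point */
glTexBuffer(GL_TEXTURE_BUFFER, GL_R32F, buf);  /* buffer passed by name, no glBindBuffer needed */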

GL_DRAW_INDIRECT_BUFFER and GL_DISPATCH_INDIRECT_BUFFER: Same as the others. The functions that use them already take a range.
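Same pattern (cmdOffset is an assumed byte offset locating a draw command within the bound indirect buffer):

glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect);
glDrawArraysIndirect(GL_TRIANGLES, (const void *)cmdOffset);  /* the offset into the buffer comes with the call */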

So any such range bindings would be pointless. At best, they’d be redundant information. At worst, you’d really just be passing parameters through global variables, which, as I understand it, is not considered a good aspect of the OpenGL API.

There’s simply no reason to do it.

One thing that was kind of a WTF moment for me is that glBindVertexBuffer takes the stride instead of glVertexAttribFormat; I would not mind a glVertexAttribFormatStride call whose stride value takes precedence over the one in glBindVertexBuffer. Oh well.
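Purely as a sketch, such a call might look like this (the name is the one wished for above; the signature is guessed here by extending glVertexAttribFormat, and is not real GL):

/* hypothetical: a non-zero stride here would take precedence over glBindVertexBuffer's stride */
void glVertexAttribFormatStride(GLuint attribindex, GLint size, GLenum type,
                                GLboolean normalized, GLuint relativeoffset, GLsizei stride);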

And as always: standardize DSA across the board. Pretty please. Making functionality in extensions that have been promoted to core depend on GL_EXT_direct_state_access is so not cool.

As MJK just recently put it here (albeit in a different context pointed to by Alfonse):

Go ARB!

Full DSA will never go core; the ARB are just not going to bring deprecated stuff like glMatrixLoadfEXT into a new core GL_VERSION.

Some of DSA has already gone core, such as the glProgramUniform calls. Some newer functionality - such as sampler objects - was specified with a DSA API from the outset. Other newer functionality - such as vertex attrib binding - removes the need for DSA.

Right now a cleanup of the texture object API is needed much more badly than DSA; there’s stuff in there going back to GL 1.0 and it’s a mess. It can be respecified in a DSA manner (or in a manner that removes the need for DSA) and that would be enough.

Full DSA will never go core; the ARB are just not going to bring deprecated stuff like glMatrixLoadfEXT into a new core GL_VERSION.

I didn’t say they should move GL_EXT_direct_state_access into core, did I? I said they should standardize DSA - obviously this requires that DSA is applicable to core APIs. I just don’t want new functionality, like GL_ARB_texture_storage, to partly depend on an extension that’ll never make it into core in its current form. And I really don’t see how cleaning up the tex object API and providing DSA functions are orthogonal.
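The GL_ARB_texture_storage case in point: the core entry point is bind-to-edit, while the DSA variant the extension defines only exists when EXT_direct_state_access is also supported (levels, w and h assumed):

glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);             /* core GL 4.2 path                         */
glTextureStorage2DEXT(tex, GL_TEXTURE_2D, levels, GL_RGBA8, w, h); /* DSA path, gated on EXT_direct_state_access */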

This was discussed at some length during the aftermath of 4.3; indications are that this is the kind of usage that hardware prefers. It also makes the API more robust by preventing you from doing crazy things like this:

glBindBuffer (GL_ARRAY_BUFFER, buffer);
glVertexAttribPointer (0, 3, GL_FLOAT, GL_FALSE, 20, (void *) 0);   /* stride 20 for attrib 0...                            */
glVertexAttribPointer (1, 2, GL_FLOAT, GL_FALSE, 17, (void *) 12);  /* ...but stride 17 for attrib 1, from the same buffer! */

By explicitly linking stride to a buffer, you ensure that all attribs coming from that buffer are specified with the same stride.
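With the 4.3 vertex attrib binding API, the stride lives on the buffer bind, so both attribs below necessarily share it (assuming interleaved vec3 position + vec2 texcoord at 20 bytes per vertex):

glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);    /* position, relative offset 0   */
glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, 12);   /* texcoord, relative offset 12  */
glVertexAttribBinding(0, 0);                          /* both attribs read binding 0   */
glVertexAttribBinding(1, 0);
glBindVertexBuffer(0, buffer, 0, 20);                 /* the one stride for that binding */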

The main use case I have for having the stride not linked to the buffer is when a single buffer object holds data for multiple meshes of different formats. In that use case, being able to specify the stride at format specification is nice. On the other hand, one could claim I am being lazy; how hard is it to bind again, eh? Nevertheless, the current API somewhat prevents the foot murder; I just advocate adding that additional call for those that use it responsibly (tongue firmly in cheek).

But seriously, having a glVertexAttribFormatStride would be nice, like a cherry on a sundae [and this is what the NVIDIA bindless interface does as well].

The same buffer can be bound to different indices, with a different stride for each, though; the API doesn’t actually prevent that.
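For example (meshBOffset is just an assumed offset to a second mesh's data in the same buffer):

glBindVertexBuffer(0, buffer, 0, 20);            /* binding 0: one layout                   */
glBindVertexBuffer(1, buffer, meshBOffset, 32);  /* binding 1: same buffer, different stride */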