
GL_ARRAY_BUFFER with glBindBufferBase



Groovounet
11-01-2010, 07:26 AM
Hi,

I'm still looking for a way to change the array buffer independently of the vertex layout. (GL_XXX_vertex_array_buffer_bindings :D)

A very easy way to make this possible is to allow the GL_ARRAY_BUFFER target with glBindBufferBase.


void glBindBufferBase(GLenum target, GLuint index, GLuint buffer);

void glVertexAttribPointer(
    GLuint index,
    GLint size,
    GLenum type,
    GLboolean normalized,
    GLsizei stride,
    const GLvoid * pointer);

Each array buffer would be associated with the vertex array format set by glVertexAttribPointer through the "index" slot.

glBindBufferBase(GL_ARRAY_BUFFER, 0, Buffer) would be equivalent to glBindBuffer(GL_ARRAY_BUFFER, Buffer).

The current behavior and the new behavior need to coexist for backward compatibility, so we need a way to select the behavior.

One option is to reuse glEnableClientState and create a new value to enable, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDINGS. This strategy has a precedent with glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV) from NVIDIA bindless graphics.
Another option is to create a new function glVertexArrayParameteri and a new boolean value GL_VERTEX_ARRAY_BUFFER_BINDINGS, default value being GL_FALSE.

It would also be an occasion to bring DSA for vertex arrays into core.

Expected use:

glGenVertexArrays(1, &VertexArray);
glBindVertexArray(VertexArray);
glVertexArrayParameteri(VertexArray, GL_VERTEX_ARRAY_BUFFER_BINDINGS, GL_TRUE);
glVertexAttribPointer(
    Index0,
    Size0,
    Type0,
    GL_FALSE,
    Stride0,
    Offset0);
glVertexAttribPointer(
    Index1,
    Size1,
    Type1,
    GL_FALSE,
    Stride1,
    Offset1);
glBindVertexArray(0);

glBindBufferBase(GL_ARRAY_BUFFER, 0, Buffer0);
glBindBufferBase(GL_ARRAY_BUFFER, 1, Buffer1);

The DSA version:


glGenVertexArrays(1, &VertexArray);
glVertexArrayParameteri(VertexArray, GL_VERTEX_ARRAY_BUFFER_BINDINGS, GL_TRUE);
glVertexArrayVertexAttrib*Offset(
    VertexArray,
    Index0,
    Size0,
    Type0,
    Stride0,
    Offset0);
glVertexArrayVertexAttrib*Offset(
    VertexArray,
    Index1,
    Size1,
    Type1,
    Stride1,
    Offset1);

glBindBufferBase(GL_ARRAY_BUFFER, 0, Buffer0);
glBindBufferBase(GL_ARRAY_BUFFER, 1, Buffer1);


Buffer0 and Buffer1 could refer to the same buffer name.
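To make the state split concrete, here is a minimal C model of what the proposal implies: the VAO stores only the per-attribute format plus an offset, while an indexed table of buffer bindings is swapped independently by glBindBufferBase. All type and function names below are hypothetical illustrations of the proposal, not real GL.

```c
#include <stddef.h>

#define MAX_ATTRIBS 16

/* Per-attribute layout, as set by glVertexAttribPointer in the proposal:
 * everything except the buffer name lives in the VAO. */
typedef struct {
    int    size;       /* component count */
    int    type;       /* GL type enum, kept as a plain int here */
    int    normalized;
    size_t stride;
    size_t offset;     /* the "pointer" argument, reinterpreted as an offset */
} VertexLayout;

typedef struct {
    VertexLayout layout[MAX_ATTRIBS];  /* static once the VAO is built */
    unsigned     buffer[MAX_ATTRIBS];  /* mutable per draw */
} VertexArrayState;

/* Model of glBindBufferBase(GL_ARRAY_BUFFER, index, buffer): rebinding a
 * buffer touches only the buffer table, never the layout. */
static void bind_buffer_base(VertexArrayState *vao, int index, unsigned buffer)
{
    vao->buffer[index] = buffer;
}
```

The point of the model is that rebinding a buffer slot cannot invalidate the layout, which is what lets the VAO be treated as a static object.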

kRogue
11-03-2010, 03:17 PM
First a disclaimer (and warning about my opinions) before one reads more:

I do not like API functions that do different things based on state set elsewhere; I prefer a function to do one thing (and do it clearly) rather than many different things.

With that warning out of the way, I feel beyond uneasy about adding new functionality to an existing API entry point, together with a state bit that changes its behavior.

My take on the issue is the following. Introduce two (well, really three) new functions:


/*!
Sets the source for a vertex attribute.

\param index vertex attribute index, as in glVertexAttribPointer
\param buffer_object name of buffer object to source from, 0 indicates to
source from client memory (for those compatibility profile moments in our lives)
\param offset offset into buffer object to start sourcing data, i.e. same role as the last argument to glVertexAttribPointer
*/
glVertexAttribSource(GLuint index, GLuint buffer_object, const GLvoid *offset);

/*!
Sets the format for a vertex attribute
\param index vertex attribute index, as in glVertexAttribPointer
\param size number of components specified, same meaning as in glVertexAttribPointer
\param type GL enumeration for type of data, same meaning as in glVertexAttribPointer
\param normalized if type specifies an integer type and this is GL_TRUE, then values are normalized, same meaning as in glVertexAttribPointer
\param stride distance in bytes between elements, same meaning as in glVertexAttribPointer
*/
glVertexAttribFormat(GLuint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride);

//also:
glVertexAttribFormatI(GLuint index, GLint size, GLenum type, GLsizei stride);



and then the spec would read that glVertexAttribPointer(index, size, type, normalized, stride, ptr) is equivalent to:



GLint buffer_object;

glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer_object);
glVertexAttribSource(index, buffer_object, ptr);
glVertexAttribFormat(index, size, type, normalized, stride);


This works seamlessly (as far as I can tell), adds no extra state, and advertises that two things are going on: setting the source and setting the format... it also avoids the ick of "bind to use".
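The decomposition can be checked with a few lines of plain C: glVertexAttribPointer touches exactly the union of what Source and Format touch, and nothing else. The names here are hypothetical sketches of the proposal, not real GL.

```c
#include <stddef.h>

/* All the per-attribute state the two proposed calls cover between them. */
typedef struct {
    unsigned buffer;   /* source: buffer object name (0 = client memory) */
    size_t   offset;   /* source: start offset into the buffer */
    int      size;     /* format: component count */
    int      type;     /* format: GL type enum */
    int      normalized;
    size_t   stride;
} Attrib;

static void attrib_source(Attrib *a, unsigned buffer, size_t offset)
{ a->buffer = buffer; a->offset = offset; }

static void attrib_format(Attrib *a, int size, int type, int normalized,
                          size_t stride)
{ a->size = size; a->type = type; a->normalized = normalized; a->stride = stride; }

/* glVertexAttribPointer expressed as the two proposed calls, with the
 * currently bound GL_ARRAY_BUFFER passed in explicitly. */
static void attrib_pointer(Attrib *a, unsigned bound_buffer, int size, int type,
                           int normalized, size_t stride, size_t offset)
{
    attrib_source(a, bound_buffer, offset);
    attrib_format(a, size, type, normalized, stride);
}
```

A pointer-only rebind is then just attrib_source, which leaves every format field untouched.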

Groovounet
11-05-2010, 07:01 AM
I have submitted this idea several times since the OpenGL 3.0 release, just with different implementations.

I rather prefer your implementation as well, and I believe I submitted something similar once. This new proposal was based on a minimal API change, something the ARB usually tries to do...

Oh well, the main thing is to keep this topic active ;)

kRogue
11-06-2010, 01:24 AM
There is an issue I see with my suggestion:
what if you just want to change the buffer offset and not the buffer object?

Alfonse Reinheart
11-06-2010, 02:00 AM
If you just want to change the buffer offset and not the buffer object

Is that a problem? The implementation is going to have to check the buffer object's length to see if the offset is valid anyway. So you may as well just pass the buffer object itself along.
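Alfonse's point can be made concrete: whether a (buffer, offset) pair is valid depends on the buffer's size, so the driver has to look at the buffer object either way. Here is a minimal sketch of the bounds check a driver might perform (all names hypothetical, sizes in bytes):

```c
#include <stddef.h>

/* Hypothetical per-attribute description, roughly mirroring the arguments
 * of glVertexAttribPointer. */
typedef struct {
    size_t components;     /* e.g. 3 for a vec3 */
    size_t component_size; /* e.g. sizeof(float) */
    size_t stride;         /* 0 means tightly packed */
} AttribFormat;

/* Returns 1 if reading `count` vertices for this attribute, starting at
 * `offset` into a buffer of `buffer_size` bytes, stays in bounds. */
static int attrib_in_bounds(const AttribFormat *f, size_t offset,
                            size_t count, size_t buffer_size)
{
    if (count == 0)
        return 1;
    size_t elem   = f->components * f->component_size;
    size_t stride = f->stride ? f->stride : elem;
    /* The last vertex starts at offset + (count-1)*stride and occupies
     * `elem` bytes; the read is valid if that end stays within the buffer. */
    size_t end = offset + (count - 1) * stride + elem;
    return end <= buffer_size;
}
```

Since the check needs `buffer_size`, passing the buffer object along with the offset costs the implementation nothing extra.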

CrazyButcher
11-06-2010, 05:38 AM
Very good proposals. We really need something along the lines of NVIDIA's bindless graphics, where we can do lightweight changes of pointers (buffer + offset).

kRogue's functions would be ideal, but Groovounet's fits better with what we have (a fix-up to vertex_array_object, which is somewhat pointless in its current state).

aqnuep
11-10-2010, 04:44 AM
I would really go with kRogue's proposal.
It would be not just more convenient but would also reflect how the underlying hardware works. Changing the buffer pointers (glVertexAttribSource) is a lightweight operation that could happen often. Changing the format settings (glVertexAttribFormat), on the other hand, is a heavier operation, and this way that is made explicit.
AFAIK current drivers do in fact turn glVertexAttribPointer calls into lightweight operations when they observe that the format did not change. However, this is just an additional burden on driver developers, and it leaves application developers only able to assume that pointer-only changes will be optimized into lightweight operations.
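The driver heuristic described above can be sketched in a few lines: cache the last format per attribute and take the heavy path only when the format actually changes. This is a toy model with hypothetical names (real drivers are obviously far more involved), counting how often each path runs:

```c
#include <stddef.h>

typedef struct {
    int    size;
    int    type;
    int    normalized;
    size_t stride;
} Format;

typedef struct {
    Format cached;
    int    have_cached;
    int    heavy_updates;  /* times the full format path ran */
    int    light_updates;  /* times only the pointer was rebound */
} AttribState;

/* Model of how a driver might handle one glVertexAttribPointer call. */
static void set_pointer(AttribState *s, Format f)
{
    if (s->have_cached &&
        s->cached.size == f.size && s->cached.type == f.type &&
        s->cached.normalized == f.normalized && s->cached.stride == f.stride) {
        s->light_updates++;        /* format unchanged: cheap pointer update */
    } else {
        s->cached = f;
        s->have_cached = 1;
        s->heavy_updates++;        /* format changed: full revalidation */
    }
}
```

The split API would make this distinction explicit at the call site instead of leaving it to a comparison the driver must perform on every call.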

Groovounet
11-10-2010, 05:26 AM
Saying that changing the buffer pointer is lightweight is maybe a bit optimistic; I would actually say that both have their cost.

I would actually say that VAOs as they are today are a burden to driver developers and a burden to graphics programmers, because the way hardware implements this (separate pointers and vertex layout) is actually what fits graphics programmers.

The trouble for me with glVertexAttribSource and glVertexAttribFormat is that it doesn't really decouple the data from the vertex layout, and hence doesn't really provide a guideline saying "sorting per vertex layout is good". Moreover, using glBindBufferBase allows the VAO to be considered a static object instead of a mutable object... That sounds better to me, but probably "who cares?"

Anyway, the ARB is certainly able to design the right API for it, I only want to make sure that this topic remains a community concern! ;)

aqnuep
11-10-2010, 06:04 AM
Anyway, the ARB is certainly able to design the right API for it, I only want to make sure that this topic remains a community concern! ;)

It definitely is. However, I myself would rather go in the other direction: instead of optimizing vertex array config changes, I aim to eliminate the need for vertex array config changes. Of course, this is sometimes not so simple, but texture buffer objects can sometimes help...

Groovounet
11-10-2010, 08:31 AM
instead of optimizing vertex array config changes, I aim to eliminate the need for vertex array config changes

I actually think that sorting "per vertex layout" and "per set of vertex buffers" are both reasonable cases for various scenarios, depending on renderer design choices and buffer memory management. For a budgeted vertex array memory, a big buffer and clever memory management based on data streaming with glMapBufferRange might do very well with updating the vertex layout more often, but this is already somewhat possible, even if this proposal improves it slightly.
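The glMapBufferRange-style streaming mentioned above usually boils down to sub-allocating aligned ranges from one big buffer and wrapping when full. A minimal sketch of that offset arithmetic (a toy ring allocator; the alignment and sizes are illustrative assumptions, and real code must fence or orphan before reusing wrapped memory):

```c
#include <stddef.h>

typedef struct {
    size_t capacity;  /* total buffer size in bytes */
    size_t head;      /* next free byte */
} StreamBuffer;

/* Reserve `bytes`, aligned to `align` (a power of two). Returns the offset
 * to pass to glMapBufferRange and to the attribute pointer/offset call,
 * wrapping to 0 when the request no longer fits. */
static size_t stream_alloc(StreamBuffer *sb, size_t bytes, size_t align)
{
    size_t offset = (sb->head + align - 1) & ~(align - 1);
    if (offset + bytes > sb->capacity)
        offset = 0;                 /* wrap: orphan or fence in real code */
    sb->head = offset + bytes;
    return offset;
}
```

With such a scheme the vertex layout stays fixed and only the offsets change per frame, which is exactly the pointer-only update both proposals try to make cheap.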

On the other hand, a lot of different array buffer sets sharing the same vertex layout is just as common a case.