Vertex Specification


Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.

Theory[edit]

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

Vertex Stream[edit]

In order to render at all, you must be using a shader program or program pipeline which includes a Vertex Shader. The VS's user-defined input variables define the list of expected Vertex Attributes for that shader, with each attribute mapped to one user-defined input variable. This set of attributes defines what values the vertex stream must provide to properly render with this shader.

For each attribute in the shader, you must provide an array of data for that attribute. All of these arrays must have the same number of elements. Note that these arrays are a bit more flexible than C arrays, but overall work the same way.

The order of vertices in the stream is very important; this order defines how OpenGL will process and render the Primitives the stream generates. There are two ways of rendering with arrays of vertices. You can generate a stream in the array's order, or you can use a list of indices to define the order. The indices control what order the vertices are received in, and indices can specify the same array element more than once.

Let's say you have the following array of arrays containing 3d position data belonging to 3 vertices:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }

If you use the above array as a stream, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify another list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}

If we render using the above attribute array, with vertices selected by this index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }

The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.
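
As an illustration, here is a minimal C++ sketch (the helper function is hypothetical, not part of OpenGL) of the gather operation that indexed rendering performs conceptually:

#include <array>
#include <vector>

// Illustration only: "expand" an indexed position array into the stream of
// vertices that OpenGL would conceptually receive.
using Vec3 = std::array<float, 3>;

std::vector<Vec3> ExpandByIndex(const std::vector<Vec3>& positions,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> stream;
    stream.reserve(indices.size());
    for (unsigned index : indices)
        stream.push_back(positions[index]); // the same element may be fetched repeatedly
    return stream;
}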

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }

The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Note: Oftentimes, authoring tools will have attribute arrays, but each attribute array will have its own separate index array. This is done to make each attribute's array smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Only one index array can be used, and each attribute array is indexed with the same index. If your mesh data has multiple index arrays, you must convert the format exported by your authoring tool into the format described above.
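
If your source data does use one index array per attribute, a common fix is to generate a new vertex for each unique combination of per-attribute indices. Here is a rough C++ sketch of that conversion, assuming hypothetical position and texture-coordinate data (the struct names and layout are illustrative, not from any particular tool):

#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical input: separate index arrays per attribute, one (posIdx, uvIdx)
// pair per face corner, as many authoring formats export.
struct MultiIndexMesh {
    std::vector<std::array<float, 3>> positions;
    std::vector<std::array<float, 2>> texCoords;
    std::vector<std::pair<std::uint32_t, std::uint32_t>> corners;
};

// Output: one attribute array per attribute, sharing a single index array.
struct SingleIndexMesh {
    std::vector<std::array<float, 3>> positions;
    std::vector<std::array<float, 2>> texCoords;
    std::vector<std::uint32_t> indices;
};

SingleIndexMesh Reindex(const MultiIndexMesh& in)
{
    SingleIndexMesh out;
    std::map<std::pair<std::uint32_t, std::uint32_t>, std::uint32_t> cache;
    for (const auto& corner : in.corners) {
        auto it = cache.find(corner);
        if (it == cache.end()) {
            // First time this combination appears: emit a new vertex that
            // duplicates the referenced position and texture coordinate.
            std::uint32_t newIndex = static_cast<std::uint32_t>(out.positions.size());
            out.positions.push_back(in.positions[corner.first]);
            out.texCoords.push_back(in.texCoords[corner.second]);
            it = cache.emplace(corner, newIndex).first;
        }
        out.indices.push_back(it->second);
    }
    return out;
}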

Primitives[edit]

The above stream is not enough to actually draw anything; you must also tell OpenGL how to interpret this stream. And this means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of, for example, 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret the same count in different ways: 12 vertices can be 4 independent triangles (every 3 vertices form a triangle), or 10 dependent triangles (every group of 3 sequential vertices in the stream forms a triangle), and so on.

The main article on Primitives has the details.

Vertex Array Object[edit]

Vertex Array Object
Core in version 4.6
Core since version 3.0
Core ARB extension ARB_vertex_array_object

A Vertex Array Object (VAO) is an OpenGL Object that stores all of the state needed to supply vertex data (with one minor exception noted below). It stores the format of the vertex data as well as the Buffer Objects (see below) providing the vertex data arrays. Note that a VAO merely references the buffers, it does not copy or freeze their contents; if referenced buffers are modified later, those changes will be seen when using the VAO.

As OpenGL Objects, VAOs have the usual creation, destruction, and binding functions: glGenVertexArrays, glDeleteVertexArrays, and glBindVertexArray. The latter is different, in that there is no "target" parameter; there is only one target for VAOs, and glBindVertexArray binds to that target.

Note: VAOs cannot be shared between OpenGL contexts.

Vertex attributes are numbered from 0 to GL_MAX_VERTEX_ATTRIBS - 1. Each attribute can be enabled or disabled for array access. When an attribute's array access is disabled, any reads of that attribute by the vertex shader will produce a constant value (see below) instead of a value pulled from an array.

A newly-created VAO has array access disabled for all attributes. Array access is enabled by binding the VAO in question and calling:

void glEnableVertexAttribArray(GLuint index​);

There is a similar glDisableVertexAttribArray function to disable an enabled array.
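
For example, a minimal sketch of creating a VAO and enabling two attribute arrays (the attribute indices 0 and 1 are arbitrary choices; format and buffer setup are described below):

// Create and bind a VAO, then enable array access for attributes 0 and 1.
GLuint vao = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glEnableVertexAttribArray(0); // e.g. position
glEnableVertexAttribArray(1); // e.g. texture coordinate

// ... specify attribute formats and source buffers here (see below) ...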

Remember: all of the state below is part of the VAO's state, except where it is explicitly stated that it is not. A VAO must be bound when calling any of those functions, and any changes caused by these functions will be captured by the VAO.

The compatibility OpenGL profile makes VAO object 0 a default object. The core OpenGL profile makes VAO object 0 not an object at all. So if VAO 0 is bound in the core profile, you should not call any function that modifies VAO state. This includes binding the GL_ELEMENT_ARRAY_BUFFER with glBindBuffer.

Vertex Buffer Object[edit]

A Vertex Buffer Object (VBO) is the common term for a normal Buffer Object when it is used as a source for vertex array data. It is no different from any other buffer object, and a buffer object used for Transform Feedback or asynchronous pixel transfers can be used as source values for vertex arrays.

There are two ways to use buffer objects as the source for vertex data. This section describes the combined format method. A method that separates the format specification from buffers is described below. The two are functionally equivalent, but the separate method is easier to use and understand; however, it requires OpenGL 4.3 or ARB_vertex_attrib_binding.

The format and source buffer for an attribute array can be set by doing the following. First, the buffer that the attribute data comes from must be bound to GL_ARRAY_BUFFER.

Note: A call to glBindBuffer to set the GL_ARRAY_BUFFER binding does NOT modify the current VAO's state!

Once the buffer is bound, call one of these functions:

 void glVertexAttribPointer( GLuint index​, GLint size​, GLenum type​,
   GLboolean normalized​, GLsizei stride​, const void *offset​);
 void glVertexAttribIPointer( GLuint index​, GLint size​, GLenum type​,
   GLsizei stride​, const void *offset​ );
 void glVertexAttribLPointer( GLuint index​, GLint size​, GLenum type​,
   GLsizei stride​, const void *offset​ );

All of these functions do more or less the same thing: setting the format and buffer storage information for attribute index index​. The difference between them will be discussed later. Note that the last function is only available on OpenGL 4.1 or ARB_vertex_attrib_64bit.

These functions say that the attribute index index​ will get its attribute data from whatever buffer object is currently bound to GL_ARRAY_BUFFER. It is vital to understand that this association is made when this function is called. For example, let's say we do this:

glBindBuffer(GL_ARRAY_BUFFER, buf1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

The first line binds buf1 to the GL_ARRAY_BUFFER binding. The second line says that attribute index 0 gets its vertex array data from buf1, because that's the buffer that was bound to GL_ARRAY_BUFFER when the glVertexAttribPointer was called.

The third line binds the buffer object 0 to the GL_ARRAY_BUFFER binding. What does this do to the association between attribute 0 and buf1?

Nothing! Changing the GL_ARRAY_BUFFER binding changes nothing about vertex attribute 0. Only calls to glVertexAttribPointer can do that.

Think of it like this. glBindBuffer sets a global variable, then glVertexAttribPointer reads that global variable and stores it in the VAO. Changing that global variable after it's been read doesn't affect the VAO. You can think of it that way because that's exactly how it works.

This is also why GL_ARRAY_BUFFER is not VAO state; the actual association between an attribute index and a buffer is made by glVertexAttribPointer.

Note that it is an error to call the glVertexAttribPointer functions if 0 is currently bound to GL_ARRAY_BUFFER.
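
Putting this together, here is a sketch of a typical setup: upload a tightly packed array of 3-component float positions to a buffer object and attach it to attribute 0 of the currently bound VAO (the data and variable names are illustrative):

const GLfloat positions[] = {
    1.0f, 1.0f, 1.0f,
    0.0f, 0.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
};

GLuint buf = 0;
glGenBuffers(1, &buf);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
// The association between attribute 0 and buf is made by this call:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glBindBuffer(GL_ARRAY_BUFFER, 0); // does not undo that association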

Vertex format[edit]

The glVertexAttribPointer functions state where an attribute index gets its array data from. But they also define how OpenGL should interpret that data. Thus, these functions conceptually do two things: set the buffer object information on where the data comes from and define the format of that data.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Attributes in the Vertex Shader can be declared as a floating-point GLSL type (such as float or vec4), an integral type (such as uint or ivec3), or a double-precision type (such as double or dvec4). Double-precision attributes are only available in OpenGL 4.1 or ARB_vertex_attrib_64bit.

The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which glVertexAttribPointer function you use. For floating-point attributes, you must use glVertexAttribPointer. For integer (both signed and unsigned), you must use glVertexAttribIPointer. And for double-precision attributes, where available, you must use glVertexAttribLPointer.

Each attribute index represents a vector of some type, from 1 to 4 components in length. The size​ parameter of the glVertexAttribPointer functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4. Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the missing XYZW components.

The latter is not true for double-precision inputs (OpenGL 4.1 or ARB_vertex_attrib_64bit). If the shader attribute has more components than the provided value, the extra components will have undefined values.

Component type[edit]

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttribPointer functions take different type​s. Here is a list of the types and their meanings for each function:

glVertexAttribPointer:

  • Floating-point types. normalized​ must be GL_FALSE
    • GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf.
    • GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat.
    • GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble.
    • GL_FIXED: A 32-bit two's complement value in 16.16 fixed-point format. Equivalent to GLfixed.
  • Integer types; these are converted to floats automatically. If normalized​ is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized​ is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f, regardless of the size of the integer).
    • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
    • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
    • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
    • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
    • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
    • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.
    • GL_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. Each individual packed value is a two's complement signed integer, but the overall bitfield is unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, stored in reverse order: the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepths for the packed fields are 2, 10, 10, and 10, stored in reverse order: the least significant 10 bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_10F_11F_11F_REV: Requires OpenGL 4.4 or ARB_vertex_type_10f_11f_11f_rev. This represents a 3-element vector of floats, packed into a 32-bit unsigned integer. The bitdepth for the packed fields is 10, 11, 11, but in reverse order. So the lowest 11 bits are the first component, the next 11 are the second, and the last 10 are the third. These floats are the low bitdepth floats, packed exactly like the image format GL_R11F_G11F_B10F. If you use this, the size​ must be 3.

glVertexAttribIPointer: This function only feeds attributes declared in GLSL as signed or unsigned integers, or vectors of the same.

  • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte.
  • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte.
  • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort.
  • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort.
  • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint.
  • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint.

glVertexAttribLPointer: This function only feeds attributes declared in GLSL as double or vectors of the same.

  • GL_DOUBLE: A 64-bit double-precision float value. Equivalent to GLdouble.

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
-----------------------------------------------------------------------------------------------

Here is a visual demonstration of the ordering of the 10F_11F_11F_REV type:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|             Z              |              Y                 |               X               |
-----------------------------------------------------------------------------------------------
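
For illustration, here is a C++ sketch (the helper function is hypothetical) of packing a signed normalized vector, such as a normal plus a W component, into the GL_INT_2_10_10_10_REV layout shown above. Such a packed value would then be supplied with glVertexAttribPointer using a size of 4, a type of GL_INT_2_10_10_10_REV, and normalized set to GL_TRUE:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack x/y/z in [-1, 1] into 10-bit signed normalized fields and w in [-1, 1]
// into the 2-bit field, matching the 2_10_10_10_REV bit layout shown above.
std::uint32_t PackSnorm_2_10_10_10_rev(float x, float y, float z, float w)
{
    auto pack = [](float v, float scale) -> std::uint32_t {
        float clamped = std::max(-1.0f, std::min(1.0f, v));
        std::int32_t i = static_cast<std::int32_t>(std::round(clamped * scale));
        return static_cast<std::uint32_t>(i); // keep the two's complement bit pattern
    };
    return  (pack(x, 511.0f) & 0x3FFu)
         | ((pack(y, 511.0f) & 0x3FFu) << 10)
         | ((pack(z, 511.0f) & 0x3FFu) << 20)
         | ((pack(w,   1.0f) & 0x3u)   << 30);
}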

D3D compatibility[edit]

D3D Compatible Format
Core in version 4.6
Core since version 3.2
Core ARB extension ARB_vertex_array_bgra

When using glVertexAttribPointer, and only this function (not the other forms), the size​ field can be a number 1-4, but it can also be GL_BGRA.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with certain Direct3D vertex formats. Because of that, this special size​ can only be used in conjunction with:

  • type​ must be GL_UNSIGNED_BYTE, GL_INT_2_10_10_10_REV​ or GL_UNSIGNED_INT_2_10_10_10_REV​
  • normalized​ must be GL_TRUE

So you cannot pass non-normalized values with this special size​.

Note: This special mode should only be used if you have data that is formatted in D3D's style and you need to use it in your GL application. Don't bother otherwise; you will gain no performance from it.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |
-----------------------------------------------------------------------------------------------

Notice how X comes second and Z comes last. X is equivalent to R and Z is equivalent to B, so reading from the most significant bits downward, the components come in the reverse of BGRA order: ARGB.

Vertex buffer offset and stride[edit]

The vertex format information above tells OpenGL how to interpret the data. The format says how big each vertex is in bytes and how to convert it into the values that the attribute in the vertex shader receives.

But OpenGL needs two more pieces of information before it can find the data. It needs a byte offset from the start of the buffer object to the first element in the array. So your arrays don't always have to start at the front of the buffer object. It also needs a stride, which represents how many bytes it is from the start of one element to the start of another.

The offset​ defines the buffer object offset. Note that it is a parameter of type const void * rather than an integer of some kind. This is part of why the function is called glVertexAttribPointer: in legacy OpenGL, this parameter was an actual pointer to client memory.

So you will need to cast the integer offset into a pointer. In C, this is done with a simple cast: (void*)(byteOffset). In C++, this can be done as such: reinterpret_cast<void*>(byteOffset).

The stride​ specifies the number of bytes from the start of one vertex's data to the start of the next. If it is set to 0, OpenGL assumes that the vertex data is tightly packed and computes the stride from the other parameters. So if you set the size​ to 3 and the type​ to GL_FLOAT, OpenGL will compute a stride of 12 (4 bytes per float, and 3 floats per attribute).

Interleaved attributes[edit]

The main purpose of the stride​ parameter is to allow interleaving between different attributes. This is conceptually the difference between these two C++ definitions:

struct StructOfArrays
{
  GLfloat positions[VERTEX_COUNT * 3];
  GLfloat normals[VERTEX_COUNT * 3];
  GLubyte colors[VERTEX_COUNT * 4];
};

StructOfArrays structOfArrays;

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};

Vertex vertices[VERTEX_COUNT];

structOfArrays is a struct that contains several arrays of elements. Each array is tightly packed, but independent of one another. vertices is a single array, where each element of the array is an independent vertex.

If we have a buffer object which has had vertices uploaded to it, such that baseOffset is the byte offset to the start of this data, we can use the stride​ parameter to allow OpenGL to access it:

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Note that each attribute uses the same stride: the size of the Vertex struct. C/C++ requires that this struct be padded, where appropriate, such that you can get the next element in an array by adding that size in bytes to a pointer (ignoring pointer arithmetic, which does all of this for you). Thus, the size of the Vertex structure is exactly the number of bytes from the start of one element to the next, for each attribute.

The offsetof macro computes the byte offset of the given field within the given struct. This is added to baseOffset, so that each attribute's offset points to where that field's data begins for the first vertex in the buffer.

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with those that change at the same time.

Index buffers[edit]

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object bound to the GL_ELEMENT_ARRAY_BUFFER binding point. When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all drawing commands of the form gl*Draw*Elements* will use indices from that buffer. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.

The index buffer binding is stored within the VAO. If no VAO is bound, then you cannot bind a buffer object to GL_ELEMENT_ARRAY_BUFFER.
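
For example, a minimal sketch of creating an index buffer for the currently bound VAO and drawing with it (the index data and counts here are illustrative):

const GLushort indices[] = { 2, 1, 0, 2, 1, 2 };

GLuint ibo = 0;
glGenBuffers(1, &ibo);
// This binding is stored in the currently bound VAO:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Later, with the same VAO bound:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, nullptr);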

Instanced arrays[edit]

Instanced arrays
Core in version 4.6
Core since version 3.3
ARB extension ARB_instanced_arrays

Normally, vertex attribute arrays are indexed based on the index buffer, or, when doing array rendering, once per vertex from the start point to the end. However, when doing instanced rendering, it is often useful to have an alternative means of getting per-instance data to the shader besides fetching it directly via a Uniform Buffer Object, a Buffer Texture, or some other such mechanism.

It is possible to have one or more attribute arrays indexed, not by the index buffer or direct array access, but by the instance count. This is done via this function:

void glVertexAttribDivisor(GLuint index​, GLuint divisor​);

The index​ is the attribute index to set. If divisor​ is zero, then the attribute acts like normal, being indexed by the array or index buffer. If divisor​ is non-zero, then the current instance is divided by this divisor, and the result of that is used to access the attribute array.

The "current instance" mentioned above starts at the base instance for instanced rendering, increasing by 1 for each instance in the draw call. Note that this is not how the gl_InstanceID is computed for Vertex Shaders; that is not affected by the base instance. If no base instance is specified, then the current instance starts with 0.

This is generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method in some respects. OpenGL implementations usually offer a fairly restricted number of vertex attributes (16 or so), and you will need some of these for the actual per-vertex data. So that leaves less room for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.

However, that should be plenty for a quaternion orientation and a position, for a simple transformation. That would even leave one float (the position only needs to be 3D) to provide a fragment shader an index to access an Array Texture.
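
A minimal sketch of feeding a per-instance 4-component color through attribute 3 (the attribute index, buffer name, and counts are arbitrary choices for illustration):

// Attribute 3 advances once per instance instead of once per vertex.
glBindBuffer(GL_ARRAY_BUFFER, perInstanceColorBuffer); // hypothetical buffer of per-instance vec4 colors
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
glVertexAttribDivisor(3, 1); // advance this array once per instance

// Render 100 instances; each instance reads one color from the array.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, 100);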

Separate attribute format[edit]

Separate attribute format
Core in version 4.6
Core since version 4.3
Core ARB extension ARB_vertex_attrib_binding

glVertexAttribPointer and its variations are nice, but they unify two separate concepts into one function (from a hardware perspective): the vertex format for an attribute array, and the source data for that array. These concepts can be separated, allowing the user to specify the format of a vertex attribute separately from the source buffer. This also makes it easy to change the buffer binding for multiple attributes, since multiple attributes can share the same buffer binding point.

This separation is achieved by splitting the state into two pieces: a number of vertex buffer binding points, and a number of vertex format records.

The buffer binding points aggregate the following data:

  • The source buffer object.
  • The base byte offset into the buffer object for all vertex attributes that pull data from this binding point.
  • The byte stride for all vertex attributes that pull data from this binding point.
  • The instance divisor, which is used for all vertex attributes that pull data from this binding point.

The vertex format consists of:

  • Which attributes are enabled/disabled (still controlled by glEnableVertexAttribArray).
  • The size, type and normalization of the vertex attribute data.
  • The buffer binding point it is associated with.
  • A byte offset from the base offset of its associated buffer binding point to where its vertex data starts.

The functions that set the buffer binding point data are:

void glBindVertexBuffer(GLuint bindingindex, GLuint buffer, GLintptr offset, GLsizei stride);

void glVertexBindingDivisor(GLuint bindingindex​, GLuint divisor​);

glBindVertexBuffer is kind of like glBindBufferRange, but it is specifically intended for vertex buffer objects. The bindingindex​ is, as the name suggests, not a vertex attribute. It is a binding index, which can range from 0 to GL_MAX_VERTEX_ATTRIB_BINDINGS - 1. This will almost certainly be 16.

buffer​ is the buffer object that is being bound to this binding index. Note that there is no need to bind the buffer to GL_ARRAY_BUFFER; the function takes the buffer object directly. offset​ is a byte offset from the beginning of the buffer to where the vertex data associated with this binding begins. stride​ is the byte offset from one vertex to the next.

Notice that the stride is uncoupled from the vertex format itself here. Also, a stride​ of 0 no longer tells OpenGL to automatically compute the stride. Since OpenGL doesn't know the data's format, it cannot compute the stride. So a stride of 0 really means a stride of 0.

In OpenGL 4.4, stride​ is prevented from being larger than GL_MAX_VERTEX_ATTRIB_STRIDE. This is an implementation-defined limitation, but it will be no less than 2048.

Warning: If you're only using OpenGL 4.3, you are advised to observe this limitation anyway, even though you don't strictly have to and can't query it. Just keep your strides under 2048, which shouldn't be a problem for normal vertex data arrangements.

glVertexBindingDivisor is much like glVertexAttribDivisor, except applied to a binding index instead of an attribute index. All vertex attributes associated with this binding index will use the same divisor.

The functions that affect vertex attribute formats are:

void glVertexAttribFormat(GLuint attribindex​, GLint size​, GLenum type​, GLboolean normalized​, GLuint relativeoffset​);

void glVertexAttribIFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

void glVertexAttribLFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

The glVertexAttribFormat functions work similarly to their glVertexAttribPointer counterparts (it even takes GL_BGRA for the size in the same way as the original). attribindex​ is, as the name suggests, an actual attribute index, from 0 to GL_MAX_VERTEX_ATTRIBS​ - 1. size​, type​, and normalized​ all work as before.

relativeoffset​ is new. Vertex formats are associated with vertex buffer bindings from glBindVertexBuffer. So every vertex format that uses the same vertex buffer binding will use the same buffer object and the same offset. In order to allow interleaving (where different attributes need to offset themselves from the base offset), relativeoffset​ is used. It is effectively added to the buffer binding's offset to get the offset for this attribute.

Note that relativeoffset​ has much more strict limits than the buffer binding's offset​. The limit on relativeoffset​ is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset​ is a GLuint (32-bits), while offset​ is a GLintptr, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset​ is a much more limited quantity.

To associate a vertex attribute with a buffer binding, use this function:

void glVertexAttribBinding(GLuint attribindex​, GLuint bindingindex​);

The attribindex​ will use the buffer, offset, stride, and divisor, from bindingindex​.

Note that you still have to enable attribute arrays; this feature doesn't change that fact. It only changes the need to use glVertexAttribPointer.

This can be a bit confusing, but it makes a lot more sense than the glVertexAttribPointer method once you see it. The simplest way is to go back to the Vertex example from the interleaving section. We have this struct of vertex data:

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};
 
Vertex vertices[VERTEX_COUNT];

Using glVertexAttribPointer, we bound this data like this:

glBindBuffer(GL_ARRAY_BUFFER, buff);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Now, here is how we would do it using the new APIs:

glBindVertexBuffer(0, buff, baseOffset, sizeof(Vertex));

glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexAttribBinding(0, 0);
glEnableVertexAttribArray(1);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normal));
glVertexAttribBinding(1, 0);
glEnableVertexAttribArray(2);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, color));
glVertexAttribBinding(2, 0);

That's much clearer. The base offset to the beginning of the vertex data is very clear, as is the offset from this base to the start of each attribute. Better yet, if you want to use the same format but move the buffer around, it only takes one function call; namely glBindVertexBuffer with a buffer binding of 0.

Indeed, if lots of vertices use the same format, you can interleave them in the same way and only ever change the source buffer. This separation of buffer/stride/offset from vertex format can be a powerful optimization.
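
For example (the buffer names and vertex counts are hypothetical), switching between meshes that share the Vertex layout set up above only requires rebinding binding index 0:

glBindVertexBuffer(0, meshBufferA, 0, sizeof(Vertex)); // first mesh's storage
glDrawArrays(GL_TRIANGLES, 0, meshAVertexCount);

glBindVertexBuffer(0, meshBufferB, 0, sizeof(Vertex)); // same format, different storage
glDrawArrays(GL_TRIANGLES, 0, meshBVertexCount);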

Note again that all of the above state is still VAO state. It is all encapsulated in vertex array objects.

Because all of this modifies how vertex attribute state works, glVertexAttribPointer is redefined in terms of this new division. It is defined as follows:

void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type, {GLboolean normalized,} GLsizei stride, const GLvoid * pointer)
{
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
  glVertexAttribBinding(index, index);

  GLint buffer;
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);

  if(buffer == 0)
    glErrorOut(GL_INVALID_OPERATION); // Give an error.

  if(stride == 0)
    stride = CalcStride(size, type);

  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
  glBindVertexBuffer(index, buffer, offset, stride);
}

Where CalcStride does what it sounds like. Note that glVertexAttribPointer does use the same index for the attribute format and the buffer binding. So calling it will overwrite anything you may have set into these bindings.

Similarly, glVertexAttribDivisor is defined as:

void glVertexAttribDivisor(GLuint index, GLuint divisor)
{
  glVertexAttribBinding(index, index);
  glVertexBindingDivisor(index, divisor);
}

So again, calling it will overwrite your vertex attribute format binding.

Multibind and separation[edit]

Object Multi-bind
Core in version 4.6
Core since version 4.4
Core ARB extension ARB_multi_bind

The separation of attribute formats from the buffers that contain storage for them is a powerful mechanism. However, it is often useful as a developer to maintain the same format while quickly switching between multiple buffers to pull data from. To achieve this, the following function is available:

void glBindVertexBuffers(GLuint first, GLsizei count, const GLuint *buffers, const GLintptr *offsets, const GLsizei *strides);

This function is mostly equivalent to calling glBindVertexBuffer (note the lack of the "s" at the end) on all count​ elements of the buffers​, offsets​, and strides​ arrays. Each time, the buffer binding index is incremented, starting at first​.

The differences from such a loop are as follows: buffers​ can be NULL. If it is, then the function ignores offsets​ and strides​ entirely and simply binds 0 to every buffer binding index specified by first​ and count​.
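
A sketch of binding three (hypothetical) vertex buffers to binding indices 0 through 2 in one call, followed by unbinding them:

const GLuint   buffers[3] = { positionBuf, normalBuf, colorBuf }; // hypothetical buffer objects
const GLintptr offsets[3] = { 0, 0, 0 };
const GLsizei  strides[3] = { sizeof(GLfloat) * 3, sizeof(GLfloat) * 3, sizeof(GLubyte) * 4 };
glBindVertexBuffers(0, 3, buffers, offsets, strides);

// Passing NULL for buffers unbinds all three binding indices:
glBindVertexBuffers(0, 3, nullptr, nullptr, nullptr);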

Matrix attributes[edit]

Attributes in GLSL can be of matrix types. However, our attribute binding functions only bind up to a dimensionality of 4. OpenGL solves this problem by converting matrix GLSL attributes into multiple attribute indices.

If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. The number of attributes a matrix takes up depends on the number of columns of the matrix: a mat2 matrix will take 2, a mat2x4 matrix will take 2, while a mat4x2 will take 4. The size of each attribute is the number of rows of the matrix.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will naturally take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the first column, 4 is the second, and 5 is the last.

OpenGL allocates locations for matrix attributes contiguously, as described above. So if you query the location of a 3x3 matrix attribute, you will get back a single value, but the next two locations are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space. So a dmat3x3 will take up 6 attribute indices, two for each column.
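
For example, a common technique is to feed a per-instance mat4 transform through four consecutive attribute indices. Here is a sketch, assuming the matrix attribute was assigned location 4 and a hypothetical buffer of tightly packed column-major matrices:

// A mat4 attribute at location 4 occupies locations 4, 5, 6, and 7, one vec4 column each.
glBindBuffer(GL_ARRAY_BUFFER, matrixBuffer); // hypothetical buffer of per-instance matrices
for (int column = 0; column < 4; ++column)
{
    GLuint attrib = 4 + column;
    glEnableVertexAttribArray(attrib);
    glVertexAttribPointer(attrib, 4, GL_FLOAT, GL_FALSE,
                          sizeof(GLfloat) * 16,
                          reinterpret_cast<void*>(sizeof(GLfloat) * 4 * column));
    glVertexAttribDivisor(attrib, 1); // advance once per instance
}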

Non-array attribute values[edit]

A vertex shader can read an attribute that is not currently enabled (via glEnableVertexAttribArray). The value that it gets is defined by special context state, which is *not* part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.

Warning: Every time you issue a drawing command with an array enabled, the corresponding context attribute values become undefined. So if you want to, for example, use the non-array attribute index 3 after previously drawing with an array enabled on index 3, you need to reset it to a known value after each such drawing command.

The initial value for these is a floating-point (0.0, 0.0, 0.0, 1.0). Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

 void glVertexAttrib*(GLuint index​, Type values​);
 void glVertexAttribN*(GLuint index​, Type values​);
 void glVertexAttribP*(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);
 void glVertexAttribI*(GLuint index​, Type values​);
 void glVertexAttribL*(GLuint index​, Type values​);

The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in from (0, 0, 0, 1), just as with array attributes. And just as for attributes provided by arrays, if a double-precision input (GL 4.1 or ARB_vertex_attrib_64bit) has more components than were provided, the extra components have undefined values.

The N versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume that integer values are not normalized. The P versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.

To provide non-array integral values for integral attributes, use the I versions. For double-precision attributes (using the same rules for attribute index counts as double-precision arrays), use L.
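
For example, to render with a single constant color in attribute 2 (the index is an arbitrary choice for illustration) instead of a per-vertex color array:

glDisableVertexAttribArray(2);                // attribute 2 no longer reads from an array
glVertexAttrib4f(2, 1.0f, 0.5f, 0.25f, 1.0f); // context state, not VAO state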

Note that these non-array attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.

Drawing[edit]

Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.
