Vertex Specification

Revision as of 18:09, 23 September 2012 by Alfonse (Talk | contribs) (Matrix attributes)


Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.


Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

Vertex Stream

In order to render at all, you must be using a shader program. This program has a list of expected Vertex Attributes. This set of attributes determines what attribute values you must send in order to properly render with this shader.

For each attribute in the shader, you must provide a list of data for that attribute. All of these lists must have the same number of elements.

The order of vertices in the stream is very important; it determines how OpenGL will render your mesh. The order of the stream can either be the order of data in the arrays, or you can specify a list of indices. The indices control what order the vertices are received in, and indices can specify the same vertex more than once.

Let's say you have the following as your array of 3d position data:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }

If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}

If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }

The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.
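
Conceptually, indexed rendering is just an array lookup. A minimal sketch of that expansion (plain C++, not an OpenGL call; the function name is made up for this example):

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<float, 3>;

// Expand an attribute array through an index list to produce the vertex
// stream OpenGL actually processes. The same source vertex may appear in
// the output stream any number of times.
std::vector<Vec3> ExpandByIndex(const std::vector<Vec3>& attributes,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> stream;
    stream.reserve(indices.size());
    for (unsigned idx : indices)
        stream.push_back(attributes[idx]);
    return stream;
}
```

Feeding it the position array and index list above reproduces the six-vertex stream shown below.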

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }

The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }
Note: Oftentimes, authoring tools will export similar attribute arrays, but with different sizes. These tools give each attribute array its own separate index list, which makes each attribute list smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Each attribute array must be the same size, and each index corresponds to the same location in each attribute array. You must manually convert the format exported by your authoring tool into the format described above.
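
That conversion can be sketched as follows (plain C++, with made-up names; every unique combination of per-attribute indices becomes one output vertex):

```cpp
#include <array>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct SingleIndexMesh
{
    std::vector<std::array<float, 3>> positions;
    std::vector<std::array<float, 2>> texCoords;
    std::vector<unsigned> indices;  // one shared index list
};

// Collapse separately-indexed position/texcoord arrays into the
// single-index format OpenGL requires.
SingleIndexMesh CollapseIndices(const std::vector<std::array<float, 3>>& srcPos,
                                const std::vector<std::array<float, 2>>& srcTex,
                                const std::vector<unsigned>& posIdx,
                                const std::vector<unsigned>& texIdx)
{
    SingleIndexMesh out;
    std::map<std::pair<unsigned, unsigned>, unsigned> seen;
    for (std::size_t i = 0; i < posIdx.size(); ++i)
    {
        auto key = std::make_pair(posIdx[i], texIdx[i]);
        auto it = seen.find(key);
        if (it == seen.end())  // first time this combination appears
        {
            it = seen.emplace(key, static_cast<unsigned>(out.positions.size())).first;
            out.positions.push_back(srcPos[posIdx[i]]);
            out.texCoords.push_back(srcTex[texIdx[i]]);
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```

The output arrays are the same size as each other, and one index selects the same location in both, as OpenGL requires.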


The above stream is not enough to actually render anything; you must tell OpenGL how to interpret it. This means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret the same stream differently: 12 vertices can be 4 independent triangles (taking every 3 vertices as a triangle), a strip of 10 connected triangles (every group of 3 sequential vertices forms a triangle), and so on.

The main article on this subject has the details.

Now that we understand the theory, let's look at how it is implemented in OpenGL. Vertex data is provided to OpenGL as arrays. Thus, OpenGL needs two things: the arrays themselves and a description of how to interpret the bytes of those arrays.

Vertex Array Object

Vertex Array Object
Core in version 4.5
Core since version 3.0
Core ARB extension ARB_vertex_array_object

A Vertex Array Object (VAO) is an OpenGL Object that encapsulates all of the state needed to specify vertex data (with one minor exception noted below). It defines the format of the vertex data as well as the sources for the vertex arrays. Note that a VAO does not contain the arrays themselves; the arrays are stored in Buffer Objects (see below). The VAO simply references already existing buffer objects.

As OpenGL Objects, VAOs have the usual creation, destruction, and binding functions: glGenVertexArrays, glDeleteVertexArrays, and glBindVertexArray. The latter is different, in that there is no "target" parameter; there is only one target for VAOs, and glBindVertexArray binds to that target.

Note: VAOs cannot be shared between OpenGL contexts.

Vertex attributes are numbered from 0 to GL_MAX_VERTEX_ATTRIBS - 1. Each attribute array can be enabled for array access or disabled. When an attribute array is disabled, any attempts by the vertex shader to read from that attribute will produce a constant value (see below) instead of a value pulled from an array.

A newly-created VAO has all of the arrays disabled. Arrays are enabled by binding the VAO in question and calling:

void glEnableVertexAttribArray(GLuint index​);

There is a similar glDisableVertexAttribArray function to disable an enabled array.

Remember: all of the state below is part of the VAO's state (except where explicitly stated that it is not). Thus, all of the state below is captured by the VAO.

Vertex Buffer Object

A Vertex Buffer Object (VBO) is a Buffer Object which is used as the source for vertex array data. It is no different from any other buffer object, and a buffer object used for Transform Feedback or asynchronous pixel transfers can be used as source values for vertex arrays.

The format and source buffer for an attribute array can be set by doing the following. First, the buffer that the attribute comes from must be bound to GL_ARRAY_BUFFER.

Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.

Once the buffer is bound, call one of these functions:

 void glVertexAttribPointer( GLuint index, GLint size, GLenum type,
   GLboolean normalized, GLsizei stride, const void *offset );
 void glVertexAttribIPointer( GLuint index, GLint size, GLenum type,
   GLsizei stride, const void *offset );
 void glVertexAttribLPointer( GLuint index, GLint size, GLenum type,
   GLsizei stride, const void *offset );

All of these functions do more or less the same thing. The difference between them will be discussed later. Note that the last function is only available on GL 4.1 or if ARB_vertex_attrib_64bit is available.

These functions say that the attribute index index​ will get its attribute data from whatever buffer object is currently bound to GL_ARRAY_BUFFER. It is vital to understand that this association is made when this function is called. For example, let's say we do this:

glBindBuffer(GL_ARRAY_BUFFER, buf1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

The first line binds buf1​ to the GL_ARRAY_BUFFER binding. The second line says that attribute index 0 gets its vertex array data from buf1​, because that's the buffer that was bound to GL_ARRAY_BUFFER when the glVertexAttribPointer was called.

The third line binds the buffer object 0 to the GL_ARRAY_BUFFER binding. What does this do to the association between attribute 0 and buf1​?

Nothing! Changing the GL_ARRAY_BUFFER binding changes nothing about vertex attribute 0. Only calls to glVertexAttribPointer can do that.

Think of it like this. glBindBuffer sets a global variable, then glVertexAttribPointer reads that global variable and stores it in the VAO. Changing that global variable after it's been read doesn't affect the VAO. You can think of it that way because that's exactly how it works.

This is also why GL_ARRAY_BUFFER is not VAO state; the actual association between an attribute index and a buffer is made by glVertexAttribPointer.

Note that it is an error to call the glVertexAttribPointer functions if 0 is currently bound to GL_ARRAY_BUFFER.

Vertex format

The glVertexAttribPointer functions state where an attribute index gets its array data from. But they also define how OpenGL should interpret that data. This is conceptually broken down into two parts: the format and the buffer object information.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Attributes in the Vertex Shader can be declared as a floating-point GLSL type (such as float​ or vec4​), an integral type (such as uint​ or ivec3​), or a double-precision type (such as double​ or dvec4​). Double-precision attributes are only available in GL 4.1/ARB_vertex_attrib_64bit.

The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which glVertexAttribPointer function you use. For floating-point attributes, you must use glVertexAttribPointer. For integer (both signed and unsigned), you must use glVertexAttribIPointer. And for double-precision attributes, where available, you must use glVertexAttribLPointer.

Each attribute index represents a vector of some type, from 1 to 4 components in length. The size​ parameter of the glVertexAttribPointer functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4. Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the XYZW components.
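
The fill rule for missing components can be sketched like this (plain C++; the function name is made up for this example):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// When the shader declares more components than the attribute array
// supplies, the missing components come from the vector (0, 0, 0, 1).
std::array<float, 4> FillComponents(const std::vector<float>& supplied)
{
    std::array<float, 4> result = {0.0f, 0.0f, 0.0f, 1.0f};  // XYZW defaults
    for (std::size_t i = 0; i < supplied.size() && i < 4; ++i)
        result[i] = supplied[i];
    return result;
}
```

So a 2-component array feeding a vec4​ attribute yields (x, y, 0, 1).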

Component type

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different glVertexAttribPointer functions take different type​s. Here is a list of the types and their meanings for each function:


  • Floating-point types. normalized​ must be GL_FALSE
    • GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf​.
    • GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat​.
    • GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble​.
    • GL_FIXED​: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed​.
  • Integer types; these are converted to floats automatically and with zero performance cost. If normalized​ is GL_TRUE, then the value will be converted to a float via integer normalization (an unsigned byte value of 255 becomes 1.0f). If normalized​ is GL_FALSE, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f).
    • GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte​.
    • GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte​.
    • GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort​.
    • GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort​.
    • GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint​.
    • GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint​.
    • GL_INT_2_10_10_10_REV​: A series of four values packed in a 32-bit unsigned integer. The packed values themselves are signed, but not the overall bitfield. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. All values are signed, two's complement integers. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
    • GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).


For glVertexAttribIPointer, only the integer types (GL_BYTE​, GL_UNSIGNED_BYTE​, GL_SHORT​, GL_UNSIGNED_SHORT​, GL_INT​, GL_UNSIGNED_INT​) may be used; the values are passed to the vertex shader as integers, with no conversion to float. For glVertexAttribLPointer, the only legal type​ is GL_DOUBLE​.

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
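
The bit layout above can be expressed directly with shifts and masks. A sketch for the unsigned variant (plain C++; the function names are made up for this example):

```cpp
#include <cstdint>

// GL_UNSIGNED_INT_2_10_10_10_REV layout: X occupies the lowest 10 bits,
// then Y, then Z, with the 2-bit W field in bits 30-31.
std::uint32_t Pack2_10_10_10(std::uint32_t x, std::uint32_t y,
                             std::uint32_t z, std::uint32_t w)
{
    return (x & 0x3FFu) | ((y & 0x3FFu) << 10) | ((z & 0x3FFu) << 20)
         | ((w & 0x3u) << 30);
}

std::uint32_t UnpackX(std::uint32_t packed) { return packed & 0x3FFu; }
std::uint32_t UnpackW(std::uint32_t packed) { return (packed >> 30) & 0x3u; }
```

The signed variant uses the same field positions, but each field is interpreted as a two's complement value.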

D3D compatibility

When using glVertexAttribPointer, and only this function (not the other forms), the size​ field can be a number 1-4, but it can also be GL_BGRA.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with a certain Direct3D format. Because of that, it can only be used with GL_INT_2_10_10_10_REV​ and GL_UNSIGNED_INT_2_10_10_10_REV​. Also because of that, normalized​ must be GL_TRUE; you cannot pass non-normalized values.

Note: This mode should only be used if you have data that is formatted in this D3D style and you need to use it in your GL application. Don't bother otherwise.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |

Notice how X comes second and Z comes last.

Vertex buffer offset and stride

The vertex format information above tells OpenGL how to interpret the data. The format says how big each vertex is in bytes and how to convert it into the values that the attribute in the vertex shader receives.

But OpenGL needs two more pieces of information before it can find the data. It needs a byte offset from the start of the buffer object to the first element in the array. So your arrays don't always have to start at the front of the buffer object. It also needs a stride, which represents how many bytes it is from the start of one element to the start of another.

The offset​​ defines the buffer object offset. Note that it is a parameter of type const void *​ rather than an integer of some kind. This is in part why it's called glVertexAttribPointer, due to old legacy stuff where this was actually a client pointer.

So you will need to cast the integer offset into a pointer. In C, this is done with a simple cast: (void*)(byteOffset)​. In C++, this can be done as such: reinterpret_cast<void*>(byteOffset)​.

The stride​ is the number of bytes from the start of one vertex's data to the start of the next. If it is set to 0, OpenGL will assume that the vertex data is tightly packed and will compute the stride from the other parameters. So if you set the size​ to 3 and the type​ to GL_FLOAT, OpenGL will compute a stride of 12 (4 bytes per float, and 3 floats per attribute).
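
That computation is just a multiplication. A sketch (plain C++; the enum here is a stand-in for GL's type​ enumerators, with each value mapped to its byte size):

```cpp
#include <cstddef>

// Stand-in for GLenum component types; the enumerator value is the
// component's size in bytes.
enum class CompType { Byte = 1, Short = 2, Float = 4 };

// The stride OpenGL infers when the user passes 0: component count
// times component size, with no padding between vertices.
std::size_t TightStride(int size, CompType type)
{
    return static_cast<std::size_t>(size) * static_cast<std::size_t>(type);
}
```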

Interleaved attributes

The main purpose of the stride​ attribute is to allow interleaving between different attributes. This is conceptually the difference between these two C++ definitions:

struct StructOfArrays
{
  GLfloat positions[VERTEX_COUNT * 3];
  GLfloat normals[VERTEX_COUNT * 3];
  GLubyte colors[VERTEX_COUNT * 4];
};
StructOfArrays structOfArrays;

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};
Vertex vertices[VERTEX_COUNT];

structOfArrays​ is a struct that contains several arrays of elements. Each array is tightly packed, but independent of one another. vertices​ is a single array, where each element of the array is an independent vertex.

If we have a buffer object which has had vertices​ uploaded to it, such that baseOffset​ is the byte offset to the start of this data, we can use the stride​ parameter to allow OpenGL to access it:

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Note that each attribute uses the same stride: the size of the Vertex​ struct. C/C++ requires that the size of this struct be padded appropriately, such that adding that size in bytes to a pointer yields the next element of an array (this is exactly what pointer arithmetic does for you). Thus, the size of the Vertex​ structure is exactly the number of bytes from the start of one element to the start of the next, for each attribute.

The macro offsetof​ computes the byte offset of the given field within the given struct. This is added to baseOffset​, so that each attribute's pointer refers to the start of its own data within the first Vertex​.
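
These layout facts can be checked at compile time. A sketch, using plain typedefs in place of the GL header's types (an assumption for this example; the exact offsets below hold on typical compilers with 4-byte float alignment):

```cpp
#include <cstddef>
#include <cstdint>

typedef float GLfloat;        // stand-in for the GL header's typedef
typedef std::uint8_t GLubyte; // stand-in for the GL header's typedef

struct Vertex
{
    GLfloat position[3];
    GLfloat normal[3];
    GLubyte color[4];
};

// Each attribute's pointer is base + offsetof(Vertex, field); the shared
// stride is sizeof(Vertex), which includes any padding the compiler adds.
static_assert(offsetof(Vertex, normal) == 12, "normal follows 3 floats");
static_assert(offsetof(Vertex, color) == 24, "color follows 6 floats");
static_assert(sizeof(Vertex) % alignof(Vertex) == 0, "array-steppable");
```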

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with each other.

Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object, sometimes called an Index Buffer Object or Element Buffer Object.

This buffer object is associated with the GL_ELEMENT_ARRAY_BUFFER binding target. This buffer object binding point is different from GL_ARRAY_BUFFER; it is stored within the VAO. This binding point is part of the VAO's state, and if no VAO is bound, then you cannot bind a buffer object to this binding target.

When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all rendering commands of the form gl*Draw*Elements*​ will use the element buffer for indexed rendering. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.
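
Since smaller indices mean a smaller element buffer, a helper can pick the narrowest type that can address every vertex. A sketch (plain C++; the enum and function are made up for this example):

```cpp
#include <cstddef>

// Stand-ins for GL_UNSIGNED_BYTE / GL_UNSIGNED_SHORT / GL_UNSIGNED_INT,
// the only legal element types.
enum class IndexType { UnsignedByte, UnsignedShort, UnsignedInt };

IndexType SmallestIndexType(std::size_t vertexCount)
{
    if (vertexCount <= 256)   return IndexType::UnsignedByte;   // 2^8 values
    if (vertexCount <= 65536) return IndexType::UnsignedShort;  // 2^16 values
    return IndexType::UnsignedInt;
}
```

In practice, some hardware is reported to handle unsigned byte indices poorly, so unsigned short is often the practical minimum.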

Instanced arrays

Instanced arrays
Core in version 4.5
Core since version 3.3
ARB extension ARB_instanced_arrays

Normally, vertex attribute arrays are indexed based on the index buffer, or when doing array rendering, once per vertex from the start point to the end. However, when doing instanced rendering, it is often useful to have an alternative means of getting per-instance data than accessing it directly in the shader via a Uniform Buffer Object, a Buffer Texture, or some other means.

It is possible to have one or more attribute arrays indexed, not by the index buffer or direct array access, but by the instance count. This is done via this function:

void glVertexAttribDivisor(GLuint index​, GLuint divisor​);

The index​ is the attribute index to set. If divisor​ is zero, then the attribute acts like normal, being indexed by the array or index buffer. If divisor​ is non-zero, then the current instance (as if from gl_InstanceID​) is divided by this divisor, and the result of that is used to access the attribute array.
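
The effect of a non-zero divisor on array access can be sketched as (plain C++; the function name is made up for this example):

```cpp
// With a non-zero divisor, the attribute array is indexed by the current
// instance divided by the divisor, so each array element is shared by
// `divisor` consecutive instances.
unsigned DivisorArrayIndex(unsigned instanceID, unsigned divisor)
{
    // Precondition: divisor != 0. A zero divisor means normal per-vertex
    // indexing, which this helper does not model.
    return instanceID / divisor;
}
```

With a divisor of 1, each instance gets its own array element; with a divisor of 2, each element serves two consecutive instances, and so on.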

This is generally considered the most efficient way of getting per-instance data to the vertex shader. However, it is also the most resource-constrained method. Virtually every OpenGL implementation only offers 16 4-vector attributes, some of which will be the actual per-vertex data. So that leaves less for your per-instance data. While the number of instances can be arbitrarily large (unlike UBO arrays), the amount of per-instance data is much smaller.

However, that should be plenty for a quaternion orientation and a position, for a simple transformation. That would even leave one float (the position only needs to be 3D) to provide a fragment shader an index to access an Array Texture.

Separate attribute format

Separate attribute format
Core in version 4.5
Core since version 4.3
Core ARB extension ARB_vertex_attrib_binding

glVertexAttribPointer and its variations are nice, but they unify two separate concepts into one function (from a hardware perspective): the vertex format for an attribute array, and the source data for that array. These concepts can be separated, allowing the user to separately specify the format of a vertex attribute from the source buffer. This also makes it easy to change the buffer binding for multiple attributes, since different attributes can pull from the same buffer location.

This separation is achieved by splitting the state into two pieces: a number of vertex buffer binding points, and a number of vertex format records.

The buffer binding points aggregate the following data:

  • The source buffer object.
  • The base offset for all vertex attributes that pull data from this binding point.
  • The stride for all vertex attributes that pull data from this binding point.
  • The instance divisor, which is used for all vertex attributes that pull data from this binding point.

The vertex format consists of:

  • The size, type and normalization of the vertex attribute data.
  • The buffer binding point it is associated with.
  • A byte offset from the base offset of its associated buffer binding point to where its vertex data starts.

The functions that set the buffer binding point data are:

void glBindVertexBuffer(GLuint bindingindex​, GLuint buffer​, GLintptr offset​, GLsizei stride​);
void glVertexBindingDivisor(GLuint bindingindex​, GLuint divisor​);

glBindVertexBuffer is kind of like glBindBufferRange, but it is specifically intended for vertex buffer objects. The bindingindex​ is, as the name suggests, not a vertex attribute. It is a binding index, which can range from 0 to GL_MAX_VERTEX_ATTRIB_BINDINGS - 1. This will almost certainly be 16.

buffer​ is the buffer object that is being bound to this binding index. Note that there is no need to bind the buffer to GL_ARRAY_BUFFER; the function takes the buffer object directly. offset​ is a byte offset from the beginning of the buffer to where all of the associated attached buffer object data begins. stride​ is the byte offset from one vertex to the next.

Notice that the stride is uncoupled from the vertex format itself here. Also, stride​ can no longer be 0, since OpenGL doesn't know the format of the data yet, so it can't automatically compute it.

glVertexBindingDivisor is much like glVertexAttribDivisor, except applied to a binding index instead of an attribute index. All vertex attributes associated with this binding index will use the same divisor.

The functions that affect vertex attribute formats are:

void glVertexAttribFormat(GLuint attribindex​, GLint size​, GLenum type​, GLboolean normalized​, GLuint relativeoffset​);
void glVertexAttribIFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);
void glVertexAttribLFormat(GLuint attribindex​, GLint size​, GLenum type​, GLuint relativeoffset​);

The glVertexAttribFormat functions work similarly to their glVertexAttribPointer counterparts. attribindex​ is, as the name suggests, an actual attribute index, from 0 to GL_MAX_VERTEX_ATTRIBS​ - 1. size​, type​, and normalized​ all work as before.

relativeoffset​ is new. Vertex formats are associated with vertex buffer bindings from glBindVertexBuffer. So every vertex format that uses the same vertex buffer binding will use the same buffer object and the same offset. In order to allow interleaving (where different attributes need to offset themselves from the base offset), relativeoffset​ is used. It is effectively added to the buffer binding's offset to get the offset for this attribute.

Note that relativeoffset​ has much more strict limits than the buffer binding's offset​. The limit on relativeoffset​ is queried through GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET, and is only guaranteed to be at least 2047 bytes. Also, note that relativeoffset​ is a GLuint​ (32-bits), while offset​ is a GLintptr​, which is the size of the pointer (so 64-bits in a 64-bit build). So obviously the relativeoffset​ is a much more limited quantity.

To associate a vertex attribute with a buffer binding, use this function:

void glVertexAttribBinding(GLuint attribindex​, GLuint bindingindex​);

The attribindex​ will use the buffer, offset, stride, and divisor, from bindingindex​.

Note that you still have to enable attribute arrays; this feature doesn't change that fact. It only changes the need to use glVertexAttribPointer.

This can be a bit confusing, but it makes a lot more sense than the glVertexAttribPointer method once you see it. The simplest way is to go back to the vertices​ example from the interleaving section. We have this struct of vertex data:

struct Vertex
{
  GLfloat position[3];
  GLfloat normal[3];
  GLubyte color[4];
};
Vertex vertices[VERTEX_COUNT];

Using glVertexAttribPointer, we bound this data like this:

glBindBuffer(GL_ARRAY_BUFFER, buff);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Now, here is how we would do it using the new APIs:

glBindVertexBuffer(0, buff, baseOffset, sizeof(Vertex));
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, position));
glVertexAttribBinding(0, 0);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, normal));
glVertexAttribBinding(1, 0);
glVertexAttribFormat(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, offsetof(Vertex, color));
glVertexAttribBinding(2, 0);

That's much clearer. The base offset to the beginning of the vertex data is very clear, as is the offset from this base to the start of each attribute. Better yet, if you want to use the same format but move the buffer around, it only takes one function call; namely glBindVertexBuffer with a buffer binding of 0.

Indeed, if lots of vertices use the same format, you can interleave them in the same way and only ever change the source buffer. This separation of buffer/stride/offset from vertex format can be a powerful optimization.

Note again that all of the above state is still VAO state. It is all encapsulated in vertex array objects.

Because all of this modifies how vertex attribute state works, glVertexAttribPointer is redefined in terms of this new division. It is defined as follows:

void glVertexAttrib*Pointer(GLuint index, GLint size, GLenum type, {GLboolean normalized,} GLsizei stride, const GLvoid *pointer)
{
  glVertexAttrib*Format(index, size, type, {normalized,} 0);
  glVertexAttribBinding(index, index);
  GLuint buffer;
  glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer);
  if (buffer == 0)
    glErrorOut(GL_INVALID_OPERATION); //Give an error.
  if (stride == 0)
    stride = CalcStride(size, type);
  GLintptr offset = reinterpret_cast<GLintptr>(pointer);
  glBindVertexBuffer(index, buffer, offset, stride);
}

Where CalcStride​ does what it sounds like. Note that glVertexAttribPointer does use the same index for the attribute format and the buffer binding. So calling it will overwrite anything you may have set into these bindings.

Similarly, glVertexAttribDivisor is defined as:

void glVertexAttribDivisor(GLuint index, GLuint divisor)
{
  glVertexAttribBinding(index, index);
  glVertexBindingDivisor(index, divisor);
}

So again, calling it will overwrite your vertex attribute format binding.

Note that while ARB_vertex_attrib_binding is still a new extension at the time of writing, it does not require new hardware. So it should become widely implemented on existing OpenGL-capable hardware as driver developers get around to it.

Matrix attributes

Attributes in GLSL can be of matrix types. However, our attribute binding functions only bind up to a dimensionality of 4. OpenGL solves this problem by converting matrix GLSL attributes into multiple attribute indices.

If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. The number of attributes a matrix takes up depends on the number of columns of the matrix: a mat2​ matrix will take 2, a mat2x4​ matrix will take 2, while a mat4x2​ will take 4. The size of each attribute is the number of rows of the matrix.

Each bound attribute in the VAO therefore fills in a single column, starting with the left-most and progressing right. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will naturally take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the first column, 4 is the second, and 5 is the last.

OpenGL will allocate locations for matrix attributes contiguously as above. So if you query the location of a 3x3 matrix attribute, you will get back a single value, but the next two attribute indices after it are also valid, active attributes.

Double-precision matrices (where available) will take up twice as many attribute indices. So a dmat3x3​ will take up 6 attribute indices: the first two for the first column, the next two for the second column, and so forth.
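
The index consumption rule can be sketched as (plain C++; the function is made up for this example, and it follows the simplified doubling rule described above):

```cpp
// A matrix attribute consumes one attribute index per column; with
// double-precision types, each column takes twice as many indices.
unsigned AttribIndicesForMatrix(unsigned columns, bool doublePrecision)
{
    return columns * (doublePrecision ? 2u : 1u);
}
```

So a mat3​ assigned to index 3 occupies indices 3-5, while a dmat3​ would occupy 3-8.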

Non-array attribute values

A vertex shader can read an attribute that is not currently enabled (via glEnableVertexAttribArray). The value that it gets is defined by special context state, which is *not* part of the VAO.

Because the attribute is defined by context state, it is constant over the course of a single draw call. Each attribute index has a separate value.

The initial value for these is a floating-point (0.0, 0.0, 0.0, 1.0)​. Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

 void glVertexAttrib*(GLuint index​, Type values​);
 void glVertexAttribN*(GLuint index​, Type values​);
 void glVertexAttribP*(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);
 void glVertexAttribI*(GLuint index​, Type values​);
 void glVertexAttribL*(GLuint index​, Type values​);

The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type​ is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in from (0, 0, 0, 1), just as with array attributes.

The N​ versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P​ versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.

To provide non-array integral values for integral attributes, use the I​ versions. For double-precision attributes (using the same rules for attribute sizes as double-precision arrays), use L​.

Note that the fixed attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.


Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL provides innumerable different options for rendering vertex data.

See Also