# Vertex Specification

Vertex Specification is the process of setting up the necessary objects for rendering with a particular shader program, as well as the process of using those objects to render.

## Theory

Submitting vertex data for rendering requires creating a stream of vertices, and then telling OpenGL how to interpret that stream.

### Vertex Stream

In order to render at all, you must be using a shader program. This program has a list of expected Vertex Attributes. This set of attributes determines what attribute values you must send in order to properly render with this shader.

For each attribute in the shader, you must provide a list of data for that attribute. All of these lists must have the same number of elements.

The order of vertices in the stream is very important; it determines how OpenGL will render your mesh. The order of the stream can either be the order of data in the arrays, or you can specify a list of indices. The indices control what order the vertices are received in, and indices can specify the same vertex more than once.

Let's say you have the following as your array of 3d position data:

 { {1, 1, 1}, {0, 0, 0}, {0, 0, 1} }


If you simply use this as a stream as is, OpenGL will receive and process these three vertices in order (left-to-right). However, you can also specify a list of indices that will select which vertices to use and in which order.

Let's say we have the following index list:

 {2, 1, 0, 2, 1, 2}


If we render with the above attribute array, but selected by the index list, OpenGL will receive the following stream of vertex attribute data:

 { {0, 0, 1}, {0, 0, 0}, {1, 1, 1}, {0, 0, 1}, {0, 0, 0}, {0, 0, 1} }


The index list is a way of reordering the vertex attribute array data without having to actually change it. This is mostly useful as a means of data compression; in most tight meshes, vertices are used multiple times. Being able to store the vertex attributes for that vertex only once is very economical, as a vertex's attribute data is generally around 32 bytes, while indices are usually 2-4 bytes in size.
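The effect of the index list can be sketched in C++ (the types and function name here are illustrative, not part of the OpenGL API):

```cpp
#include <vector>

// One 3D position per vertex, as in the example above.
struct Vec3 { float x, y, z; };

// Expand an attribute array through an index list, producing the
// stream of vertices that OpenGL would actually process.
std::vector<Vec3> expandByIndex(const std::vector<Vec3>& attributes,
                                const std::vector<unsigned>& indices)
{
    std::vector<Vec3> stream;
    stream.reserve(indices.size());
    for (unsigned ix : indices)
        stream.push_back(attributes[ix]); // the same vertex may be selected repeatedly
    return stream;
}
```

With the position array and index list above, this produces exactly the six-vertex stream shown.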

A vertex stream can of course have multiple attributes. You can take the above position array and augment it with, for example, a texture coordinate array:

 { {0, 0}, {0.5, 0}, {0, 1} }


The vertex stream you get will be as follows:

 { [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{1, 1, 1}, {0, 0}], [{0, 0, 1}, {0, 1}], [{0, 0, 0}, {0.5, 0}], [{0, 0, 1}, {0, 1}] }

Note: Oftentimes, authoring tools will have similar attribute arrays, but the sizes will be different. These tools give each attribute array a separate index list; this makes each attribute list smaller. OpenGL (and Direct3D, if you're wondering) does not allow this. Each attribute array must be the same size, and each index corresponds to the same location in each attribute array. You must manually convert the format exported by your authoring tool into the format described above.
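That conversion can be sketched as follows — a hypothetical helper that merges per-attribute index lists into the single-index format OpenGL requires, emitting one combined vertex per unique index pair:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Convert the "one index list per attribute" format many authoring tools
// export into a single-index format: each unique (position index,
// texcoord index) pair becomes one output vertex.
void mergeIndexLists(const std::vector<Vec3>& positions,
                     const std::vector<Vec2>& texcoords,
                     const std::vector<unsigned>& posIdx,
                     const std::vector<unsigned>& uvIdx,
                     std::vector<Vec3>& outPos,
                     std::vector<Vec2>& outUV,
                     std::vector<unsigned>& outIdx)
{
    std::map<std::pair<unsigned, unsigned>, unsigned> cache;
    for (std::size_t i = 0; i < posIdx.size(); ++i) {
        auto key = std::make_pair(posIdx[i], uvIdx[i]);
        auto it = cache.find(key);
        if (it == cache.end()) {  // first time this combination appears:
            it = cache.emplace(key, (unsigned)outPos.size()).first;
            outPos.push_back(positions[posIdx[i]]); // emit a new combined vertex
            outUV.push_back(texcoords[uvIdx[i]]);
        }
        outIdx.push_back(it->second); // reuse the combined vertex's index
    }
}
```

The resulting attribute arrays all have the same length, and one index list addresses them all.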

### Primitives

The above stream is not enough to actually render anything; you must tell OpenGL how to interpret the stream in order to get proper rendering. This means telling OpenGL what kind of primitive to interpret the stream as.

There are many ways for OpenGL to interpret a stream of, say, 12 vertices. It can interpret the vertices as a sequence of triangles, points, or lines. It can even interpret the same stream in different ways: 12 vertices could be 4 independent triangles (every 3 vertices form a triangle), or a strip of 10 connected triangles (every group of 3 sequential vertices forms a triangle), and so on.

The main article on this subject has the details.

Now that we understand the theory, let's look at how it is implemented in OpenGL. Vertex data is provided to OpenGL as arrays. Thus, OpenGL needs two things: the arrays themselves and a description of how to interpret the bytes of those arrays.

## Vertex Array Object

Core since OpenGL 3.0 (core extension: ARB_vertex_array_object)

A Vertex Array Object (VAO) is an OpenGL Object that encapsulates all of the state needed to specify vertex data (with one minor exception noted below). They define the format of the vertex data as well as the sources for the vertex arrays. Note that VAOs do not contain the arrays themselves; the arrays are contained in buffer objects (see below). The VAOs simply reference already existing buffer objects.

As OpenGL Objects, VAOs have the usual creation, destruction, and binding functions: glGenVertexArrays, glDeleteVertexArrays, and glBindVertexArray. The latter is different from other object binding functions, in that there is no "target" parameter; there is only one target for VAOs, and glBindVertexArray binds to that target.

Note: VAOs cannot be shared between OpenGL contexts.

Vertex attributes are numbered from 0 to GL_MAX_VERTEX_ATTRIBS - 1. Each attribute array can be enabled for array access or disabled. When an attribute array is disabled, any attempts by the vertex shader to read from that attribute will produce a constant value (see below) instead of a value pulled from an array.

A newly-created VAO has all of the arrays disabled. Arrays are enabled by binding the VAO in question and calling:

void glEnableVertexAttribArray​(GLuint index​);


There is a similar function, glDisableVertexAttribArray, to disable an enabled array.

Remember: all of the state below is part of the VAO's state (except where explicitly stated that it is not). Thus, all of the state below is captured by the VAO.

## Vertex Buffer Object

A Vertex Buffer Object (VBO) is a Buffer Object which is used as the source for vertex array data. It is no different from any other buffer object, and a buffer object used for Transform Feedback or asynchronous pixel transfers can be used as source values for vertex arrays.

The format and source buffer for an attribute array can be set by doing the following. First, the buffer that the attribute comes from must be bound to GL_ARRAY_BUFFER.

Note: The GL_ARRAY_BUFFER binding is NOT part of the VAO's state! I know that's confusing, but that's the way it is.

Once the buffer is bound, call one of these functions:

 void glVertexAttribPointer(GLuint index, GLint size, GLenum type,
   GLboolean normalized, GLsizei stride, const void *offset);
 void glVertexAttribIPointer(GLuint index, GLint size, GLenum type,
   GLsizei stride, const void *offset);
 void glVertexAttribLPointer(GLuint index, GLint size, GLenum type,
   GLsizei stride, const void *offset);


All of these functions do more or less the same thing. The difference between them will be discussed later. Note that the last function is only available in GL 4.1 or with ARB_vertex_attrib_64bit.

These functions say that the attribute index index​ will get its attribute data from whatever buffer object is currently bound to GL_ARRAY_BUFFER. It is vital to understand that this association is made when this function is called. For example, let's say we do this:

glBindBuffer(GL_ARRAY_BUFFER, buf1);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

The first line binds buf1​ to the GL_ARRAY_BUFFER binding. The second line says that attribute index 0 gets its vertex array data from buf1​, because that's the buffer that was bound to GL_ARRAY_BUFFER when glVertexAttribPointer was called.

The third line binds the buffer object 0 to the GL_ARRAY_BUFFER binding. What does this do to the association between attribute 0 and buf1​?

Nothing! Changing the GL_ARRAY_BUFFER binding changes nothing about vertex attribute 0. Only calls to glVertexAttribPointer can do that.

Think of it like this. glBindBuffer sets a global variable; glVertexAttribPointer then reads that global variable and stores it in the VAO. Changing that global variable after it's been read doesn't affect the VAO. You can think of it that way because that's exactly how it works.

This is also why GL_ARRAY_BUFFER is not VAO state; the actual association between an attribute index and a buffer is made by glVertexAttribPointer.

Note that it is an error to call these functions if 0 is currently bound to GL_ARRAY_BUFFER.

## Vertex format

The glVertexAttribPointer functions state where an attribute index gets its array data from. But they also define how OpenGL should interpret that data. This is conceptually broken down into two parts: the format of the data and the buffer object information.

The format parameters describe how to interpret a single vertex of information from the array. Vertex Attributes in the Vertex Shader can be declared as a floating-point GLSL type (such as float​ or vec4​), an integral type (such as uint​ or ivec3​), or a double-precision type (such as double​ or dvec4​). Double-precision attributes are only available in GL 4.1/ARB_vertex_attrib_64bit.

The general type of attribute used in the vertex shader must match the general type provided by the attribute array. This is governed by which function you use. For floating-point attributes, you must use glVertexAttribPointer. For integer attributes (both signed and unsigned), you must use glVertexAttribIPointer. And for double-precision attributes, where available, you must use glVertexAttribLPointer.

Each attribute index represents a vector of some type, from 1 to 4 components in length. The size​ parameter of the functions defines the number of components in the vector provided by the attribute array. It can be any number 1-4. Note that size​ does not have to exactly match the size used by the vertex shader. If the vertex shader has fewer components than the attribute provides, then the extras are ignored. If the vertex shader has more components than the array provides, the extras are given values from the vector (0, 0, 0, 1) for the XYZW components.
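The fill rule for missing components can be sketched as a small helper (illustrative, not a GL function):

```cpp
#include <array>

// How OpenGL fills in a shader-side vec4 when the attribute array
// supplies fewer than 4 components: missing X/Y/Z default to 0,
// the missing W defaults to 1.
std::array<float, 4> expandToVec4(const float* data, int size)
{
    std::array<float, 4> v = {0.0f, 0.0f, 0.0f, 1.0f}; // the (0, 0, 0, 1) defaults
    for (int i = 0; i < size; ++i)
        v[i] = data[i]; // components the array does provide win
    return v;
}
```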

### Component type

The type of the vector component in the buffer object is given by the type​ and normalized​ parameters, where applicable. This type will be converted into the actual type used by the vertex shader. The different functions take different type​s. Here is a list of the types and their meanings for each function:

• Floating-point types. normalized​ must be GL_FALSE
• GL_HALF_FLOAT​: A 16-bit half-precision floating-point value. Equivalent to GLhalf​.
• GL_FLOAT​: A 32-bit single-precision floating-point value. Equivalent to GLfloat​.
• GL_DOUBLE​: A 64-bit double-precision floating-point value. Never use this. It's technically legal, but almost certainly a performance trap. Equivalent to GLdouble​.
• GL_FIXED​: A 16.16-bit fixed-point two's complement value. Equivalent to GLfixed​.
• Integer types; these are converted to floats automatically and with zero performance cost. If normalized​ is GL_TRUE, then the value will be converted to a float as a signed or unsigned normalized value. Otherwise, it will be converted directly to a float as if by C-style casting (255 becomes 255.0f).
• GL_BYTE​: A signed 8-bit two's complement value. Equivalent to GLbyte​.
• GL_UNSIGNED_BYTE​: An unsigned 8-bit value. Equivalent to GLubyte​.
• GL_SHORT​: A signed 16-bit two's complement value. Equivalent to GLshort​.
• GL_UNSIGNED_SHORT​: An unsigned 16-bit value. Equivalent to GLushort​.
• GL_INT​: A signed 32-bit two's complement value. Equivalent to GLint​.
• GL_UNSIGNED_INT​: An unsigned 32-bit value. Equivalent to GLuint​.
• GL_INT_2_10_10_10_REV​: A series of four values packed in a 32-bit unsigned integer. The packed values themselves are signed, but not the overall bitfield. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. All values are signed, two's complement integers. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
• GL_UNSIGNED_INT_2_10_10_10_REV: A series of four values packed in a 32-bit unsigned integer. The packed values are unsigned. The bitdepth for the packed fields are 2, 10, 10, and 10, but in reverse order. So the lowest-significant 10-bits are the first component, the next 10 bits are the second component, and so on. If you use this, the size​ must be 4 (or GL_BGRA, as shown below).
For glVertexAttribIPointer, only the integer types are allowed: GL_BYTE​, GL_UNSIGNED_BYTE​, GL_SHORT​, GL_UNSIGNED_SHORT​, GL_INT​, and GL_UNSIGNED_INT​. The values are passed to the shader as integers; no conversion to float takes place. For glVertexAttribLPointer, the only allowed type is GL_DOUBLE​.

Here is a visual demonstration of the ordering of the 2_10_10_10_REV types:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              Z              |              Y              |               X            |
-----------------------------------------------------------------------------------------------
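A sketch of unpacking this layout for the unsigned variant (the function and struct names are hypothetical; OpenGL does this unpacking for you):

```cpp
#include <cstdint>

// The four components of one GL_UNSIGNED_INT_2_10_10_10_REV value.
struct Packed { uint32_t x, y, z, w; };

// Unpack per the bit layout shown above: X in bits 0-9, Y in bits
// 10-19, Z in bits 20-29, W in the top 2 bits (30-31).
Packed unpack_2_10_10_10_rev(uint32_t v)
{
    Packed p;
    p.x = v & 0x3FFu;         // lowest 10 bits
    p.y = (v >> 10) & 0x3FFu;
    p.z = (v >> 20) & 0x3FFu;
    p.w = (v >> 30) & 0x3u;   // top 2 bits
    return p;
}
```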


### D3D compatibility

When using glVertexAttribPointer, and only this function (not the other forms), the size​ field can be a number 1-4, but it can also be GL_BGRA.

This is somewhat equivalent to a size of 4, in that 4 components are transferred. However, as the name suggests, this "size" reverses the order of the first 3 components.

This special mode is intended specifically for compatibility with a certain Direct3D format. Because of that, it can only be used with GL_INT_2_10_10_10_REV​ and GL_UNSIGNED_INT_2_10_10_10_REV​. Also because of that, normalized​ must be GL_TRUE as well; you cannot pass non-normalized values.

Note: This mode should only be used if you have data that is formatted in this D3D style and you need to use it in your GL application. Don't bother otherwise.

Here is a visual description:

31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0
|  W |              X              |              Y              |               Z            |
-----------------------------------------------------------------------------------------------


Notice how X now comes second and Z comes last.

## Vertex buffer offset and stride

The vertex format information above tells OpenGL how to interpret the data. The format says how big each vertex is in bytes and how to convert it into the values that the attribute in the vertex shader receives.

But OpenGL needs two more pieces of information before it can find the data. It needs a byte offset from the start of the buffer object to the first element in the array. So your arrays don't always have to start at the front of the buffer object. It also needs a stride, which represents how many bytes it is from the start of one element to the start of another.

The offset​​ defines the buffer object offset. Note that it is a parameter of type const void *​ rather than an integer of some kind. This is in part why it's called glVertexAttribPointer, due to old legacy stuff where this was actually a client pointer.

So you will need to cast the integer offset into a pointer. In C, this is done with a simple cast: (void*)(byteOffset)​. In C++, this can be done as such: reinterpret_cast<void*>(byteOffset)​.

The stride​ specifies how many bytes lie between the start of one vertex's data and the start of the next. If it is set to 0, OpenGL assumes that the vertex data is tightly packed and computes the stride from the other parameters. So if you set the size​ to 3 and the type​ to GL_FLOAT, OpenGL will compute a stride of 12 (4 bytes per float, and 3 floats per attribute).
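The tight-packing computation can be sketched like this (the enum and functions are local stand-ins so the sketch is self-contained; real code would use the GLenum values from the GL headers):

```cpp
#include <cstddef>

// Local stand-ins for the GL component-type enums.
enum AttribType { TYPE_FLOAT, TYPE_UNSIGNED_BYTE, TYPE_SHORT };

// Size in bytes of one component of the given type.
std::size_t componentSize(AttribType t)
{
    switch (t) {
        case TYPE_FLOAT:         return 4;
        case TYPE_UNSIGNED_BYTE: return 1;
        case TYPE_SHORT:         return 2;
    }
    return 0;
}

// The stride OpenGL infers when you pass stride = 0:
// the vertex data is assumed to be tightly packed.
std::size_t tightStride(AttribType type, int size)
{
    return componentSize(type) * size;
}
```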

### Interleaved attributes

The main purpose of the stride​ attribute is to allow interleaving between different attributes. This is conceptually the difference between these two C++ definitions:

struct StructOfArrays
{
GLfloat positions[VERTEX_COUNT * 3];
GLfloat normals[VERTEX_COUNT * 3];
GLubyte colors[VERTEX_COUNT * 4];
};

StructOfArrays structOfArrays;

struct Vertex
{
GLfloat position[3];
GLfloat normal[3];
GLubyte color[4];
};

Vertex vertices[VERTEX_COUNT];

structOfArrays​ is a struct that contains several arrays of elements. Each array is tightly packed, but independent of one another. vertices​ is a single array, where each element of the array is an independent vertex.

If we have a buffer object which has had vertices​ uploaded to it, such that baseOffset​ is the byte offset to the start of this data, we can use the stride​ parameter to allow OpenGL to access it:

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, position)));
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, normal)));
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), reinterpret_cast<void*>(baseOffset + offsetof(Vertex, color)));

Note that each attribute uses the same stride: the size of the Vertex​ struct. C/C++ requires that the size of this struct be padded appropriately, such that you can get the next element in an array by adding that size in bytes to a pointer (ignoring pointer arithmetic, which will do all of this for you). Thus, the size of the Vertex​ structure is exactly the number of bytes from the start of one element to the next, for each attribute.

The macro offsetof​ computes the byte offset of the given field in the given struct. This is added to baseOffset​, so that each attribute's offset points to the start of that field's data within the first Vertex​ in the buffer.
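Assuming 4-byte floats and typical struct padding, the layout of the struct above works out as follows (GL typedefs are stood in locally so the sketch compiles without GL headers):

```cpp
#include <cstddef>

typedef float GLfloat;         // stand-ins for the GL typedefs
typedef unsigned char GLubyte;

// The interleaved layout from the example above: sizeof(Vertex) is the
// shared stride, offsetof gives each attribute's byte offset.
struct Vertex
{
    GLfloat position[3]; // offset 0
    GLfloat normal[3];   // offset 12
    GLubyte color[4];    // offset 24
};
// On a typical platform the common stride, sizeof(Vertex), is 28 bytes.
```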

As a general rule, you should use interleaved attributes wherever possible. Obviously if you need to change certain attributes and not others, then interleaving the ones that change with those that don't is not a good idea. But you should interleave the constant attributes with each other, and the changing attributes with each other.

## Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object, sometimes called an Index Buffer Object or Element Buffer Object.

This buffer object is associated with the GL_ELEMENT_ARRAY_BUFFER binding target. This buffer object binding point is different from GL_ARRAY_BUFFER; it is stored within the VAO. This binding point is part of the VAO's state, and if no VAO is bound, then you cannot bind a buffer object to this binding target.

When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all rendering commands of the form gl*Draw*Elements*​ will use the element buffer for indexed rendering. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.

## Instanced arrays

Core since OpenGL 3.3 (core extension: ARB_instanced_arrays)

Normally, vertex attribute arrays are stepped once per vertex: vertex N reads element N from every enabled attribute array. Instanced arrays change this. Calling glVertexAttribDivisor(index, divisor) with a non-zero divisor​ makes that attribute advance once per divisor​ instances instead of once per vertex, so every vertex within a given instance reads the same element. This is commonly used to provide per-instance data, such as a transform, without using uniforms.
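The stepping rule set by glVertexAttribDivisor can be sketched as a small helper (illustrative, not GL code):

```cpp
// Which element of an instanced attribute array a given instance reads:
// the array advances once every `divisor` instances. A divisor of 0
// means ordinary per-vertex stepping, so it must be non-zero here.
unsigned instancedElement(unsigned instance, unsigned divisor)
{
    return instance / divisor;
}
```

So with a divisor of 2, instances 0 and 1 both read element 0, instances 2 and 3 read element 1, and so on.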

## Separate attribute format

Core since OpenGL 4.3 (core extension: ARB_vertex_attrib_binding)

glVertexAttribPointer and its variations are nice, but they unify two separate concepts into one function (from a hardware perspective): the vertex format for an attribute array, and the source data for that array. These concepts can be separated, allowing the user to specify the format of a vertex attribute separately from the source buffer. This also makes it easy to change the buffer binding for multiple attributes, since different attributes can pull from the same buffer location.

This separation is achieved by three functions: glVertexAttribFormat, which specifies only the format of an attribute; glBindVertexBuffer, which binds a buffer object (with an offset and stride) to a buffer binding index; and glVertexAttribBinding, which associates an attribute index with a buffer binding index.

## Matrix attributes

Attributes in GLSL can be of matrix types. However, our attribute binding functions only bind up to a dimensionality of 4. OpenGL solves this problem by converting matrix GLSL attributes into multiple attribute indices.

If you directly assign an attribute index to a matrix type, it implicitly takes up more than one attribute index. How many it takes depends on the number of columns in the matrix: a mat2x2​ matrix will take 2, while a mat4x2​ matrix will take 4. The size of each attribute is the number of rows in the matrix.

Each bound attribute in the VAO therefore fills in a single column of the matrix. Thus, if you have a 3x3 matrix, and you assign it to attribute index 3, it will take attribute indices 3, 4, and 5. Each of these indices will be 3 elements in size. Attribute 3 is the first column, 4 is the second, and 5 is the third.

If you let GLSL assign attribute locations automatically, and query them with glGetAttribLocation, then OpenGL will allocate locations for matrix attributes contiguously as above. So if you query the location of a 3x3 matrix, it will return one value, but the next two locations are also valid, active attributes.

Double-precision matrices (where available) will take up twice as much space. So a dmat3x3​ will take up 6 attribute indices: the first two for the first column, the second two for the second column, and so forth.
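Counting the locations a matrix input consumes can be sketched as a hypothetical helper, assuming the GLSL rules: one location per column, with dvec3/dvec4 columns taking two locations each:

```cpp
// Number of attribute locations a GLSL matrix vertex input consumes.
// columns/rows describe the matrix (matCxR has C columns of R rows);
// double-precision columns with more than 2 components take 2 locations.
int matrixLocations(int columns, int rows, bool isDouble)
{
    int perColumn = (isDouble && rows > 2) ? 2 : 1;
    return columns * perColumn;
}
```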

## Non-array attribute values

A vertex shader can read an attribute which was not provided by a VAO's attribute array. The value that it gets is defined by special context state.

Because the attribute is defined by context state, it is fixed within a single draw call. So the attribute does not change. Each attribute index has a separate value.

The initial value for these is the floating-point vector (0.0, 0.0, 0.0, 1.0)​. Just as with array attribute values, non-array values are typed to float, integral, or double-precision (where available).

To change the value, you use a function of this form:

 void glVertexAttrib*​(GLuint index​, Type values​);
void glVertexAttribN*​(GLuint index​, Type values​);
void glVertexAttribP*​(GLuint index​, GLenum type​, GLboolean normalized​, Type values​);
void glVertexAttribI*​(GLuint index​, Type values​);
void glVertexAttribL*​(GLuint index​, Type values​);


The * is the type descriptor, using the traditional OpenGL syntax. The index​ is the attribute index to set. The Type​ is whatever type is appropriate for the * type specifier. If you set fewer than 4 of the values in the attribute, the rest will be filled in from (0, 0, 0, 1), just as with array attributes.

The N​ versions of these functions provide values that are normalized, either signed or unsigned as per the function's type. The unadorned versions always assume integer values are not normalized. The P​ versions are for packed integer types, and they can be normalized or not. All three of these variants provide float attribute data, so they convert integers to floats.

To provide non-array integral values for integral attributes, use the I​ versions. For double-precision attributes (using the same rules for attribute sizes as double-precision arrays), use L​.

Note that the fixed attribute values are not part of the VAO state; they are context state. Changes to them do not affect the VAO.

Note: It is not recommended that you use these. The performance characteristics of using fixed attribute data are unknown, and it is not a high-priority case that OpenGL driver developers optimize for. They might be faster than uniforms, or they might not.

Making a vertex stream in OpenGL requires using two kinds of objects: Vertex Array Objects (VAO) and Vertex Buffer Objects (VBO). VBOs store the actual vertex and index arrays, while VAOs store the settings for interpreting the data in those arrays.

The first step is to create a VAO and bind it to the context.

## Vertex Format

Each attribute in the VAO has its own binding point with its own parameters. In the Vertex Array Objects article, we used this pseudocode to explain the state that goes into an attribute binding:

struct VertexAttribute
{
bool             bIsEnabled          = GL_FALSE;
int              iSize               = 4; //This is the number of elements in this attribute, 1-4.
unsigned int     iStride             = 0;
VertexAttribType eType               = GL_FLOAT;
bool             bIsNormalized       = GL_FALSE;
bool             bIsIntegral         = GL_FALSE;
void *           pBufferObjectOffset = 0;
BufferObject *   pBufferObj          = 0;
};

Recall that this binding data is set by one of the glVertexAttribPointer family of functions.

Similarly to how pixel transfer operations have an internal format (the format of the image data in the texture or framebuffer) and an external format (the format of the image data in client memory or a buffer object), each attribute has an actual format and the format of the data you are passing.

The actual format is defined by the shading language. So the internal attribute format changes depending on which shader you use the VAO with. Every vertex shader has an expected list of attributes. Each attribute has a particular dimensionality and expected type. The type can be float or integral.

For example, if you define an attribute in GLSL as an ivec3​, this means that the dimensionality of the attribute is 3 and the type is integral.

OpenGL is quite flexible in conversions between the data in your buffer objects and what the attribute expects. OpenGL can convert any data from your attribute format to the destination attribute format as long as it has the correct type. If the shader attribute is of integral type, you must use glVertexAttribIPointer to attach the attribute data. The same goes for floating-point attributes and glVertexAttribPointer. If you use a double-precision attribute (dvec3​), then you must use glVertexAttribLPointer.

Other than this, OpenGL will make conversions as necessary. Normalized integers are converted on the expected range. Non-normalized integers given to floating-point attributes are converted to floating point values.

If there is a mismatch between the incoming dimensionality and the attribute's dimensionality, OpenGL resolves it. If the vertex data has more components than the shader attribute uses, the extra components are ignored. If the vertex data specifies fewer components than the shader attribute uses, the unfilled components are 0, except for the 4th component, which is set to 1.

The iSize​ value is the dimensionality of the attribute you are sending. It is an integer from 1-4.

Note: The size value you send with glVertexAttribPointer can also be GL_BGRA, which is equivalent to 4, but specifies that the order of the components in the buffer object is switched from the expected RGBA. This value is only allowed with glVertexAttribPointer. It requires that you use normalization and only use unsigned byte types, and it will force both of these. This feature is not very flexible, because it was primarily introduced for Direct3D compatibility reasons; you probably are better off not using it unless you're trying to be data compatible with D3D.

The bIsIntegral​ value specifies whether the value is an integer or float. If it is false, then it is a floating-point value; if it is true, then it is integral. As previously mentioned, this must match with the attribute definition in the shader. This value is set based on which of the attribute functions you use: "IPointer" sets it to true, the regular "Pointer" version sets it to false.

Compatibility Note: In a compatibility profile, you are required to use attribute index 0 for some attribute. If you do not, then rendering will not take place. Both the shader and the vertex array state must have an attribute index 0.

### Data Conversion

The eType​ is the type of the data in the buffer object, not the attribute's type. This value and bIsNormalized​ define how the value in the buffer is interpreted and converted into the attribute's type in the shader.

The possible values of eType​ are broken into categories: integer and floating-point.

• Integer types allow signed or unsigned forms of bytes, shorts (16-bit), and int (32-bit).
• Floating-point types are float (32-bit), double (64-bit), or half (16-bit half float values).

Floating-point buffer types convert to floating-point attribute types directly. Integer types can be converted to floating point types in one of two ways: either directly (convert the number into a float value) or via normalization.

Normalization, activated by setting bIsNormalized​ to true, causes unsigned integers to be converted into floating point values on the range [0, 1], and signed integers are converted to [-1, 1].

• An integer value of 0 gives a normalized float value of 0.0.
• To get a float value of 1.0, you pass in the largest possible integer for that size (unsigned bytes == 255, signed bytes == 127, etc).
• To get -1.0, you pass the most negative integer value (signed bytes == -128, signed shorts == -32768, etc).
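These conversions can be sketched for byte types (the helper names are hypothetical; OpenGL performs the conversion for you):

```cpp
#include <algorithm>

// Signed-byte normalization as described above: the positive range maps
// onto [0, 1], the negative range onto [-1, 0], with the most negative
// value clamped so that -128 also yields exactly -1.0.
float normalizeSignedByte(signed char c)
{
    return std::max(c / 127.0f, -1.0f);
}

// Unsigned-byte normalization: [0, 255] maps onto [0, 1].
float normalizeUnsignedByte(unsigned char c)
{
    return c / 255.0f;
}
```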

### Stride and Interleaving

The placement of vertex attribute data within the buffer is governed by the eType​, iSize​, iStride​, and pBufferObjectOffset​ fields. The type and size determine how big an individual value for the attribute is; 2 unsigned bytes take up, well, two bytes. The iStride​ specifies how many bytes it takes to get from one element to the next.

You might think that this would simply be the size of the attribute data: sizeof(type) * iSize​. However, OpenGL is very flexible in how you can position data in the buffer. The stride field allows you to interleave vertex data within a buffer. That is, you can put the data for multiple attributes in the same buffer.

Let's say that your vertex data is built by a hard-coded C structure:

struct Vertex
{
float x, y, z;
unsigned char r, g, b, a;
float u, v;
};

You would like your vertex data in your vertex buffer to simply be an array of such structures. So when you bind your buffer and fill it with data, you give it a Vertex*​ of some particular length.

The position attribute would have a pBufferObjectOffset​ value of 0, since it is the first entry. Its size is 3 and its type is GL_FLOAT. This means that the position attribute of the first vertex in the array comes from the 0'th byte in the buffer, and OpenGL will pull 12 bytes (sizeof(float) * 3) to get that data. However, you need OpenGL to jump forward 24 bytes, the size of Vertex​, to get to the next position. You do this by setting iStride​ to 24.

The color attribute would have a pBufferObjectOffset​ value of 12, since it starts 3 floats into the struct. The size is 4 and the type is GL_UNSIGNED_BYTE (and it is normalized). The stride is also 24. Remember that stride is the number of bytes from one vertex value to the next. Each attribute has the same stride: 24.

You can compute the stride of any structure that you want to use in a buffer object with the C sizeof​ feature. If you want to compute the offset of a component within a struct, for the pBufferObjectOffset​, you may use the macro offsetof​ (included in stddef.h​ header file). This would be done as follows: offsetof(Vertex, r)​.
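For the Vertex​ struct above, on a typical platform with 4-byte floats, sizeof​ and offsetof​ give exactly the numbers used in the text:

```cpp
#include <cstddef>

// The 24-byte interleaved vertex from the text: three position floats,
// four color bytes, two texture-coordinate floats.
struct Vertex
{
    float x, y, z;          // position at offset 0
    unsigned char r, g, b, a; // color at offset 12
    float u, v;             // texcoords at offset 16
};
// The shared stride for all three attributes is sizeof(Vertex) == 24.
```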

A stride value of 0 has a special meaning: it means the attribute data is tightly packed. So a stride of zero always means that the stride is sizeof(eType) * iSize​.

It is very possible, and usually desired for performance reasons, to use a single buffer object for all attributes of a mesh's data. Indeed, thanks to the buffer object offset, it is possible to put the data for many meshes in the same buffer object. You simply use glBufferSubData or glMapBufferRange to update different sections of it, and then set the offset when you are building your VAO.


## Index Data

If you intend to use indices for your vertex arrays, you will need an index list. This is simply a buffer object that contains a list of integer values. It is best that this not be the same buffer object you use for attributes.

To attach this to the VAO, simply call glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bufferObj);​ with your buffer object. Once attached, any indexed drawing calls will pull from this buffer. Attempting to use indexed drawing calls without a buffer bound to GL_ELEMENT_ARRAY_BUFFER is an error.

## Finished VAO

At this point, your VAO is finished. You can render with it immediately, or you can bind another VAO and build a new one (or render with it). It's probably a good idea not to change a VAO once you've created it.

## Drawing

Once the VAO has been properly set up, the arrays of vertex data can be rendered as a Primitive. OpenGL has numerous different options for rendering vertex data.