Using a VBO with glInterleavedArrays

I am trying to use glInterleavedArrays with a Vertex Buffer and can’t seem to get it to work…

I define my vertex structure as follows:

struct VBOVertex 
{
    float tu;
    float tv;
    float red;
    float green;
    float blue;
    float alpha;
    float nx;
    float ny;
    float nz;
    float vx;
    float vy;
    float vz;
};

I define my vertex data as follows:

//  GL_T2F_C4F_N3F_V3F
VBOVertex g_pCubeVertices[8] = { {1.0, 1.0,    1.0, 1.0, 1.0, 0.0,     0.0, 0.0, 1.0,      1.0,  1.0,  1.0},
                                 {0.0, 1.0,    1.0, 1.0, 0.0, 0.0,     0.0, 0.0, 1.0,     -1.0,  1.0,  1.0},
                                 {0.0, 0.0,    1.0, 0.0, 0.0, 0.0,     0.0, 0.0, 1.0,     -1.0, -1.0,  1.0},
                                 {1.0, 0.0,    1.0, 0.0, 1.0, 0.0,     0.0, 0.0, 1.0,      1.0, -1.0,  1.0},
                                 {0.0, 0.0,    0.0, 0.0, 1.0, 0.0,     1.0, 0.0, 0.0,      1.0, -1.0, -1.0},
                                 {0.0, 1.0,    0.0, 1.0, 1.0, 0.0,     1.0, 0.0, 0.0,      1.0,  1.0, -1.0},
                                 {1.0, 1.0,    0.0, 1.0, 0.0, 0.0,     0.0, 1.0, 0.0,     -1.0,  1.0, -1.0},
                                 {0.0, 0.0,    0.0, 0.0, 0.0, 0.0,    -1.0, 0.0, 0.0,     -1.0, -1.0, -1.0}};

// indices
GLubyte g_pCubeIndicesTriangles[] = {   0,1,2,
                                        0,2,3,
                                        5,0,3,
                                        5,3,4,
                                        6,5,4,
                                        6,4,7,
                                        1,6,7,
                                        1,7,2,
                                        0,6,1,
                                        0,5,6,
                                        3,4,7,
                                        3,7,2};

I initialize my VBO as follows:

glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);

GLsizeiptr sizeOfVBO  = sizeof(g_pCubeVertices) * 8; 
glBufferData(GL_ARRAY_BUFFER, sizeOfVBO, 0, GL_STATIC_DRAW_ARB);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeOfVBO, g_pCubeVertices);                             

And this is my render-loop code:

glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

glMatrixMode( GL_MODELVIEW );
glLoadIdentity();

cameraAngleX += 0.01;
cameraAngleY += 0.01;

glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);

    glPushMatrix();
        glLoadIdentity();

        glTranslatef(0, 0, cameraDistance);
        glRotatef(cameraAngleX, 1, 0, 0);
        glRotatef(cameraAngleY, 0, 1, 0);

            glInterleavedArrays( GL_T2F_C4F_N3F_V3F, 0, g_pCubeVertices );
            glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, g_pCubeIndicesTriangles);

    glPopMatrix();

glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);

glutSwapBuffers();

I do not see anything when rendering. What am I doing wrong? Is it possible to use interleaved array data with a VBO? Is there a better, faster way?

When I separate my data into separate arrays and render using glDrawElements and a VBO, it works.

Ed

First of all, be aware that this function enables client array states (like glEnableClientState(GL_VERTEX_ARRAY)), and you have to disable them yourself after the drawing is done.

Then, when you specify the VBO size, you multiply 8 by sizeof(g_pCubeVertices), but that is already the size of the whole 8-element array, so you allocate far more space than you need. The correct factor is sizeof(VBOVertex).
Using bytes as indices is not the fastest way.
Remember to reset the VBO binding after drawing by calling glBindBuffer(…, 0).

And the last and most important point: once you have called glBindBufferARB(), the vertex pointer is interpreted as an offset into that buffer's memory, so you have to call

glInterleavedArrays(GL_T2F_C4F_N3F_V3F, 0, NULL );

Hope that helps.
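Putting those points together, here is a minimal sketch of what the corrected upload and draw could look like (reusing the VBOVertex struct and the arrays from the original post; glBufferData can take the data directly, so the separate glBufferSubData call is not needed):

// Upload: size the buffer from the struct, not from sizeof(array) * 8.
glGenBuffers(1, &vboId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(VBOVertex) * 8, g_pCubeVertices, GL_STATIC_DRAW);

// Draw: with the VBO bound, the "pointer" is an offset into the buffer, so pass NULL (offset 0).
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glInterleavedArrays(GL_T2F_C4F_N3F_V3F, 0, NULL);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, g_pCubeIndicesTriangles);

// glInterleavedArrays enabled these client states; disable them again yourself.
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

// Reset the buffer binding when done.
glBindBuffer(GL_ARRAY_BUFFER, 0);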

Just out of curiosity, what is the best type to use for indices?

Originally posted by thinks:
Just out of curiosity, what is the best type to use for indices?
The smallest type you can fit your indices into.

Regards
elFarto

Then wouldn’t that be bytes?

CD

Originally posted by elFarto:
Originally posted by thinks:
Just out of curiosity, what is the best type to use for indices?
The smallest type you can fit your indices into.

Regards
elFarto

Originally posted by CelticDaddio:
Then wouldn’t that be bytes?

CD
If you’ve got less than 257 vertices, then yes.

Regards
elFarto

I’ve just tested interleaved vs. non-interleaved arrays on a GeForce 7800 GT - equal speed.
It seems modern GPUs can cache 4 independent arrays as effectively as one interleaved array, and since 4 attributes is as far as interleaved arrays go, interleaved arrays seem to offer no advantage today.

My test case was 12288 vertices with GL_T2F_C4F_N3F_V3F format.
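For anyone curious what the two paths being compared look like in code, here is a rough sketch of the same T2F_C4F_N3F_V3F data specified both ways with manual gl*Pointer calls (the buffer names are made up for illustration, and the matching glEnableClientState calls are assumed):

// Interleaved: one VBO, attributes addressed with a stride and byte offsets
// (hypothetical buffer name interleavedVbo, layout matching VBOVertex).
GLsizei stride = sizeof(VBOVertex);   // 12 floats per vertex
glBindBuffer(GL_ARRAY_BUFFER, interleavedVbo);
glTexCoordPointer(2, GL_FLOAT, stride, (void*)(0 * sizeof(float)));
glColorPointer(4, GL_FLOAT, stride, (void*)(2 * sizeof(float)));
glNormalPointer(GL_FLOAT, stride, (void*)(6 * sizeof(float)));
glVertexPointer(3, GL_FLOAT, stride, (void*)(9 * sizeof(float)));

// Non-interleaved: one tightly packed VBO per attribute (hypothetical names).
glBindBuffer(GL_ARRAY_BUFFER, texCoordVbo);
glTexCoordPointer(2, GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
glColorPointer(4, GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
glNormalPointer(GL_FLOAT, 0, NULL);
glBindBuffer(GL_ARRAY_BUFFER, vertexVbo);
glVertexPointer(3, GL_FLOAT, 0, NULL);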

I read in some NVIDIA performance docs that if the size of all vertex attributes is less than 256 bytes, the GPU can accelerate that vertex format.

edit: I need to clarify this… the condition is that all of a vertex’s attributes fit within a 256-byte block.

Originally posted by CelticDaddio:
Then wouldn’t that be bytes?

CD
I believe most hardware does not support byte indices natively, causing the driver to convert them, which is bad.

Unsigned short and unsigned int are the common types used for indices and are natively supported on all hardware that I know of.

One of the problems with bytes is that you can only address 256 vertices, which means a maximum of 450 triangles (for a 16x16-vertex terrain patch, for example), and you generally want to supply more than 450 triangles in a single glDraw* call.
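For illustration, a small sketch of drawing with 16-bit indices stored in their own buffer object (the names are hypothetical; with a GL_ELEMENT_ARRAY_BUFFER bound, the last argument to glDrawElements is a byte offset rather than a pointer):

// Hypothetical example: 16-bit indices for two triangles, kept in an index buffer object.
GLushort quadIndices[] = { 0, 1, 2,   0, 2, 3 };

GLuint iboId;
glGenBuffers(1, &iboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);

// With the element buffer bound, the index "pointer" is an offset into it.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, NULL);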

In my experience, 300 triangles is the cross-over point on NVIDIA where glDrawRangeElements overhead becomes a non-issue, in case that serves as any sort of useful yardstick.

As far as I know, ATI drivers have higher overhead, so 450 triangles is about the bare minimum you want to draw at once.

I am hopeful that the OpenGL Longs Peak overhead reductions will make smaller meshes useful, as right now the overhead is significant (though not as significant as in Direct3D 9!).