VBOs slow and crashing...

I’ve just added VBOs to my display code and am having problems. It worked fine with vertex arrays, which were a little faster than immediate mode. However, the VBOs are very slow, and they crash when I add in textures…

Here’s the rendering code…


typedef GLuint QUADIX[4];   // matches the GL_UNSIGNED_INT indices used below
QUADIX quads[20];
GLuint vbo_v, vbo_n, vbo_t, vbo_i;

if (luse_vbo) {
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_v);
    glVertexPointer(3, GL_DOUBLE, 0, 0);
    if (vbo_n > 0) {
        glEnableClientState(GL_NORMAL_ARRAY);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_n);
        glNormalPointer(GL_DOUBLE, 0, 0);
    }
    // The textures are making this crash...
    //if (vbo_t > 0) {
    //    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    //    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_t);
    //    glTexCoordPointer(3, GL_DOUBLE, 0, 0);
    //}
}
else {
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_DOUBLE, 0, (double *)vertex_array);
    if (normals_count > 0) {
        glEnableClientState(GL_NORMAL_ARRAY);
        glNormalPointer(GL_DOUBLE, 0, (double *)normals_array);
    }
    if (texcoords_count > 0) {
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(3, GL_DOUBLE, 0, (double *)texcoords_array);
    }
}

glDrawElements(GL_QUADS, quad_count * 4, GL_UNSIGNED_INT, quads);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
if (luse_vbo) {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
}


and the initialisation code…


// VBO vertices...
if (vertex_count > 0) {
    glGenBuffersARB(1, &vbo_v);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_v);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertex_count * sizeof(POINT3),
        vertex_array, GL_STATIC_DRAW);
}

// VBO normals...
if (normals_count > 0) {
    glGenBuffersARB(1, &vbo_n);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_n);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, normals_count * sizeof(POINT3),
        normals_array, GL_STATIC_DRAW);
}

// VBO texture coords...
if (texcoords_count > 0) {
    glGenBuffersARB(1, &vbo_t);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_t);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, texcoords_count * sizeof(POINT2),
        texcoords_array, GL_STATIC_DRAW);
}

// VBO indices...
if (index_count > 0) {
    glGenBuffersARB(1, &vbo_i);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, vbo_i);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_count * sizeof(QUADIX),
        index_array, GL_STATIC_DRAW);
}

// Unbind the buffers so that plain vertex array code still works later.
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);


The VBO code crashes during rendering, on the call to glDrawElements(). The non-VBO code is fine, and the VBOs are still initialised even when the VBO rendering path is not used. At this stage I am also uploading the indices into a buffer, but I’m not using it at render time.

  • I am not using display lists.
  • The vertex, normals and texture arrays ARE the same size. They come from an OBJ file, but I reprocess them into a 1-to-1 correspondence (roughly as in the sketch below).
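
Roughly, the reprocessing looks like this (a simplified sketch; the obj_* names are placeholders for my loader’s data, and I’m assuming POINT3/POINT2 are structs):

// Expand the OBJ per-attribute indices so that one index selects the
// position, normal and texcoord together (use memcpy if POINT3 is an array type).
for (int f = 0; f < quad_count; f++) {
    for (int c = 0; c < 4; c++) {
        int out = f * 4 + c;
        vertex_array[out]    = obj_positions[obj_faces[f].v[c]];
        normals_array[out]   = obj_normals[obj_faces[f].n[c]];
        texcoords_array[out] = obj_texcoords[obj_faces[f].t[c]];
        quads[f][c]          = out;   // indices become trivially 1-to-1
    }
}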

I can’t see what the VBO code is doing differently from the vertex array code. I must be misunderstanding something or leaving something out. Does anyone know what it is?

Edit for more info: the VBO code always crashes on index_array[10]. My arrays are all dynamically allocated, and I have checked that they are big enough. Remember, everything works fine when using plain vertex arrays.

glTexCoordPointer(3, GL_DOUBLE, 0, 0);

This is almost certainly why it crashes: you tell glTexCoordPointer to read 3 coordinates per vertex, whereas your VBO was only allocated for 2 coordinates per vertex (your POINT2 structure), so the driver reads past the end of the buffer. You should try 2 instead of 3.
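
In other words, the component count passed to the pointer call has to match what was actually uploaded. Based on your posted init code, something like:

// The buffer was filled with texcoords_count * sizeof(POINT2) bytes,
// i.e. 2 doubles per vertex, so ask for 2 components here as well:
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_t);
glTexCoordPointer(2, GL_DOUBLE, 0, 0);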

Performance-wise, you should not use GL_DOUBLE but GL_FLOAT; you’ll have a hard time making anything fast with doubles in OpenGL. You should also try to use a single interleaved VBO with all the vertex attributes stored in it, as in the sketch below.
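
A minimal sketch of that, with a hypothetical interleaved Vertex struct (the layout and names are made up for illustration; offsetof needs <stddef.h>):

typedef struct {
    GLfloat pos[3];
    GLfloat normal[3];
    GLfloat uv[2];
} Vertex;   // hypothetical layout; adapt to your data

// assume verts points at vertex_count Vertex entries built from your OBJ data
GLuint vbo_all;
glGenBuffersARB(1, &vbo_all);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_all);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertex_count * sizeof(Vertex),
    verts, GL_STATIC_DRAW);

// One buffer, three pointers; the stride is sizeof(Vertex) for all of them.
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, pos));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, normal));
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void *)offsetof(Vertex, uv));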

Hope this helps, cheers,

Nicolas.

GL_DOUBLE is not natively supported by most hardware (if any). The driver needs to convert the values to floats on the fly, which is very inefficient when VBOs are used. Use GL_FLOAT instead.
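
If your loader hands you doubles, a one-off conversion at load time is cheap. A sketch, reusing the names from the posts above (the verts_f scratch buffer is hypothetical; malloc/free need <stdlib.h>):

// Convert 3 doubles per vertex to 3 floats per vertex before upload.
GLfloat *verts_f = (GLfloat *)malloc(vertex_count * 3 * sizeof(GLfloat));
for (int i = 0; i < vertex_count * 3; i++)
    verts_f[i] = (GLfloat)((const double *)vertex_array)[i];

glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_v);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertex_count * 3 * sizeof(GLfloat),
    verts_f, GL_STATIC_DRAW);
free(verts_f);

// ...and at draw time:
glVertexPointer(3, GL_FLOAT, 0, 0);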

And because advice holds best when said three times: don’t use GL_DOUBLE, ever!

:smiley:

Actually, the 3 is there because it is an interleaved array: my POINT2 structure has an extra variable in it. For some reason I couldn’t get the stride parameter working properly (tried 0, 1, 2, 4, 8, 16, 32; none worked), but specifying the texture coordinates as 3 doubles got around the problem. As I said, it works with vertex arrays, just not with VBOs.

I will look at using floats for best performance, but for now, I’m just trying to get it working.

Since GL_DOUBLE is evidently almost never used, problematic driver behaviour in your case wouldn’t be strange. So really, the first thing you should do is switch to GL_FLOAT.
Also, I wouldn’t rule out cards/drivers having problems with denormalised doubles/floats (which your extra 8 bytes will usually be interpreted as, unless a specific bit happens to be set). So, just do this to set the vertex pointer:
glVertexPointer(2, GL_FLOAT, 16, 0);

Stride is the number of bytes between elements, in your case 3 * sizeof(double) => 24.

Ah yes, glVertexPointer(2, GL_FLOAT, 24, 0);
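
If you keep the data as doubles for now, the same fix applies to the padded POINT2 texcoords, and sizeof lets the compiler fill in the 24 (assuming POINT2 really is two used doubles plus one unused one):

// stride == sizeof(POINT2) == 24 when POINT2 holds three doubles
glTexCoordPointer(2, GL_DOUBLE, sizeof(POINT2), 0);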