Hi,
I’ve just downloaded the latest NVIDIA drivers for Linux (IA32), and to say the least I’m very impressed that this release supports full OpenGL 1.4, and even more impressed that ARB_vertex_buffer_object is already supported.
But I’ve run into a problem using ARB VBOs (on a GeForce4 MX 440, if that matters), whereas the “traditional” vertex arrays render everything perfectly.
Here is the relevant piece of code:
void render(const Mesh& mesh)
{
    static bool first = true;
    static GLubyte* data;

    if (first)
    {
        // Pack vertices first, then colors, into one contiguous block.
        unsigned int vertex_offset = 0;
        unsigned int vertex_bytes  = 4 * sizeof(GLfloat) * mesh.getVertexCount();
        unsigned int color_offset  = vertex_offset + vertex_bytes;
        unsigned int color_bytes   = 4 * sizeof(GLfloat) * mesh.getColorCount();
        unsigned int last_offset   = color_offset + color_bytes;

        data = (GLubyte*)malloc(last_offset);
        if (mesh.getVertexCount() > 0)
            memcpy(data + vertex_offset, mesh.getVertex(0).ptr(), vertex_bytes);
        if (mesh.getColorCount() > 0)
            memcpy(data + color_offset, mesh.getColor(0).ptr(), color_bytes);

#ifdef USE_BUFFER_OBJECTS
        // Create buffer object
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, 1);
        // Initialize data store of buffer object
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, last_offset, data, GL_STATIC_DRAW_ARB);
        free(data);
        data = NULL; // with a buffer object bound, the "pointers" below become byte offsets into it
#endif

        glVertexPointer(4, GL_FLOAT, 0, (void*)(data + vertex_offset));
        glColorPointer(4, GL_FLOAT, 0, (void*)(data + color_offset));

        printf("Buffer object initialized %d bytes\n", last_offset);
        printf("glGetError returned %d\n", glGetError());

        first = false;
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

#if USE_BUFFER_OBJECTS
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 1);
#endif

    glBegin(GL_TRIANGLE_STRIP);
    for (unsigned int i = 0; i < mesh.numPrimitives(); i++)
    {
        for (unsigned int j = 0; j < mesh.vpp(); j++)
        {
            glArrayElement(mesh.getPrimitive(i)[j]);
        }
    }
    glEnd();

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
If USE_BUFFER_OBJECTS is not defined, everything is rendered correctly at 120 fps; if it is defined, nothing appears and the frame rate drops to 18 fps.
I roughly copied and pasted the example from the spec, so assuming the example is right, I can’t really see what’s wrong.
In a first program, I only put the vertex coordinates in a buffer object (colors stayed in client memory, i.e. the reserved buffer object 0), and it worked fine apart from the reduced performance (30 fps instead of 120 fps). Aren’t vertex buffer objects accelerated on GeForce4 MX cards?
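For reference, the setup in that first program looked roughly like this; it is only a sketch, reusing the same Mesh accessors and the buffer name 1 from the code above:

// Vertices in buffer object 1, colors left in client memory (buffer object 0).
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 1);
glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                4 * sizeof(GLfloat) * mesh.getVertexCount(),
                mesh.getVertex(0).ptr(),
                GL_STATIC_DRAW_ARB);
glVertexPointer(4, GL_FLOAT, 0, (void*)0);               // offset 0 into buffer object 1

glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);                 // back to ordinary client-side arrays
glColorPointer(4, GL_FLOAT, 0, mesh.getColor(0).ptr());  // plain pointer into client memory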
The data is 640,000 bytes; I also reduced it to 6,400 bytes, but that didn’t help.
I don’t think it can be an alignment problem, since I’m working on a Pentium and every data type is GL_FLOAT (for vertices as well as colors).
I do call glGetError, and it obviously returned GL_NO_ERROR, otherwise I wouldn’t be posting this thread!
Thanks in advance