nVidia VBO trouble

Hi

I’m developing the new renderer for the open source 3D engine Crystal Space, and since we added VBO support we’ve had some trouble.

The code we’ve written works perfectly on ATI cards, but we’ve got problems on nVidia, on both Linux and Windows, with different drivers. The textures seem to be trashed when rendering (the actual textures, not the texture coordinates). Adding a “glFlush” directly after our “glDrawElements” helps (although it lowers performance). Running through gltrace (which only wraps the calls and outputs a log to a file) also helps. Could it be some strange timing issue?

The renderer is a bit complex, so I can’t really paste a real code snippet, but I’ll post a snippet from gltrace showing which GL calls are made per object:

glBindBufferARB(GL_ARRAY_BUFFER_ARB,1686)
glVertexPointer(3,GL_FLOAT,0,0x0)
glEnableClientState(GL_VERTEX_ARRAY)
glBindBufferARB(GL_ARRAY_BUFFER_ARB,1687)
glTexCoordPointer(2,GL_FLOAT,0,0x0)
glEnableClientState(GL_TEXTURE_COORD_ARRAY)
glColorMask(TRUE,TRUE,TRUE,TRUE)
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB,1688)
glDrawElements(GL_TRIANGLES,18,GL_UNSIGNED_INT,0x0)
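
In code terms the per-object path boils down to roughly this (just a sketch with placeholder handle names and triangle count, not our actual renderer code; the glFlush at the end is the workaround mentioned above):

/* Hypothetical buffer handles standing in for the IDs seen in the trace
   (1686/1687/1688); placeholders, not the real Crystal Space code. */
GLuint vertex_buf, texcoord_buf, index_buf;

void draw_object(GLsizei index_count)
{
    /* Vertex positions: 3 floats per vertex, tightly packed, offset 0 */
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertex_buf);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);
    glEnableClientState(GL_VERTEX_ARRAY);

    /* Texture coordinates: 2 floats per vertex */
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, texcoord_buf);
    glTexCoordPointer(2, GL_FLOAT, 0, (void *)0);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    /* Indices come from a bound element array buffer, so the last
       argument to glDrawElements is a byte offset, not a pointer */
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_buf);
    glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, (void *)0);

    /* Workaround: forcing a flush here hides the texture corruption
       on nVidia, at a performance cost */
    glFlush();
}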

Everything about the renderer is strictly single-threaded, with the exception of the CPU/driver/GPU parallelism (or whatever you call it).

It feels like we’ve tried just about everything. Anyone got any clues? Could it be a driver bug, since it works fine on ATI?

Thanks in advance

How are you setting up your buffer 1688? Where are you specifying the buffer data?

Ah, sorry, I forgot that… It’s filled once (it’s a static buffer). I’ve tried using both Map/Unmap and BufferData, with no difference. Only standard types (float for vertex data, unsigned int for indices).
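
Roughly like this, if it helps (a sketch of the two upload paths I tried; index_data and index_count are placeholders, not the actual engine code):

/* Static index buffer setup; index_data / index_count are hypothetical. */
GLuint index_buf;
glGenBuffersARB(1, &index_buf);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_buf);

/* Path 1: upload everything in one call */
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB,
                index_count * sizeof(unsigned int),
                index_data, GL_STATIC_DRAW_ARB);

/* Path 2: allocate storage, then fill it through a mapped pointer */
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB,
                index_count * sizeof(unsigned int),
                NULL, GL_STATIC_DRAW_ARB);
void *ptr = glMapBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, GL_WRITE_ONLY_ARB);
memcpy(ptr, index_data, index_count * sizeof(unsigned int));
glUnmapBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB);

Both give the same result on nVidia.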