VBO performance problem (driver bug?) on GeForce 4

I have a performance problem with VBOs on an NVIDIA GeForce4 Ti 4600, driver version 52.16.
To me it looks like a driver problem, but I am not sure.

I am rendering trees for my master’s thesis. I use a level-of-detail scheme with discrete levels for the trees. The problem is that the frame rate drops dramatically after a few level-of-detail changes.

When I am going to render a particular tree in a particular level-of-detail I do the following:

  1. I allocate two large buffers, one for the indices and one for the vertices of the whole tree:

// Create buffer names for the index and vertex data
gzGenBuffers(1, &indexBuf);
gzGenBuffers(1, &vertexBuf);

gzBindBuffer(GZ_ELEMENT_ARRAY_BUFFER, indexBuf);
gzBindBuffer(GZ_ARRAY_BUFFER, vertexBuf);

// Allocate storage only (NULL data pointer); the buffers are filled via mapping in step 2
gzBufferData(GZ_ELEMENT_ARRAY_BUFFER, indexOffset * sizeof(unsigned), NULL, GZ_STATIC_DRAW);
gzBufferData(GZ_ARRAY_BUFFER, vertexOffset * sizeof(VBOVertex), NULL, GZ_STATIC_DRAW);

  2. Then I fill the buffers:

indices = static_cast<unsigned *>(gzMapBuffer(GZ_ELEMENT_ARRAY_BUFFER, GZ_WRITE_ONLY));
vertices = static_cast<VBOVertex *>(gzMapBuffer(GZ_ARRAY_BUFFER, GZ_WRITE_ONLY));

// Filling both buffers with data…

// Unmap and unbind buffers…
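For completeness, the unmap and unbind step in plain OpenGL terms; a minimal sketch, assuming the gz calls wrap the GL entry points one-to-one:

// Unmap: returns false if the contents were lost while mapped
// (e.g. during a screen-mode switch) and must be written again
glUnmapBuffer(GL_ELEMENT_ARRAY_BUFFER);
glUnmapBuffer(GL_ARRAY_BUFFER);

// Unbind so that ordinary (non-VBO) vertex arrays work again
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);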

  3. Then I use the buffers for drawing multiple frames:

gzBindBuffer(GZ_ELEMENT_ARRAY_BUFFER, indexBuf);
gzBindBuffer(GZ_ARRAY_BUFFER, vertexBuf);

// Interleaved texcoord/normal/position layout, sourced from the bound vertex VBO (offset 0)
gzInterleavedArrays(GZ_T2F_N3F_V3F, 0, 0);

// INDEX2PTR converts an element index into the byte-offset "pointer" expected when an index VBO is bound
gzDrawElements(GZ_TRIANGLE_STRIP, leafIndexOffset - stemIndexOffsets[numLevels - 1], GZ_UNSIGNED_INT, INDEX2PTR(stemIndexOffsets[numLevels - 1]));

// ...and some more DrawElements calls for the other parts of the tree (about 10 calls per tree in total)…

// unbind buffers…

When I change the level-of-detail, I deallocate the buffers:

gzDeleteBuffers(1, &indexBuf);
gzDeleteBuffers(1, &vertexBuf);

and create a new tree by starting at step 1 above.

This works fine at first, but after changing levels a few times the performance is much lower.
For some reason I always get new buffer names, never the names of the deallocated buffers.
It seems like the driver doesn’t free the buffers completely, so after a while all accelerated memory is exhausted. Could it be a driver problem, or am I doing something wrong?

I have checked that the DeleteBuffers calls are actually executed.

I have also timed it: when it is slow, it is the DrawElements call that takes all the extra time, so it is the actual drawing that is the problem.
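For reference, a minimal sketch of how the draw call can be bracketed for timing (count and offsetPtr are placeholders for the real arguments above); glFinish forces the driver to complete all queued work, so the CPU timer measures the actual draw rather than just the command submission:

#include <ctime>

glFinish();                       // drain previously queued GL commands
clock_t t0 = clock();

gzDrawElements(GZ_TRIANGLE_STRIP, count, GZ_UNSIGNED_INT, offsetPtr);

glFinish();                       // wait for the draw itself to finish
clock_t t1 = clock();
double ms = 1000.0 * double(t1 - t0) / CLOCKS_PER_SEC;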

I have a demo that you can download and try: http://www.tooltech-software.com/downloads/gizmo3d/binaries/win32/tree_vbo.zip

It starts at level-of-detail 0.
Increase the level-of-detail by pressing “+” on the numeric keypad and decrease it by pressing “-”.
I do the following to see the problem:

  1. Press “+” 4 times to change to level-of-detail 4. Then I get 12 fps.
  2. Press “-” to change to level-of-detail 3.
  3. Press “+” to change to level-of-detail 4.
  4. Press “-” to change to level-of-detail 3.
  5. Press “+” to change to level-of-detail 4. Now I just get approximately 2 fps!

(You can press space to stop the rotation, navigate with the cursor keys, and use “a” and “z” to move forward and backward.)

In the console window you can see the total number of indices and vertices for one tree and the names of the buffers. The number of indices and vertices is constant for a particular level, but the buffer names keep changing.

Does anybody know what to do to fix the problem, or is it a bug in the drivers?

It sounds like a driver bug to me, at least.
But why are you deallocating the buffers? Try reusing the same buffers multiple times.
BTW, what API is that ‘gz’?

OK, then I’ll try to use the same buffer names. But I have to resize them, because the different detail levels use different amounts of geometry. Or should I allocate one big buffer that the whole program uses (and reuses) different parts of, doing the memory management myself?
gz is the rendering abstraction layer of the scene graph Gizmo3D (http://www.tooltech-software.com).

Thank you, that seems to solve the problem!
Now I just reuse the first two buffer names I get, and the fps stays high.
So when I need more buffers for different trees, I don’t call GenBuffers and DeleteBuffers as trees appear and disappear; instead I get a couple of buffer names with GenBuffers once and reuse them, as in the sketch below.
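In code, the reuse scheme looks roughly like this (a sketch; rebuildTree and its parameters are made-up names). Calling BufferData on an already-existing name just reallocates its storage, so the buffers can be resized without any Gen/DeleteBuffers per level change:

// Create the two buffer names once, at startup
gzGenBuffers(1, &indexBuf);
gzGenBuffers(1, &vertexBuf);

// On every level-of-detail change: rebind the same names and let
// BufferData reallocate the storage to the new size; no DeleteBuffers needed
void rebuildTree(unsigned numIndices, unsigned numVertices)
{
    gzBindBuffer(GZ_ELEMENT_ARRAY_BUFFER, indexBuf);
    gzBindBuffer(GZ_ARRAY_BUFFER, vertexBuf);

    gzBufferData(GZ_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned), NULL, GZ_STATIC_DRAW);
    gzBufferData(GZ_ARRAY_BUFFER, numVertices * sizeof(VBOVertex), NULL, GZ_STATIC_DRAW);

    // map, fill and unmap exactly as before…
}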
It does indeed seem like a driver bug if doing the buffer name management myself solves the problem!